Here's a scenario playing out at nonprofits right now: a grant writer uses ChatGPT to draft a needs statement, copies it into the proposal with light edits, and submits it to a foundation that has just adopted an AI disclosure requirement. Nobody told the writer about the disclosure policy. Nobody at the organization has discussed what constitutes "AI-generated" content. There's no documentation of what was drafted by AI versus what was written by humans. And now the organization's credibility is at risk — not because it used AI, but because it used AI without governance.

Over 90% of nonprofits now use AI tools in some operational capacity. In grant writing specifically, adoption has exploded: ChatGPT for drafting, Grammarly's AI for editing, specialized tools for research and data analysis. Yet fewer than 15% of nonprofits have a formal policy governing how AI is used in their grants work. The gap between adoption and governance is a credibility time bomb.

This guide provides the framework and a customizable template for building an AI grants policy that protects your organization, empowers your staff, and positions you as a responsible AI adopter in a rapidly evolving funder landscape.

Why Every Grants Team Needs an AI Policy Now

Three converging forces make this urgent rather than merely important.

90%+

Of nonprofits now use AI tools in some capacity, but fewer than 15% have formal policies, a governance gap that widens with each new tool a team adopts.

Funders Are Asking

NIH now requires disclosure of AI involvement in grant applications and does not consider AI-generated content to be an applicant's original ideas. NSF has implemented disclosure requirements. A growing number of private foundations are adding AI-related questions to their applications or updating their guidelines to address AI use. If a funder asks whether AI was involved in your proposal and your honest answer is "we don't know, because we don't track that," you have a problem.

Data Protection Is at Risk

Every time a staff member pastes proposal content into ChatGPT, that data becomes part of a third-party system. Depending on the tool's terms of service, that content might be used for model training, stored indefinitely, or accessible to the tool provider's staff. Grant proposals routinely contain sensitive information: funder relationship details, budget assumptions, beneficiary demographics, organizational vulnerabilities, and strategic plans. Without a policy that defines what can and cannot be entered into AI tools, your organization has no control over this data exposure.

Staff Need Clarity, Not Ambiguity

Your grant writers want to use AI effectively. They also don't want to get their organization in trouble. Without clear guidelines, they're making individual judgment calls on questions that should be organizational decisions: Is it okay to have AI draft a logic model? Should I mention to the program officer that I used AI for research? Can I upload our budget template to an AI tool for formatting help? A policy answers these questions once, for everyone, consistently.

The 7 Components of a Strong AI Grants Policy

Permitted Uses

Define specifically what AI can be used for in your grants work. Be concrete rather than abstract. Good examples: "AI may be used for brainstorming initial approaches to narrative sections, researching funder backgrounds, checking grammar and readability, generating first drafts that will be substantially revised by staff, and analyzing data for needs statements." Specificity prevents both overuse and underuse — staff know exactly where AI fits in their workflow.

Prohibited Uses

Equally important: what AI must never be used for. Examples: "AI must not be used to fabricate statistics, citations, or organizational data. AI must not generate final proposal text without substantial human revision. AI-generated content must not be submitted as original work without review and modification. AI must not be used to misrepresent organizational capacity or experience." This component protects both integrity and credibility.

Data Protection Rules

Specify what information can and cannot be entered into AI tools. A simple three-tier framework works well: Green (safe to enter) — publicly available information, general grant writing questions, published data. Yellow (enter with caution) — draft narrative text without identifying details, general budget categories. Red (never enter) — beneficiary names or identifying information, specific funder relationship notes, proprietary budget details, board member information, passwords or access credentials.
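If your team wants to make the tiers harder to get wrong, they can be encoded as a simple lookup that fails closed. The sketch below is a minimal, hypothetical Python example: the category names are illustrative placeholders drawn from the tiers above, not an exhaustive classifier, and a shared spreadsheet works just as well.

```python
from enum import Enum

class Tier(Enum):
    GREEN = "safe to enter"
    YELLOW = "enter with caution; remove identifying details first"
    RED = "never enter into an AI tool"

# Illustrative categories drawn from the tiers above; a real policy
# would list categories specific to your organization.
DATA_TIERS = {
    "published data": Tier.GREEN,
    "general grant writing question": Tier.GREEN,
    "draft narrative (de-identified)": Tier.YELLOW,
    "general budget categories": Tier.YELLOW,
    "beneficiary names or identifying information": Tier.RED,
    "funder relationship notes": Tier.RED,
    "proprietary budget details": Tier.RED,
    "board member information": Tier.RED,
    "passwords or access credentials": Tier.RED,
}

def check_before_pasting(category: str) -> str:
    """Look up a data category; unknown categories default to RED
    so the check fails closed rather than open."""
    tier = DATA_TIERS.get(category, Tier.RED)
    return f"{category}: {tier.name} ({tier.value})"

print(check_before_pasting("general budget categories"))
# general budget categories: YELLOW (enter with caution; remove identifying details first)
print(check_before_pasting("board meeting minutes"))
# board meeting minutes: RED (never enter into an AI tool)
```

Defaulting unknown categories to Red mirrors the safest reading of the policy: when staff are unsure, they escalate rather than paste.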

Quality Assurance Requirements

Every piece of AI-generated content used in a proposal must go through a defined review process. At minimum: a human must verify all facts, statistics, and citations (AI tools sometimes fabricate references); a subject matter expert must confirm technical accuracy; the final proposal must be reviewed for voice consistency and organizational authenticity; and someone other than the drafter should read AI-assisted sections for quality.
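As a sketch of how these review gates might be tracked, here is a hypothetical Python checklist mirroring the four minimum requirements above; the gate names are illustrative, and any single unmet gate blocks submission.

```python
# Hypothetical pre-submission checklist mirroring the four minimum
# review gates above. Gate names and statuses are illustrative.
REVIEW_GATES = {
    "facts, statistics, and citations verified by a human": True,
    "technical accuracy confirmed by a subject matter expert": True,
    "voice consistency and organizational authenticity reviewed": True,
    "AI-assisted sections read by someone other than the drafter": False,
}

def ready_to_submit(gates: dict) -> bool:
    """Return True only when every review gate has been completed,
    printing each unmet gate so the team knows what remains."""
    unmet = [name for name, done in gates.items() if not done]
    for name in unmet:
        print(f"Incomplete: {name}")
    return not unmet

print(ready_to_submit(REVIEW_GATES))
# Incomplete: AI-assisted sections read by someone other than the drafter
# False
```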

Disclosure Standards

Define when and how your organization discloses AI involvement to funders. At minimum: always disclose when a funder has an explicit AI policy, be prepared to answer honestly if asked, and document AI involvement in internal records for every proposal. Provide staff with approved disclosure language so they don't have to improvise.

Documentation Requirements

Maintain an audit trail of AI use in every proposal. This doesn't need to be onerous — a simple log noting which sections involved AI assistance, what tools were used, and who reviewed and approved the final content. This documentation protects the organization if a funder asks about AI use and enables consistent practice tracking.

Governance and Review

Designate who owns the policy, how frequently it's reviewed (at least annually given the pace of change), and how staff can request exceptions or propose updates. Include an escalation path for situations the policy doesn't cover — because new AI capabilities and funder requirements will create scenarios nobody anticipated.

Policy Template You Can Customize

AI Use in Grant Writing — Policy Template

Purpose
This policy establishes guidelines for the responsible use of artificial intelligence tools in [Organization Name]'s grant writing and fundraising activities. It ensures that AI use enhances our work while maintaining integrity, protecting sensitive data, and complying with funder requirements.
Scope
This policy applies to all staff, contractors, and volunteers who participate in grant writing, proposal development, funder research, or grant reporting activities using any AI-powered tool, including but not limited to ChatGPT, Claude, Gemini, Copilot, Grammarly AI, and specialized grant writing platforms.
Permitted Uses
AI tools may be used for: brainstorming and outlining narrative sections; researching funder backgrounds and priorities; improving grammar, readability, and structure of human-written text; generating initial drafts that will be substantially revised; data analysis for needs statements; and formatting and document preparation. All AI-assisted content must be reviewed, verified, and approved by a staff member before inclusion in any proposal.
Prohibited Uses
AI tools must not be used to: fabricate statistics, citations, research findings, or organizational data; generate final proposal text submitted without substantial human revision; misrepresent organizational capacity, experience, or endorsements; circumvent funder restrictions on AI use; or process any Red-tier data as defined in the Data Protection section.
Data Protection
GREEN (may enter): publicly available information, general questions, published research. YELLOW (enter with caution, remove identifying details): draft narrative text, general budget frameworks, anonymized program descriptions. RED (never enter): beneficiary PII, specific funder relationship notes, proprietary financial details, board member information, passwords or credentials, embargoed or confidential funder communications.
Disclosure
Staff must disclose AI involvement when: the funder has an explicit AI policy, the application includes a question about AI use, or staff are directly asked. Approved disclosure language: "[Organization Name] uses AI tools to support research, drafting, and editing processes. All proposal content reflects our original ideas, organizational knowledge, and programmatic expertise, and has been reviewed and validated by our staff."
Review
This policy is owned by [Title] and will be reviewed [quarterly/semi-annually/annually]. Staff may propose updates or request exceptions by contacting [designated person]. Effective date: [Date]. Next review: [Date].

Training Your Team on Compliant AI Use

A policy without training is just a document nobody reads. Effective training can be delivered in about two hours and covers four areas.

First, walk the team through the policy itself — not by reading it aloud, but by applying it to real scenarios. "You're drafting a needs statement and want to use ChatGPT. Walk me through the steps." This surfaces gaps in understanding and makes the policy concrete.

Second, demonstrate the data protection tiers with real examples from your work. Show the difference between pasting a mock confidential funder letter into ChatGPT and asking a general question about grant writing best practices. Make the risk tangible.

Third, practice disclosure. Role-play a conversation where a program officer asks whether AI was involved in your proposal. Staff need to be comfortable with the approved language and confident that disclosure is a strength, not a weakness.

Fourth, show the documentation system. If logging AI use feels like extra work, nobody will do it. Make it as frictionless as possible — a single checkbox or dropdown in your proposal tracking system, not a multi-field form.

Audit Trail: Documenting AI Involvement

Documentation serves two purposes: it protects the organization if a funder asks about AI use, and it builds institutional knowledge about how AI contributes to your grants work over time.

Keep it simple. For each proposal, record: which AI tools were used, which sections involved AI assistance (research, drafting, editing, formatting), who reviewed and approved the AI-assisted content, and whether the funder was notified of AI involvement. A single row in a spreadsheet or a note field in your grants management system suffices. The goal is a reliable record, not a research study.
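To show how lightweight this can be, here is a minimal, hypothetical Python sketch that appends one record per proposal to a CSV file. The field names mirror the list above; a note field in your grants management system serves the same purpose.

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class AIUseRecord:
    """One audit-trail row per proposal; fields mirror the list above."""
    proposal: str
    tools_used: str         # e.g. "ChatGPT, Grammarly AI"
    sections_assisted: str  # e.g. "funder research, needs statement first draft"
    reviewed_by: str        # who reviewed and approved the AI-assisted content
    funder_notified: bool   # whether the funder was told of AI involvement

def append_to_log(record: AIUseRecord, path: str = "ai_use_log.csv") -> None:
    """Append one record to a CSV audit log, writing a header row
    only when the file is new or empty."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(record)))
        if f.tell() == 0:
            writer.writeheader()
        writer.writerow(asdict(record))

append_to_log(AIUseRecord(
    proposal="FY26 Youth Literacy Program",
    tools_used="ChatGPT",
    sections_assisted="funder research, needs statement first draft",
    reviewed_by="Grants Manager",
    funder_notified=False,
))
```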

67%

Of foundations have no official AI policy yet — but that number is shrinking rapidly. Organizations that document their AI practices now will be prepared when disclosure requirements inevitably expand.

Updating Your Policy as Regulations Evolve

AI governance in philanthropy is evolving monthly. Federal agencies are tightening requirements. State legislatures are introducing AI disclosure laws. Foundation associations are developing sector-wide guidelines. Your policy needs a mechanism for staying current.

Designate one person to monitor AI policy developments in the grants sector — a 30-minute monthly scan of relevant publications and funder announcements. Schedule formal policy reviews at least annually, or more frequently if major regulatory changes occur. Build flexibility into the policy by using principles-based language for core commitments (integrity, transparency, data protection) and specific-but-updatable language for implementation details (tool lists, tier definitions, disclosure templates).

The organizations that will navigate the AI transition successfully aren't the ones that avoid AI or the ones that adopt it uncritically. They're the ones that use it thoughtfully, govern it clearly, and stay ahead of the curve on funder expectations. Your policy is the foundation for all three.

Navigate AI Responsibly, Together

grants.club helps your team stay current on AI best practices, funder policies, and responsible technology use in grants.
