Two-thirds of foundations have no official policy on AI in grant proposals. But program officers — the people who actually read your applications — already have strong opinions. They're developing detection instincts. They're talking to each other. And they're forming judgments that will eventually become formal policies. Understanding what funders think about AI right now gives you a strategic advantage that won't last forever.
This article compiles funder perspectives across federal agencies, private foundations, and corporate giving programs. It covers not just official policies but also the informal conversations, emerging concerns, and evolving attitudes that will shape the next generation of funder requirements.
NIH's Hardline Stance
The National Institutes of Health has staked out the most restrictive position among major federal funders, and its stance is influencing policy discussions across the philanthropic sector. NIH's core position is unambiguous: AI-generated content is not considered to represent original scientific ideas.
In practice, this means that research proposals submitted to NIH must reflect the genuine intellectual contribution of the named investigators. AI tools may be used for editing, literature review, or data analysis, but the scientific hypotheses, research design, and methodological choices must originate from human researchers. NIH has also implemented a six-application cap per principal investigator, motivated partly by concerns that AI could enable mass-production of proposals — flooding review panels with volume at the expense of quality.
For nonprofit research organizations, the NIH stance has practical implications. Document your research team's intellectual process. Keep records showing how hypotheses developed through genuine inquiry. If you use AI for literature review or data synthesis, be prepared to demonstrate that the scientific thinking is yours.
NSF's Disclosure-First Approach
The National Science Foundation has taken a more nuanced position than NIH, focusing on disclosure rather than prohibition. NSF asks applicants to disclose the use of generative AI tools in proposal preparation, treating transparency as the primary obligation while leaving room for legitimate AI-assisted work.
NSF's approach reflects a pragmatic recognition that AI tools are already embedded in scientific workflows — from literature search tools to data analysis platforms to writing assistants. Attempting to ban AI entirely would be impractical and potentially counterproductive. Instead, NSF's framework asks researchers to be transparent about how they used AI and to take responsibility for the accuracy and integrity of all content in their proposals.
This disclosure-first model is likely to become the template for other federal agencies. It strikes a balance between acknowledging AI's usefulness and maintaining the principle that proposals represent authentic human expertise. For applicants, the practical takeaway is straightforward: use AI thoughtfully, and be honest about it.
Foundation Perspectives: The Full Spectrum
Private foundations span a wider spectrum of AI attitudes than federal agencies — from enthusiastic early adopters to deeply skeptical traditionalists. Understanding where specific funders fall on this spectrum is part of the relationship intelligence that separates strategic applicants from transactional ones.
AI as Innovation
A minority of forward-leaning foundations actively encourage responsible AI use, viewing it as evidence of organizational innovation and capacity. These funders may even offer grants specifically for AI adoption.
Wait and See
The majority (67%) of foundations haven't formalized a position. Program officers are individually developing opinions and detection skills while leadership deliberates on formal policy.
Human-Only
A growing number of foundations are signaling that AI-generated content is unwelcome — viewing it as inconsistent with the personal relationships and authentic voice they value in grantee partnerships.
The critical insight is that even foundations without formal policies are forming opinions through practice. Program officers who read hundreds of proposals are developing intuitive detection capabilities and personal preferences. By the time formal policies emerge, the informal culture around AI will already be established. Organizations that adopted responsible AI practices early will be well-positioned regardless of where policies land.
What Program Officers Detect and Care About
Conversations with program officers across multiple foundation types reveal a consistent pattern: they care far less about whether you used AI than about whether your proposal reflects genuine organizational expertise and authentic community connection.
What They Notice
Program officers report noticing a characteristic quality in AI-generated proposals: technically competent language that lacks specificity. The needs statement cites national statistics but doesn't describe the specific community. The program design follows textbook frameworks but doesn't reflect the messy, adaptive reality of on-the-ground work. The evaluation plan is methodologically sound but disconnected from how the organization actually learns and improves.
"I can't always prove a proposal was AI-generated. But I can tell when a proposal doesn't know me — when it reads like it could have been sent to any funder. The best proposals make me feel like the applicant understands what I'm trying to accomplish. AI doesn't do that."
What They Care About
The consistent message from program officers is that they value authenticity, specificity, and relationship context — exactly the qualities that AI struggles to produce. They want to see evidence of genuine community engagement, not polished demographic summaries. They want to understand how your organization has adapted and learned from experience, not how your logic model follows best practice templates. They want to feel the human expertise behind the proposal.
This doesn't mean AI can't be part of the process. It means AI should enhance human expertise rather than replace it. A proposal where AI helped synthesize research data but a human researcher interpreted its implications will read very differently from one where AI generated the entire analysis.
The Uncanny Valley of AI Proposals
Several program officers described an "uncanny valley" effect in AI-generated proposals — an uncomfortable feeling that something is off without being able to identify exactly what. The writing is smooth. The structure is logical. The arguments are reasonable. But the proposal feels hollow, as if a competent stranger wrote about an organization they'd never visited.
The uncanny valley is most apparent in sections that require authentic voice: community narratives, organizational history, stakeholder testimonials, and the kind of programmatic detail that can only come from experience. AI can generate plausible versions of these sections, but plausibility is not the same as authenticity — and experienced reviewers feel the difference.
The organizations navigating this most successfully are those that use AI for analytical and structural tasks while protecting the sections that require authentic human contribution. They let AI help with research synthesis, compliance checking, and editorial polish, while ensuring that every narrative section, community description, and strategic argument comes from people who know the work intimately.
Being Transparent Without Undermining Your Application
The fear that AI disclosure will hurt your application is understandable but increasingly unfounded. Funders who value transparency (and that's most of them) respect organizations that are honest about their tools and processes. What undermines applications isn't AI use — it's dishonesty about AI use, or AI use that produces generic content.
Disclosure Done Well
When disclosing AI use, frame it as part of your organizational capacity, not as a confession. A brief statement noting that your team uses AI tools for research, data analysis, and editorial review, while emphasizing that all programmatic content reflects your organization's direct experience and expertise, signals competence and integrity at once. For example, something like: "Our team used AI tools to assist with research synthesis and editorial review; all program design, community descriptions, and strategic content reflect our staff's direct experience."
Position your AI use as thoughtful rather than expedient. If you can explain why you used AI for specific tasks and how human judgment guided every decision, you've demonstrated exactly the kind of responsible innovation that forward-thinking funders respect. You've also differentiated yourself from organizations that use AI carelessly — which is becoming an increasingly relevant competitive advantage.
Where Funder AI Policies Are Heading
Based on current trajectories, several developments seem likely through 2026-2028. Disclosure requirements will become standard across most federal agencies and a growing number of private foundations. Detection capabilities will improve as funders invest in tools and training for identifying AI-generated content. And AI-specific evaluation criteria will emerge, with reviewers explicitly assessing whether proposals reflect genuine expertise or AI-generated templates.
The most significant shift may be cultural rather than policy-based. As AI becomes ubiquitous, the competitive advantage will shift from "using AI" (which everyone will do) to "using AI wisely" — producing proposals that combine AI's analytical power with irreplaceable human expertise, community knowledge, and authentic funder relationships. The organizations that invested in building genuine voice, deep community connections, and honest funder partnerships will find that these assets become more valuable, not less, in an AI-saturated landscape.
Navigate the AI Landscape With Confidence
grants.club provides AI-powered grant discovery and writing tools built specifically for responsible use — with funder intelligence to keep you ahead of evolving policies.
Explore Smart Grant Tools