Funder Disclosure Protocols

When, How, What, and Why

35-minute read

Introduction

Transparency with funders about your organization's AI use is becoming a standard expectation and often a requirement. Foundations, government agencies, and corporate grantmakers increasingly ask questions about AI governance, request disclosure of AI use in grant-funded work, and require organizations to conduct bias audits and impact assessments of AI systems. Organizations that fail to disclose AI use when required face consequences ranging from grant denials to contract terminations to reputational damage.

Yet disclosure practices vary widely. Some organizations disclose all AI use proactively. Others wait until asked. Some funders require disclosure; others expect it but don't mandate it. This lesson provides frameworks for determining when disclosure is required, what to disclose, how to communicate effectively about AI use, and how to use disclosure as an opportunity to demonstrate organizational responsibility.

The Disclosure Imperative

Disclosing AI use isn't primarily about compliance—though compliance matters. Disclosure is fundamentally about transparency and trust. When nonprofits use AI without disclosing it, they risk stakeholder backlash if discovery occurs later. Communities served by nonprofits increasingly expect to know when AI influences decisions affecting them. Funders increasingly view AI governance transparency as an indicator of organizational maturity and accountability.

Progressive organizations view disclosure as an opportunity to demonstrate thoughtfulness about AI use rather than as a burden. Disclosure shows that you've thought carefully about how AI serves your mission, that you've considered risks, that you've implemented safeguards, and that you respect stakeholder rights to understand how AI may affect them.

When Disclosure Is Required

Disclosure is clearly required in several circumstances: when a grant application asks directly about AI, technology, or decision-making processes; when the grant explicitly funds work in which you use AI; and when a funder's policies or applicable regulations mandate disclosure, bias audits, or impact assessments.

When Disclosure Is Expected (Though Not Required)

Even when not legally required or explicitly requested, many situations warrant voluntary disclosure: when failing to disclose could appear deceptive if the funder discovered the AI use later, when AI use raises questions about organizational values or mission alignment, and when AI influences decisions affecting the communities you serve.

A Disclosure Decision Tree

When uncertain whether to disclose, use this decision tree:

Disclosure Decision Framework

Question 1: Does the grant application ask about AI, technology, or decision-making processes?
If YES → Disclose all relevant AI use.
If NO → Proceed to Question 2.

Question 2: Does the grant explicitly fund work where you use AI?
If YES → Disclose.
If NO → Proceed to Question 3.

Question 3: Could failing to disclose AI use appear deceptive if the funder discovered it later?
If YES → Disclose (better to be proactive).
If NO → You may proceed without disclosure, though voluntary disclosure remains advisable.

Question 4: Does the AI use raise significant questions about organizational values or mission alignment?
If YES → Disclose and explain how you've ensured alignment.
If NO → Disclosure is less critical, though still advisable.
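The four questions above form a simple sequential check, which can be sketched as a small helper function. This is an illustrative sketch only; the function name, parameters, and recommendation strings are our own, not part of any standard framework.

```python
def should_disclose(asks_about_ai: bool,
                    funds_ai_work: bool,
                    could_appear_deceptive: bool,
                    raises_values_questions: bool) -> str:
    """Walk the four-question disclosure decision tree in order.

    Each question is checked in sequence; the first YES determines
    the recommendation, mirroring the framework above.
    """
    if asks_about_ai:
        return "disclose: the application asks about AI, technology, or decision-making"
    if funds_ai_work:
        return "disclose: the grant explicitly funds AI-assisted work"
    if could_appear_deceptive:
        return "disclose: proactive disclosure avoids appearing deceptive later"
    if raises_values_questions:
        return "disclose and explain how you've ensured mission alignment"
    return "disclosure optional but still advisable"
```

Note that every branch except the last recommends disclosure; the framework is deliberately weighted toward transparency.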

What to Disclose

Disclosure doesn't require detailed technical explanation. Funders care about understanding what you're doing with AI, why you've chosen to use it, what safeguards you've implemented, and what you're monitoring. Effective disclosure addresses each of these four areas in plain language.

Funder-Specific Disclosure Expectations

Different funder types often have different expectations regarding AI disclosure. Understanding these nuances helps you tailor disclosures appropriately.

Foundation Funders

Major foundations increasingly ask about AI use during grant applications. Progressive foundations commonly ask about your AI governance structure, bias testing practices, data protection measures, and transparency with affected communities.

If foundations ask about these topics, answer comprehensively. If they don't, voluntary disclosure of rigorous AI governance can strengthen applications by demonstrating organizational sophistication and stewardship.

Government Funders

Federal government funding increasingly includes AI governance requirements. Many government grants now require disclosure of AI use in funded work, mandate bias audits or impact assessments of AI systems, and impose data protection obligations on tools that handle sensitive information.

When applying for government funding, carefully review AI-related requirements in grant guidance. Many nonprofits have missed AI compliance requirements because they didn't read the fine print. Take AI requirements as seriously as other compliance obligations.

Corporate Funders

Corporate funders' expectations vary. Tech companies often welcome discussions of AI use. Other corporate funders may not have formal AI policies but increasingly care about responsible practice. When uncertain about a corporate funder's expectations, you can ask directly: "Are there any AI governance expectations you have for grant-funded work?"

Language Templates for Disclosure

Here are language templates you can adapt for various disclosure contexts:

General AI Use Disclosure (for grant applications)

Our organization uses artificial intelligence tools in the following ways: [list specific applications]. These applications include [specify whether AI generates content, informs decisions, analyzes data, etc.]. We implement the following safeguards to ensure responsible AI use: [describe governance, bias testing, data protection, and transparency measures]. All staff using AI tools receive training on organizational AI policies. We monitor AI system performance and regularly assess whether systems are delivering expected benefits while avoiding unintended harms. Communities affected by AI-informed decisions are informed of this use.

Bias Testing and Mitigation Disclosure

Where AI systems inform decisions affecting program participants, we have implemented bias testing protocols to identify potential discriminatory outcomes. Our testing examines whether AI systems produce disparate outcomes across protected classes. If bias is detected, we adjust systems, increase human review requirements, or discontinue AI use for that application. Our [date] bias audit found [describe findings]. We are implementing [describe actions] to address identified concerns.

Data Protection Disclosure

When using AI tools with sensitive data (health information, personally identifiable information, etc.), we use only enterprise-grade tools with appropriate security certifications and data protection agreements. Staff receive training on data protection requirements. We limit access to sensitive data to authorized personnel only. All AI use with sensitive data is logged and monitored for unauthorized access.

Transparency to Communities Disclosure

We are transparent with program participants and community members about our use of AI. [Describe how you communicate about AI use: e.g., "We include information about AI use in program intake materials." "We inform participants when AI assists in decision-making." "We provide information on our website about AI governance practices."]

Responding to Funder AI Questions

When funders ask specific questions about AI, resist the temptation to minimize or hedge. If you use AI, acknowledge it. If you don't have a practice the funder asks about, explain what you do instead or indicate it's an area you plan to develop. Funders appreciate honesty and thoughtfulness far more than defensiveness.

Example Responses to Funder Questions

Q: Do you use AI in grant-funded work?
A: Yes. Specifically, we use [list applications]. We've implemented [describe governance] to ensure responsible use.

Q: Have you tested your AI systems for bias?
A: Yes. We conducted bias testing on [specific systems] and found [describe findings]. We've implemented [corrective actions].

Q: What's your AI governance structure?
A: We have an organizational AI policy that [describe key elements]. Our [title] is responsible for oversight, with board-level review quarterly. We maintain an inventory of approved AI tools and regularly audit compliance.

Documentation for Disclosure

To support disclosure, maintain comprehensive documentation, including your organizational AI policy, your inventory of approved AI tools, bias audit reports, staff training records, and logs of AI use involving sensitive data.

This documentation demonstrates that disclosure is backed by real governance practices. It also helps you prepare responses quickly when funders ask questions.
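One way to keep this documentation ready for quick funder responses is a structured inventory of approved tools. The sketch below is illustrative only; the record fields and function names are our own suggestions, not a required schema.

```python
from dataclasses import dataclass


@dataclass
class AIToolRecord:
    """One entry in an organization's approved-AI-tool inventory.

    Field names are illustrative, not a mandated format.
    """
    tool: str
    uses: list            # e.g., "drafting donor reports"
    safeguards: list      # e.g., "human review", "data protection agreement"
    last_bias_audit: str = "not yet audited"


def disclosure_summary(records: list) -> str:
    """Render the inventory as plain language suitable for a funder question."""
    lines = []
    for r in records:
        lines.append(
            f"{r.tool}: used for {', '.join(r.uses)}; "
            f"safeguards: {', '.join(r.safeguards)}; "
            f"last bias audit: {r.last_bias_audit}"
        )
    return "\n".join(lines)
```

Keeping the inventory current means a disclosure answer can be assembled in minutes rather than reconstructed from memory under deadline pressure.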

Next: Protecting Sensitive Data with AI

Learn how to safely use AI tools when handling health information, financial data, and other protected information.
