Federal grant agencies have issued important guidance on AI governance that, while technically voluntary, effectively functions as a compliance requirement for organizations receiving federal funding. Unlike formal regulation, federal guidance operates through grant conditions, reporting requirements, and agency expectations. A nonprofit that ignores federal AI guidance risks losing funding, receiving negative audit findings, or facing grant non-compliance determinations.
The federal government's approach to AI governance emphasizes responsible AI development and deployment, risk management, and transparency. Federal agencies recognize that nonprofits serve as important vehicles for federal mission delivery, making nonprofit AI governance a matter of federal concern. The major guidance documents include OMB M-24-10 (Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence), NSF AI guidance, and agency-specific policies.
For nonprofits, understanding federal AI guidance is essential not only for direct compliance but also for partnering with institutions that receive federal funding. University partners, hospital research collaborators, and state/local government partners increasingly impose AI governance requirements on their nonprofit partners, often based on federal guidance they must follow.
Federal grant agencies increasingly impose AI governance expectations through funding conditions, even without formal regulation. Federal guidance documents create practical compliance obligations for nonprofits receiving federal funding, particularly grants for research, health, or education programs involving AI systems.
OMB M-24-10 implements Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, establishing requirements for federal agencies' use of AI. While the memorandum directly addresses federal agencies, it creates compliance obligations for federal contractors and grant recipients, particularly those implementing AI systems with federal funding.
AI Governance and Risk Management: Organizations receiving federal funding must establish governance structures for AI systems and conduct risk assessments. For nonprofits, this means documenting how they manage AI risks, who is accountable, and what processes exist for identifying and addressing problems.
Responsible AI Practices: The memorandum emphasizes responsible AI principles including fairness, transparency, accountability, and safety. Federal contractors must demonstrate adherence to these principles. For nonprofits, this translates to testing AI systems for bias, ensuring transparency about AI use, maintaining accountability for outcomes, and implementing safeguards against harm.
Impact Assessments: The memorandum requires impact assessments for AI systems that have the potential to meaningfully affect civil rights, civil liberties, or privacy. Nonprofits implementing AI in contexts affecting vulnerable populations must conduct these assessments and document how identified risks are addressed.
Transparency and Disclosure: Federal guidance emphasizes transparency about AI use. Nonprofits must disclose when AI systems are used in decisions affecting individuals, explain how systems work, and provide mechanisms for human review.
Monitoring and Evaluation: Ongoing monitoring of AI system performance is required. Rather than one-time risk assessment, federal guidance expects continuous monitoring, evaluation, and adjustment. Nonprofits must establish systems for tracking AI performance over time and making adjustments when issues are identified.
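The continuous monitoring the guidance expects can be as simple as a dated log of performance metrics checked against governance-set thresholds. A minimal sketch in Python (the metric names, dates, and thresholds below are illustrative assumptions, not federal requirements; the sketch assumes higher metric values are better):

```python
# Illustrative sketch: a minimal performance log for ongoing AI monitoring.
# Metric names, dates, and thresholds are placeholder assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class MonitoringRecord:
    when: date
    metric: str        # e.g., "accuracy" (assumes higher is better)
    value: float
    threshold: float   # alert boundary set by the governance committee

    @property
    def needs_review(self) -> bool:
        # Flag the record when performance drops below the threshold
        return self.value < self.threshold

@dataclass
class MonitoringLog:
    system_name: str
    records: list[MonitoringRecord] = field(default_factory=list)

    def add(self, record: MonitoringRecord) -> None:
        self.records.append(record)

    def open_issues(self) -> list[MonitoringRecord]:
        """Records that fell below their threshold and need adjustment."""
        return [r for r in self.records if r.needs_review]

# Hypothetical quarterly checks of a funded AI system
log = MonitoringLog("intake-triage-model")
log.add(MonitoringRecord(date(2024, 1, 15), "accuracy", 0.91, 0.85))
log.add(MonitoringRecord(date(2024, 4, 15), "accuracy", 0.82, 0.85))
print(len(log.open_issues()))  # → 1 (the April check fell below threshold)
```

The point is not the tooling but the audit trail: each check is dated, tied to a named system, and produces a reviewable record when performance degrades.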
NSF has become a leader in establishing expectations for responsible AI in federally funded research. NSF requires disclosure of AI use in funded projects and expects researchers and their institutions to address responsible AI practices, including fairness, explainability, and bias mitigation.
For nonprofits receiving NSF funding, particularly in research, education, or STEM programs involving AI, guidance specifies that applicants must discuss how they will address responsible AI principles. This includes plans for bias testing, fairness evaluation, transparency, and governance. NSF reviewers increasingly evaluate responsible AI practices as part of proposal review, meaning nonprofits must demonstrate AI governance commitment to be competitive.
NSF also requires disclosure of AI tools used in funded projects, creating a formal requirement for transparency about AI applications. Nonprofits must document AI systems used, describe their capabilities and limitations, and explain how risks are managed.
NIH has issued guidance on the use of artificial intelligence, including large language models, in NIH-funded research. The guidance requires transparency about AI use, responsible practices to minimize bias and errors, and safeguards to protect research integrity.
For nonprofits involved in NIH-funded biomedical research, health services research, or public health programs, the guidance creates practical obligations. NIH expectations include: documenting all AI tools used in research, explaining how their use contributes to research validity, describing steps taken to verify accuracy and minimize bias, and disclosing limitations of AI-generated content.
NIH guidance particularly emphasizes transparency in publication. Research papers describing studies involving AI must disclose AI use, acknowledge AI system limitations, and explain how findings remain scientifically valid despite AI involvement.
The U.S. Department of Agriculture has established requirements for AI governance in programs it funds, particularly in agricultural research, rural development, and food security. USDA expects grant recipients to demonstrate responsible AI practices, including transparency, fairness, and compliance with civil rights laws.
For nonprofits in the food security, agricultural, or rural development space receiving USDA funding, compliance includes demonstrating that AI systems don't discriminate against particular agricultural producers or communities, ensuring transparency about how AI influences funding or program decisions, and maintaining accountability for AI system outcomes.
OMB Circular A-110 established administrative requirements for federal grants, including requirements for financial management, project management, and compliance; its provisions have since been consolidated into the Uniform Guidance at 2 CFR Part 200. While not specifically about AI, these requirements obligate grant recipients to maintain documentation of how federal funds are used and to demonstrate sound management practices.
For nonprofits using federal funding to support AI system development or implementation, A-110 compliance means maintaining documentation of AI system development, testing, and governance. Auditors evaluating A-110 compliance increasingly scrutinize AI governance practices, asking whether organizations have adequate processes for managing AI risks and ensuring responsible use.
This creates practical compliance requirements: nonprofits must document AI governance decisions, maintain records of risk assessments and testing, track compliance activities, and be prepared to demonstrate to federal auditors that AI governance meets federal expectations.
Beyond the major federal guidance documents, individual funding agencies often impose specific AI requirements in grant solicitations. These requirements may exceed general federal guidance and establish compliance obligations specific to particular funding streams.
Nonprofits should review funding announcements carefully, identifying any AI-related requirements. Common requirements include: describing how AI will be used in the funded project, addressing responsible AI principles, identifying staff responsible for AI governance, establishing timelines for impact assessment or fairness testing, and committing to transparency and monitoring.
The most competitive proposals increasingly include detailed responsible AI plans, demonstrating organizational commitment to governance. Nonprofits that integrate AI governance into every federal grant application gain competitive advantages, showing funders they take responsible AI seriously.
Nonprofits collaborating with universities, hospitals, or government agencies increasingly face AI compliance requirements imposed by their partners. Universities typically have Institutional Review Boards (IRBs) or AI governance committees that review research involving AI and impose compliance conditions. Hospitals have compliance offices that audit AI use. State and local governments increasingly impose AI governance requirements on nonprofit partners.
For nonprofits in research or service partnerships, understanding partner institutions' AI requirements is essential. Partners will increasingly ask: Does your organization have AI governance? Have you conducted fairness testing? Can you demonstrate responsible practices? Nonprofits prepared with documented governance practices can answer these questions confidently and maintain critical partnerships.
Review your organization's recent federal grants. Identify which agencies provided funding and what AI governance requirements those agencies impose. Research the funder's AI guidance documents and requirements. For each grant involving AI systems, create a compliance checklist documenting: (1) funder AI guidance applicable to the grant; (2) specific requirements in the grant solicitation; (3) current compliance status; (4) gaps and timeline for remediation. Build this compliance documentation into your grant administration practices.
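The four-part checklist described above can be captured in a simple data structure that grant administrators update over the life of each award. A sketch in Python (the funder, requirements, statuses, and dates are made-up placeholders):

```python
# Illustrative sketch of the four-part grant AI compliance checklist.
# Funder names, requirements, statuses, and dates are placeholder assumptions.
from dataclasses import dataclass

@dataclass
class GrantAIChecklist:
    funder_guidance: list[str]            # (1) funder AI guidance applicable to the grant
    solicitation_requirements: list[str]  # (2) specific requirements in the solicitation
    compliance_status: dict[str, str]     # (3) requirement -> "met" / "in progress" / "gap"
    remediation: dict[str, str]           # (4) gap -> target date for remediation

    def gaps(self) -> list[str]:
        """Requirements still marked as gaps, for the remediation timeline."""
        return [req for req, status in self.compliance_status.items()
                if status == "gap"]

checklist = GrantAIChecklist(
    funder_guidance=["NIH guidance on AI in funded research"],
    solicitation_requirements=["Disclose AI tools used",
                               "Describe bias testing plan"],
    compliance_status={"Disclose AI tools used": "met",
                       "Describe bias testing plan": "gap"},
    remediation={"Describe bias testing plan": "2025-09-30"},
)
print(checklist.gaps())  # → ['Describe bias testing plan']
```

Keeping the checklist in a structured form makes it easy to report current compliance status to auditors and to track remediation deadlines across multiple grants.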
A nonprofit biomedical research organization received a three-year, $2.5 million NIH research grant to develop AI tools for early disease detection. The organization had limited prior experience with formal AI governance and didn't initially recognize the federal compliance obligations embedded in the grant award.
The grant included expectations for responsible AI development, documented by NIH guidance. During the first-year progress report, NIH asked the organization to describe its responsible AI practices, including bias testing and fairness evaluation procedures. The organization realized it had conducted little formal evaluation of its AI system's performance across different demographic groups.
To address the compliance gap, the organization: (1) engaged an external AI auditor to conduct fairness testing across ethnic and age groups; (2) implemented quarterly fairness monitoring procedures; (3) established a governance committee including clinicians and ethicists to oversee AI system use; (4) documented all AI governance activities; and (5) incorporated bias testing into their research protocols.
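The fairness testing in step (1) can take many forms; one common heuristic is to compare each demographic group's positive-prediction rate against the best-performing group's rate, flagging ratios below 0.8 (the "four-fifths rule"). A sketch in Python (the group labels, outcomes, and the 0.8 threshold are illustrative assumptions, not NIH mandates):

```python
# Illustrative sketch of group-level fairness testing.
# Group labels, outcomes, and the 0.8 threshold are made-up assumptions;
# the four-fifths rule is one common heuristic, not a federal mandate.

def positive_rate(outcomes: list[int]) -> float:
    """Share of cases the model flagged positive (1) within a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratios(groups: dict[str, list[int]]) -> dict[str, float]:
    """Ratio of each group's positive rate to the highest group's rate.
    Ratios below 0.8 are a conventional signal for closer review."""
    rates = {g: positive_rate(o) for g, o in groups.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening outcomes (1 = flagged for follow-up)
groups = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],  # rate 0.8
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # rate 0.3
}
ratios = disparate_impact_ratios(groups)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # → ['group_b']
```

A flagged ratio does not by itself establish unlawful discrimination, but it gives the governance committee a concrete, repeatable trigger for the quarterly monitoring described in step (2).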
The compliance effort required additional resources but aligned with the organization's mission of developing trustworthy AI tools. The organization discovered that their initial AI development, while technically sophisticated, had not adequately considered fairness and equity. Addressing these issues strengthened their research and made their tools more reliable across populations.
The organization now uses federal AI governance requirements as a starting point for all projects, recognizing that responsible AI practices strengthen rather than constrain research quality. They've communicated this perspective to funders and partners, positioning themselves as leaders in responsible AI research.
Federal compliance requires maintaining documentation demonstrating governance activities. Nonprofits should maintain records including: AI risk and impact assessments, bias and fairness testing results, governance committee decisions, ongoing monitoring and evaluation reports, and disclosures made to funders about AI use.
This documentation serves multiple purposes: it demonstrates compliance to funders and auditors, provides evidence that the organization takes responsible AI seriously, supports institutional learning and continuous improvement, and protects the organization if issues arise.
Federal audits of grants increasingly examine AI governance practices. Nonprofits should maintain audit trails demonstrating how AI governance decisions were made and monitored. For example, if an organization discontinued use of an AI system, documentation should explain why, what issues prompted the decision, and what alternative approaches were implemented.
Transparency about AI governance strengthens audit outcomes. Organizations that proactively document governance activities, acknowledge limitations and challenges, and demonstrate continuous improvement through monitoring typically receive favorable audit findings. Organizations that appear to have ignored or minimized AI governance face scrutiny and potential non-compliance findings.
Nonprofits sometimes assume that federal AI guidance is purely advisory and doesn't require immediate compliance action. In reality, federal grant conditions create binding obligations. Failure to comply with federal AI guidance creates audit risk, grant non-compliance exposure, and potential requirement to return funding. Nonprofits should treat federal AI guidance as creating practical, enforceable obligations.
Nonprofits should integrate federal AI compliance into their standard grant administration and compliance processes. Best practices include: reviewing each funding announcement for AI-related requirements, addressing responsible AI principles in proposals, maintaining a compliance checklist for each grant involving AI systems, documenting governance activities as they occur, and monitoring AI system performance throughout the grant period.
Federal grant agencies have established clear expectations for responsible AI governance through guidance documents and funding conditions. While technically voluntary, these expectations function as practical compliance requirements for nonprofits receiving federal funding. By understanding federal AI guidance, incorporating it into organizational policies, and maintaining documentation of governance activities, nonprofits can demonstrate compliance to funders while strengthening their own AI governance practices.