Why AI Governance Matters

The Risk and Liability Landscape

30-minute read

Introduction

Artificial intelligence has become ubiquitous in nonprofit operations. From AI-powered grant writing assistants to chatbots handling donor inquiries, from data analysis tools predicting program outcomes to algorithms optimizing resource allocation, AI is woven into the fabric of how nonprofits operate. Yet despite this widespread adoption, a critical gap persists: while 92% of nonprofits now use AI in some capacity, fewer than 10% have formal governance policies in place to manage its use.

This disconnect creates substantial risks for organizations, their stakeholders, and the communities they serve. Without clear governance structures, nonprofits expose themselves to liability issues, reputational damage, compliance violations, and governance failures that can undermine mission effectiveness and funder trust.


The Governance Gap: Why It Matters

The governance gap represents one of the most pressing challenges facing nonprofit leaders today. This gap is not simply a matter of policy paperwork—it reflects a fundamental misalignment between the pace of AI adoption and the organizational infrastructure needed to manage it responsibly.

What Creates the Gap?

Several factors drive this gap. First, the speed of AI innovation far outpaces organizational decision-making processes. By the time a nonprofit's board discusses AI governance, new tools have already been integrated into programs. Second, many nonprofit leaders view AI governance as a technical issue best left to IT departments, failing to recognize it as a strategic governance matter requiring board-level oversight. Third, few organizations have dedicated resources or expertise to develop comprehensive AI policies from scratch.

The governance gap reflects a more fundamental issue: many nonprofit leaders are uncertain about what AI governance should include, what risks warrant attention, and where their fiduciary responsibilities begin and end. Without clarity, organizations default to allowing individual departments to make independent AI adoption decisions, creating inconsistent practices, unmanaged risks, and potential compliance failures.

Key Insight

AI governance is not about restricting AI use or slowing innovation. Rather, it's about creating intentional, accountable frameworks that enable organizations to harness AI's benefits while managing risks systematically and transparently.

Liability and Legal Risk

Without AI governance, nonprofits face multiple layers of legal liability. These risks span data protection, intellectual property, discrimination law, and professional liability.

Data Protection and Privacy Liability

When nonprofits use AI tools to process donor information, client records, or program participant data, they assume legal responsibility for how that data is handled. Many AI tools, particularly free or consumer-grade platforms, transmit data to third-party servers for processing. This can create privacy and compliance violations, especially when the data includes protected health information covered by HIPAA, education records covered by FERPA, or personally identifiable information (PII) belonging to vulnerable populations.

Without clear policies prohibiting the use of certain tools with sensitive data, individual staff members may inadvertently expose the organization to regulatory fines. For example, a nonprofit subject to HIPAA that uses ChatGPT to summarize confidential client cases can incur liability for unauthorized disclosure even if the data is never shared publicly; under HIPAA, the organization is responsible for unauthorized disclosures regardless of the staff member's intent.

Discrimination and Bias Liability

AI systems can embed historical biases into decision-making processes. When nonprofits use AI for hiring, program eligibility determinations, or resource allocation, biased algorithms can produce discriminatory outcomes that violate Title VII of the Civil Rights Act or the Americans with Disabilities Act (ADA). Unlike human decisions, which are often evaluated for intent, algorithmic discrimination can be established through disparate impact alone: evidence that a system produces worse outcomes for a protected group, regardless of anyone's intent.

A nonprofit using AI to screen grant applications could inadvertently eliminate applicants from particular geographic areas or demographic groups if the training data reflects historical biases. Even if the organization's intent is neutral, the disparate impact creates legal liability and potential damages claims.
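The disparate impact concept is quantifiable. As a concrete illustration, the short Python sketch below applies the EEOC's "four-fifths" rule, a widely used screening heuristic: a group whose selection rate falls below 80% of the highest group's rate is flagged for potential adverse impact. The group names and counts here are hypothetical, and a real audit would involve legal counsel and formal statistical testing; this is a minimal sketch, not a compliance tool.

    # A minimal sketch of a disparate-impact screen using the EEOC
    # "four-fifths" rule. All group names and counts are hypothetical.

    def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
        """Map each group to its selection rate: selected / screened."""
        return {g: sel / total for g, (sel, total) in outcomes.items()}

    def four_fifths_flags(rates: dict[str, float]) -> dict[str, bool]:
        """Flag any group whose rate falls below 80% of the top group's rate."""
        best = max(rates.values())
        return {g: (r / best) < 0.8 for g, r in rates.items()}

    # Hypothetical outputs from an AI grant-screening tool:
    # (applications advanced, applications screened) per group.
    outcomes = {
        "urban_applicants": (45, 100),
        "rural_applicants": (22, 100),
    }

    rates = selection_rates(outcomes)
    for group, flagged in four_fifths_flags(rates).items():
        print(f"{group}: selection rate {rates[group]:.2f}, flagged: {flagged}")
    # rural_applicants: 0.22 / 0.45 is about 0.49, well below 0.8,
    # so this tool's outcomes warrant a closer bias review.

A flagged ratio does not by itself prove unlawful discrimination, but it is exactly the kind of signal a governance policy should require staff to investigate and document before an AI system is used for consequential decisions.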

Intellectual Property and Copyright Risks

When nonprofits use generative AI tools to create content—grant proposals, marketing materials, program curricula—they may inadvertently use content that infringes on copyright protections or violates creator rights. Generative AI models train on massive datasets, including copyrighted material, and may reproduce substantial portions of that material in their outputs. Organizations using AI-generated content without understanding these risks may face takedown notices, cease-and-desist letters, or litigation.

Reputational Risk

Reputational damage from AI missteps can be swift and severe. In an era of social media transparency, stories of nonprofits using biased AI systems, mishandling donor data, or deploying inappropriate AI applications spread rapidly and damage stakeholder trust.

Consider a nonprofit that uses AI to generate grant proposals submitted to a foundation. If the foundation discovers that the proposals were AI-generated without disclosure, it may question their authenticity, doubt the organization's integrity, and decline future funding requests. In nonprofit ecosystems where reputation is currency, such damage can cascade across multiple funding relationships.

Real-World Example

A mid-sized education nonprofit used AI to generate fundraising appeals without conducting bias audits. The AI system produced messaging that disproportionately portrayed low-income families as helpless, reinforcing deficit narratives that contradicted the organization's stated values. When supporters discovered the messaging, social media criticism forced the organization to issue a public apology and overhaul its AI governance. The incident damaged relationships with community partners and key donors.

Funder Expectations and Grant Compliance

Foundations, government agencies, and corporate grantmakers increasingly ask nonprofits about their AI governance practices. Major funders including the Ford Foundation, MacArthur Foundation, and Gates Foundation have begun incorporating AI governance questions into grant applications and compliance monitoring.

Some funders require explicit disclosure of AI use in grant-funded work. Others ask whether organizations have policies for responsible AI use. Still others require nonprofits to conduct bias audits of AI systems used in their work. Organizations without AI governance policies struggle to answer these questions, which can result in grant denials, reduced funding amounts, or increased compliance monitoring.

Additionally, government contracts, which are increasingly important for many nonprofits, include specific AI governance requirements. Federal contractors, for example, must comply with Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which requires transparency in AI use and bias mitigation for AI systems in high-impact settings.

Board Accountability and Fiduciary Duty

Nonprofit boards have fiduciary duties to exercise reasonable oversight of organizational operations, including emerging technologies. The duty of care requires that board members stay informed about significant operational risks. The duty of loyalty requires that board decisions prioritize the organization's mission and interests above all else. The duty of obedience requires compliance with relevant laws and regulations.

AI governance implicates all three duties. Without AI governance policies, boards arguably fail in their duty of care by not exercising reasonable oversight of AI-related risks. They may fail in their duty of loyalty by allowing AI adoption decisions that prioritize convenience over mission alignment. And they may fail in their duty of obedience by permitting non-compliant AI use.

In litigation, courts increasingly examine whether boards exercised oversight of significant operational risks. An organization that suffers damages from an AI-related incident, with no record of board-level AI governance discussions, could see breach-of-fiduciary-duty claims brought against individual board members.

Insurance Implications

Insurance underwriters are beginning to question AI governance practices when evaluating nonprofit coverage. Directors and officers (D&O) insurance policies, which protect board members and executives from personal liability, increasingly include exclusions or higher premiums for organizations without documented AI governance.

Cyber liability insurance—which covers data breaches and privacy violations—may exclude coverage for incidents involving unmanaged AI systems or violations of emerging AI standards. General liability policies may exclude coverage for AI-generated content or algorithm-driven discrimination.

As AI governance becomes standard practice, organizations without formal policies may find coverage unavailable or prohibitively expensive, effectively creating a financial penalty for governance gaps.

Action Item

Review your organization's current insurance policies with your broker. Ask specifically whether the policies cover AI-related incidents, whether coverage excludes AI-generated content, and whether the carrier has requirements for AI governance. Document responses for board review.

The Governance Imperative

AI governance is not optional for nonprofit leaders—it is an essential leadership responsibility. Organizations that establish clear, accountable AI governance frameworks demonstrate responsible stewardship, protect their missions, earn stakeholder trust, and position themselves to realize AI's benefits while managing its risks.

This course equips you with the knowledge, frameworks, and tools to build AI governance practices appropriate for your organization's context, culture, and risk profile. The following lessons will guide you through developing comprehensive AI policies, establishing board-level oversight structures, creating risk assessment frameworks, and implementing transparent disclosure protocols.

Ready to Build Your AI Governance Framework?

Continue to the next lesson to explore the core components of organizational AI policy.

Start Lesson 2