What Boards Need to Know About AI
Nonprofit boards are responsible for ensuring organizational stewardship, strategic direction, and accountability. Increasingly, this includes oversight of artificial intelligence. As AI becomes embedded in nonprofit operations—from grant writing to program delivery to financial management—boards must understand AI governance as a core fiduciary responsibility, not a technical detail to delegate entirely to staff.
This lesson explores what board-level AI governance looks like, what questions boards should ask, what decisions boards should make, and how boards can provide meaningful oversight without requiring deep technical expertise. We'll also examine board resolution templates that formalize AI governance commitments.
Some board members feel that AI is too technical for board-level discussion, viewing it as an IT issue. This perspective misses the governance reality. AI governance is fundamentally about mission alignment, risk management, accountability for decisions that affect stakeholders, equitable outcomes, data stewardship, and organizational trust.
These are quintessentially board-level concerns. AI governance is a governance issue that happens to involve technology, not a technology issue that happens to involve governance.
Courts and regulators increasingly examine whether boards exercised reasonable oversight of emerging risks. A board that failed to discuss AI governance while 92% of staff use AI tools could be viewed as failing in its duty of care. Board minutes documenting AI governance discussions demonstrate that boards took their oversight responsibilities seriously.
Boards don't need to understand how AI systems work internally. They do need to ask strategic questions that help assess whether AI is being used responsibly. Key questions include:
Where is AI being used in our organization? Does each application advance our mission, or are we using AI primarily for efficiency without considering mission impact? Are there AI uses that might inadvertently distort our mission focus? For example, could using AI to optimize fundraising efficiency cause us to prioritize funding sustainability over program quality?
Have we conducted a comprehensive assessment of AI-related risks? What are our highest-risk AI applications? What safeguards do we have in place for high-risk uses? What would happen if a critical AI system failed? What liability could we face if AI systems behaved inappropriately or produced biased results?
Do we have clear policies governing AI acquisition and use? Who is authorized to approve new AI tools? Who is responsible for monitoring AI performance? How do we ensure accountability if problems occur? Have we assigned leadership responsibility for AI governance?
Are we using AI in ways that could create biased or inequitable outcomes? If AI is used for grant writing, program eligibility, hiring, or resource allocation, have we tested these systems for bias? Do we have processes to detect and correct bias if it occurs?
What safeguards protect sensitive organizational and stakeholder data when we use AI systems? Are we using enterprise tools with appropriate security, or consumer tools that may expose data? Have we experienced any data breaches related to AI tools? What compliance requirements apply to our AI use?
Do we disclose our AI use to grant funders as required? Do we tell community members when AI influences decisions affecting them? Are we transparent about the limitations and risks of AI in our programs? Could lack of disclosure damage stakeholder relationships if discovered?
Does our staff have adequate training and expertise to use AI responsibly? Have we provided training on our AI policies? Do staff members understand which AI tools are approved and which are prohibited? Do we have sufficient expertise internally, or do we need to hire or consult with outside experts?
Have we reviewed our insurance policies to understand coverage for AI-related incidents? What compliance requirements apply to our AI use (HIPAA, GDPR, ADA, state AI disclosure laws)? Are we in compliance with these requirements? Do we have incident response procedures if something goes wrong?
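For boards that want staff to answer these questions in a consistent format, a lightweight AI use inventory can help. The sketch below is purely illustrative, not a prescribed standard; the field names, risk tiers, and example entries are assumptions to be replaced with your organization's own categories. It shows one way staff might track each AI use so the board's questions about mission fit, risk, bias testing, and data sensitivity have concrete answers.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"  # e.g., uses that affect eligibility, hiring, or resource allocation

@dataclass
class AIUseRecord:
    """One row in a hypothetical AI use inventory reported to the board."""
    tool: str                     # name of the AI tool or system
    purpose: str                  # what it is used for
    mission_link: str             # how the use advances (or could distort) the mission
    risk_level: RiskLevel         # staff's assessed risk tier
    handles_sensitive_data: bool  # client, donor, or employee data involved?
    bias_tested: bool             # has the use been reviewed for biased outcomes?
    approved_by: str              # who authorized the tool under the AI policy

def board_report(inventory: list[AIUseRecord]) -> list[AIUseRecord]:
    """Surface the high-risk or untested uses a board would want flagged first."""
    return [r for r in inventory
            if r.risk_level is RiskLevel.HIGH or not r.bias_tested]

# Hypothetical example entries
inventory = [
    AIUseRecord("Drafting assistant", "First drafts of grant proposals",
                "Frees staff time for program work", RiskLevel.LOW,
                handles_sensitive_data=False, bias_tested=True,
                approved_by="Director of Development"),
    AIUseRecord("Screening model", "Initial triage of program applications",
                "Speeds intake but affects who receives services", RiskLevel.HIGH,
                handles_sensitive_data=True, bias_tested=False,
                approved_by="Program Director"),
]

for record in board_report(inventory):
    print(f"Flag for board review: {record.tool} ({record.risk_level.value} risk)")
```

Even kept in a spreadsheet rather than code, a structure like this turns the board's key questions into a recurring report rather than a one-time conversation.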
Boards can address AI governance in several ways. Some establish a dedicated technology or AI committee. Others address AI through existing committees (governance committee, audit committee, or risk committee). Some organizations with smaller boards address AI in full board discussions.
If your organization establishes a dedicated technology or AI committee, include board members with diverse expertise—not just those with technical backgrounds. You want perspectives from governance, mission, finance, and program areas. The committee should review AI governance policies and recommend them for board approval, monitor significant AI-related risks and incidents, receive regular implementation reports from staff, and bring policy updates and emerging issues to the full board.
Organizations without dedicated technology committees often integrate AI governance into the governance/board relations committee, which typically oversees policies, compliance, and organizational effectiveness. This committee can fold AI policy review into its existing policy cycle, monitor compliance with the AI governance policy, and escalate significant AI-related issues to the full board.
Boards should receive regular reports on AI governance status. Annual board agendas should include a review of the AI governance policy, an updated assessment of AI-related risks, and discussion of emerging AI governance issues.
Quarterly committee reports to the full board keep all directors informed and create accountability. Board minutes should document discussions of significant AI decisions, demonstrating that governance was addressed.
Not all AI decisions require board approval. The board should focus on high-level governance, approving policies and addressing significant risks, while delegating operational decisions to management. A useful framework distinguishes:
Board Approves: AI governance policies, AI budget allocations above specified thresholds, responses to significant AI-related incidents or risks, major changes to AI strategy
Board Is Informed: Quarterly AI governance status, new AI tools approved under existing policies, staff training completion, funder disclosures
Management Decides: Operational approval of new AI tools within policy limits, staff training implementation, day-to-day policy compliance monitoring
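If an organization wanted to make this split operational, for example inside an internal tool-request or purchasing workflow, it could encode the matrix directly. The sketch below is a hypothetical illustration: the decision categories, the budget threshold, and the dollar figure are assumptions to be replaced with the values in your own policy. It simply shows how a request could be routed to board approval, board information, or management decision.

```python
from enum import Enum

class Authority(Enum):
    BOARD_APPROVES = "board approves"
    BOARD_INFORMED = "board is informed"
    MANAGEMENT_DECIDES = "management decides"

# Hypothetical threshold; substitute the figure specified in your AI policy.
BOARD_BUDGET_THRESHOLD = 25_000

def route_decision(decision_type: str, amount: float = 0.0) -> Authority:
    """Map a decision to the authority level described in the framework above."""
    if decision_type in {"governance_policy", "significant_incident", "strategy_change"}:
        return Authority.BOARD_APPROVES
    if decision_type == "budget_allocation":
        # Allocations above the threshold go to the board; smaller ones stay with management.
        return (Authority.BOARD_APPROVES if amount >= BOARD_BUDGET_THRESHOLD
                else Authority.MANAGEMENT_DECIDES)
    if decision_type in {"new_tool_within_policy", "training_completion", "funder_disclosure"}:
        # Management acts, then reports these items to the board.
        return Authority.BOARD_INFORMED
    # Day-to-day operational choices default to management.
    return Authority.MANAGEMENT_DECIDES

# Example: a $40,000 AI budget request exceeds the hypothetical threshold.
print(route_decision("budget_allocation", amount=40_000).value)  # -> "board approves"
```

The point is not the code itself but the discipline it represents: every AI decision should have a known owner before the decision arises.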
Many boards formalize their commitment to AI governance through a board resolution. A resolution demonstrates organizational commitment, clarifies authority and responsibility, and creates accountability. Here's a template boards can adapt:
WHEREAS, the Board recognizes that artificial intelligence systems are increasingly used throughout the organization in support of mission delivery, operations, and strategic objectives;
WHEREAS, the Board acknowledges the potential benefits of responsible AI use while recognizing the governance, compliance, and risk management responsibilities that accompany AI deployment;
WHEREAS, the Board commits to ensuring that AI use aligns with organizational values, complies with applicable laws and regulations, and maintains stakeholder trust;
NOW, THEREFORE, BE IT RESOLVED, that the Board of Directors of [Organization Name] adopts the following AI Governance Policy and commits to its implementation and ongoing oversight:
1. The Board adopts the Organizational AI Governance Policy as presented, which establishes principles, permitted and prohibited uses, tool approval processes, data handling requirements, and oversight mechanisms for AI systems.
2. The Board designates [Title] as responsible for day-to-day AI governance implementation and grants authority to approve new AI tools within parameters specified in the policy.
3. The Board designates the [Committee] as responsible for oversight of AI governance, receipt of quarterly reports, and recommendation of policy updates.
4. The Board commits to annual review of this policy, assessment of AI-related risks, and discussion of emerging AI governance issues at board meetings.
5. Implementation of this policy shall begin [date], with full compliance expected by [date].
Boards need not include technical experts to provide meaningful AI oversight. What boards do need is sufficient literacy to understand risks, ask informed questions, and make governance decisions. Consider periodic briefings from staff or outside advisors, short case studies circulated before meetings, and a standing agenda item that keeps AI governance visible.
Board members don't need to understand how neural networks function. They do need to understand that AI systems can embed bias, that AI governance is a fiduciary responsibility, and what questions to ask to assess whether the organization is managing AI appropriately. Focus board AI conversations on governance and risk management, not on technical explanations.
Organizations frequently encounter predictable challenges in board AI governance. Anticipating them helps you navigate them effectively:
Challenge: AI discussions drift into technical jargon that leaves directors behind. Solution: Insist that AI governance discussions use plain language. If staff or consultants use unexplained technical jargon, ask them to explain in terms board members understand. Good governance requires clear communication.
Challenge: Board members feel they don't know enough about AI to provide oversight. Solution: Provide regular, digestible education. Brief case studies, short videos, or quarterly expert panels help build collective understanding. Acknowledge that AI is complex and evolving, and frame ongoing learning as appropriate.
Challenge: Risk discussions stay abstract and hypothetical. Solution: Ground risk discussions in organizational context. Instead of abstract discussions of "potential bias," examine whether bias is likely in your specific applications. Prioritize managing real risks over theoretical ones.
Challenge: The board assumes that delegating AI implementation to staff ends its responsibility. Solution: Boards remain accountable even when delegating implementation to staff. Boards should ask probing questions, require evidence of governance, and maintain oversight. Delegation is not abdication.
If your board hasn't yet addressed AI governance, here are practical steps to begin:
1. Put AI governance on an upcoming board agenda and ask staff where AI is already being used.
2. Assign oversight responsibility to an existing committee or designate a dedicated committee, as described above.
3. Adopt or update an AI governance policy, formalizing the commitment with a board resolution such as the template above if appropriate.
4. Establish a reporting cadence: quarterly committee updates to the full board and an annual policy and risk review.
The next lesson provides frameworks for identifying and assessing AI-related risks.