Introduction
One reason many nonprofits lack AI governance is that developing comprehensive policies feels overwhelming. Where do you start? What should policies include? How do you ensure they're appropriate for your organization's size and context? This lesson provides policy templates and customization guidance for different organizational sizes, along with practical processes for stakeholder input, implementation, and ongoing maintenance.
Templates by Organization Size
Policy needs and complexity vary significantly by organizational size. A small nonprofit with 5 staff members needs a different policy approach than a mid-sized nonprofit with 50 staff or a large nonprofit with hundreds of employees. The following templates are scaled to different contexts.
Micro Organization (1-5 staff)
Focus on simplicity and clarity. Policies can be concise and incorporated into existing staff handbooks. Key elements:
- Simple permitted/prohibited use list: One page specifying what AI tools are OK to use and what's not allowed
- Data protection rules: Clear instruction not to share sensitive data with consumer AI tools
- Tools inventory: Simple list of approved tools and their approved uses
- Leadership accountability: Executive director responsible for oversight; annual board discussion of AI use
- Staff training: Brief annual training on policies and safe AI use
Small Organization (6-25 staff)
Balance simplicity with adequate governance. Develop a standalone AI governance policy with these elements:
- Comprehensive policy document: 3-5 page document covering scope, principles, permitted/prohibited uses, tool approval process, data handling, and monitoring
- Tool approval process: Lightweight approval process (e.g., ED approves common tools; staff request approval for unusual tools)
- Data sensitivity classification: Clear guidance on which data can be used with which tools
- Board oversight: Annual AI governance discussion with board; designated staff member (often ED) responsible for oversight
- Staff training: Annual training on policies; department-specific guidance for tool users
- Monitoring: Annual audit of AI use and compliance; incident reporting procedures
Mid-Sized Organization (26-100 staff)
Develop formal, comprehensive policies with more sophisticated governance structure:
- Comprehensive policy framework: 5-10 page policy document plus supporting procedures and tools lists
- Designated AI governance lead: CFO, CIO, or dedicated governance role responsible for day-to-day oversight
- Tool approval process: Tiered approval (low-risk tools approved by IT, high-risk tools require governance committee review)
- Risk assessment process: Structured risk assessment for any AI use affecting individuals or processing sensitive data
- Board committee oversight: Designated governance or technology committee provides quarterly oversight
- Comprehensive training: All-staff awareness training plus role-specific training for tool users and approvers
- Documented monitoring: Quarterly risk assessments, annual comprehensive audits, incident investigation procedures
Large Organization (100+ staff)
Implement comprehensive, sophisticated governance with dedicated resources:
- AI governance program: Dedicated AI governance team or department
- Comprehensive policy framework: Detailed policies with supporting procedures, tool approval criteria, risk assessment methodologies, incident response procedures
- Chief AI Officer or equivalent: Executive responsible for organization-wide AI governance
- Sophisticated tool approval process: Multi-level review based on risk; integration with procurement and security teams
- Risk management program: Systematic identification, assessment, and monitoring of AI risks; regular risk reviews
- Board AI committee: Dedicated committee providing oversight; quarterly reports to full board
- Comprehensive training program: Tiered training by role; leadership development on AI governance; specialized training for tool users
- Ongoing monitoring and audit: Continuous monitoring systems; quarterly compliance assessments; annual independent audits
Customization Checklist
As you develop or adapt policies for your organization, use this checklist to ensure you've addressed key customization questions:
Policy Development Checklist
- Have we clearly defined which AI applications and tools fall within policy scope?
- Do our policies reflect organizational values and mission?
- Have we clearly listed uses we prohibit, restrict, and permit?
- Is our tool approval process proportionate to organizational size and complexity?
- Have we clearly defined who has responsibility for AI governance?
- Do our data handling requirements align with our actual data flows and compliance obligations?
- Have we specified what documentation we'll maintain?
- Is our disclosure language appropriate for our funder base?
- Have we specified training requirements and who receives training?
- Do we have clear monitoring and compliance procedures?
- Have we defined a policy review cycle (e.g., annual review)?
- Have we involved key stakeholders (leadership, board, staff) in policy development?
Stakeholder Input and Buy-In
Policies developed without stakeholder input often fail because staff don't understand them or don't support them. Building stakeholder input into policy development increases quality and adoption.
Stakeholder Input Process
- Board input: Present AI governance concept to board. Discuss concerns and values. Get buy-in before developing detailed policies.
- Leadership input: Meet with department heads and key leaders. Understand where AI is currently used, what risks they perceive, and what policies would work for their departments.
- Staff input: Survey or conduct focus groups with staff who use AI. What tools are helpful? What concerns do they have? What guidance would they find useful?
- Community input (if applicable): For organizations that serve specific communities, consider whether seeking community input on AI use aligns with values. Transparency about AI governance can strengthen community trust.
- Funder input (if applicable): If key funders have expressed concerns about AI governance, ensure policies address those concerns.
Change Management and Implementation
Developing strong policies is only the first step. Successful implementation requires thoughtful change management. Many organizations draft policies and then struggle with adoption because they didn't manage the transition effectively.
Implementation Best Practices
Phased Implementation Approach
Phase 1 (Month 1): Announce AI governance initiative. Share why it matters. Provide high-level overview of policy approach. Address initial staff concerns.
Phase 2 (Months 1-2): Provide training. Conduct all-staff awareness training. Provide role-specific training for tool users. Make policies accessible and easy to reference.
Phase 3 (Months 2-3): Implement tool approval and monitoring. Begin approving tools under the new process. Start monitoring compliance. Provide support and guidance.
Phase 4 (Months 3+): Monitor and adjust. Track compliance. Listen to feedback. Make refinements to policies based on practical experience. Communicate successes and learnings.
Ongoing: Maintain momentum. Regular reminders about policies. Periodic refresher training. Celebrate instances where good governance prevented problems.
Making Policies Accessible
Policies that are difficult to access or understand won't be followed. Make your policies accessible:
- Plain language: Write in clear, accessible language. Avoid jargon. Use examples staff can understand.
- Multiple formats: Provide policies in written format, as a brief summary, as a flowchart for decision-making, as FAQs, etc. Different people learn differently.
- Prominent placement: Include in staff handbook, make available on intranet, share in onboarding materials.
- Decision support: Make it easy for staff to determine whether a specific use is permitted. Provide decision trees or contact information for questions.
- Regular reminders: AI governance shouldn't be a one-time communication. Regular reminders (monthly or quarterly) keep policies top-of-mind.
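The decision-support idea above can also be expressed in code. The sketch below is purely illustrative (the tool names, data-sensitivity categories, and rules are invented, not recommendations); most organizations would embed the same logic in a flowchart or intranet form rather than a script:

```python
# Hypothetical "may I use this tool with this data?" helper.
# Tool names, categories, and rules below are illustrative only.

APPROVED_TOOLS = {
    # tool name -> data sensitivity levels it is approved for
    "grammar_assistant": {"public", "internal"},
    "enterprise_chatbot": {"public", "internal", "confidential"},
}

SENSITIVITY_LEVELS = {"public", "internal", "confidential", "restricted"}

def check_use(tool: str, data_sensitivity: str) -> str:
    """Return a plain-language answer staff can act on."""
    if data_sensitivity not in SENSITIVITY_LEVELS:
        return f"Unknown data category '{data_sensitivity}': ask the governance lead."
    approved_for = APPROVED_TOOLS.get(tool)
    if approved_for is None:
        return f"'{tool}' is not an approved tool: request approval first."
    if data_sensitivity in approved_for:
        return "Permitted."
    return f"Not permitted with {data_sensitivity} data: ask the governance lead."
```

Whatever form the decision aid takes, the point is the same: staff get an immediate, unambiguous answer, and every "no" routes them to a named person rather than a dead end.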
Policy Maintenance and Updates
Policies need ongoing maintenance and periodic updating as technology evolves, organizational circumstances change, and lessons are learned.
Annual Policy Review Process
Annual Review Checklist
- What has changed? New AI tools? New funder requirements? New regulations? New organizational applications? Updates to existing tools?
- What have we learned? Incidents or near-misses? Compliance gaps? Staff feedback on what's working and what's not?
- What needs refinement? Are policies clear? Are they working as intended? Do they need to be more restrictive or more permissive? Are there gaps?
- What's the update plan? Document the specific changes needed. Update training materials. Communicate the updated policies to staff.
Accountability and Compliance Tools
Beyond written policies, organizations benefit from tools that support accountability and compliance monitoring:
Suggested Accountability Tools
- AI Tool Inventory: Spreadsheet listing all approved tools, approved uses, who's authorized to use them, and review dates
- Approval Log: Record of all tool approval decisions, who approved them, when, and the basis for approval
- Incident Log: Record of any incidents, near-misses, or concerns raised regarding AI use
- Training Records: Documentation showing which staff have received training and when
- Risk Assessment Register: For organizations assessing AI risks formally, a register documenting assessed risks, their status, mitigations, and monitoring
- Data Processing Agreement Tracking: If using enterprise AI tools with sensitive data, tracking and maintaining data processing agreements
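A spreadsheet is usually sufficient for the tool inventory, but the same records can be kept in structured form so that reviews don't get missed. The sketch below is a minimal, hypothetical example (field names and sample tools are invented) showing one way to flag tools whose scheduled review date has passed:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ToolRecord:
    """One row of the AI tool inventory (illustrative fields)."""
    name: str
    approved_uses: list[str]
    authorized_roles: list[str]
    next_review: date

def overdue_reviews(inventory: list[ToolRecord], today: date) -> list[str]:
    """Names of tools whose scheduled review date has passed."""
    return [t.name for t in inventory if t.next_review < today]

inventory = [
    ToolRecord("drafting_assistant", ["donor letters"], ["development staff"], date(2025, 1, 15)),
    ToolRecord("transcription_service", ["meeting notes"], ["all staff"], date(2026, 6, 1)),
]

print(overdue_reviews(inventory, date(2025, 6, 1)))  # → ['drafting_assistant']
```

The same pattern extends naturally to the approval log and incident log: a few well-chosen fields plus a date column make annual audits a query rather than an archaeology project.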
Sample Policy Structure
Here's a suggested structure for an organizational AI governance policy:
Model AI Governance Policy Structure
1. Purpose and Authority - Why the policy exists and who has approved it
2. Scope - What systems and applications the policy covers
3. Organizational Principles - Values guiding AI use
4. Permitted Uses - AI applications that advance mission with acceptable risk
5. Restricted Uses - Applications requiring approval and oversight
6. Prohibited Uses - Uses not permitted under any circumstances
7. Tool Approval Process - How the organization evaluates and approves new tools
8. Roles and Responsibilities - Who is responsible for what aspects of governance
9. Data Handling - How different data types are handled with AI tools
10. Transparency and Disclosure - How the organization communicates about AI use
11. Training and Capability - Training requirements by role
12. Monitoring and Audit - How compliance is monitored and assessed
13. Incident Response - Procedures if something goes wrong
14. Policy Review and Updates - When and how the policy is reviewed
Moving Forward
Effective AI governance doesn't require perfection. It requires intentionality, stakeholder engagement, and commitment to continuous improvement. Start with appropriate policies for your organization's size and context. Implement with clear communication and training. Monitor, learn, and refine. Build a culture where staff understand that AI governance protects the organization, enables responsible innovation, and serves mission.
The remaining chapters will help you implement AI governance in practice—assessing readiness, implementing strategically, managing change, and ensuring equity in AI-assisted work. The foundations you've built in this chapter support all of that practical work.
Ready for Strategic Implementation?
Move to Chapter 12 to learn how to integrate AI governance into your organization's strategic planning and implementation.
Start Chapter 12