International Standards: ISO 42001, OECD AI Principles

55 minutes • Video + Research Lab

Introduction: AI Governance Beyond U.S. Borders

AI governance is inherently global. Artificial intelligence systems developed in the United States are deployed internationally. U.S. nonprofits increasingly operate across borders, partnering with international organizations and serving diaspora communities. International funders shape nonprofit expectations around AI governance. Companies and platforms operating globally must navigate divergent regulations across jurisdictions. Understanding international AI standards, principles, and governance approaches is essential for nonprofit leaders working in this increasingly borderless AI landscape.

This lesson explores major international AI standards and principles, examines how different jurisdictions approach AI governance, and discusses implications for nonprofit organizations operating globally or working with international partners. We'll examine ISO 42001, OECD AI Principles, UN Sustainable Development Goals connections to AI, UNESCO recommendations, the EU AI Act, APEC frameworks, and cross-border data governance issues.

ISO 42001: AI Management System Standard

ISO (International Organization for Standardization) develops technical and management standards adopted by organizations worldwide. ISO 42001, published in 2023, is the first international standard for AI management systems. The standard provides a framework for managing AI risks throughout the AI system lifecycle, from concept and planning through development, deployment, monitoring, and modification.

ISO 42001 covers: (1) Organizational governance and leadership commitment to responsible AI; (2) Risk management processes for identifying, assessing, and mitigating AI risks; (3) Performance monitoring and evaluation throughout the AI system lifecycle; (4) Stakeholder engagement and transparency practices; (5) Documentation and record-keeping for demonstrating compliance; (6) Corrective actions when issues are identified; (7) Periodic review and continual improvement.

Unlike prescriptive regulations that specify exactly what organizations must do, ISO standards provide flexibility, allowing organizations to tailor implementation to their specific context while meeting core management principles. For nonprofits, ISO 42001 provides a recognized international framework for demonstrating responsible AI governance to international funders, partners, and regulators. Many multinational organizations now reference ISO 42001 in their vendor requirements, meaning nonprofits implementing ISO 42001 have a competitive advantage in partnerships with international organizations.

Implementing ISO 42001 involves: (1) Conducting AI risk assessment; (2) Developing AI governance policies aligned with organizational values; (3) Establishing roles and responsibilities for AI oversight; (4) Documenting AI systems and their risks; (5) Implementing monitoring and evaluation processes; (6) Training staff on AI governance requirements; (7) Regular internal audits and management reviews; (8) Demonstrating continuous improvement. Organizations can pursue formal ISO 42001 certification through third-party auditors, though many organizations implement the framework without pursuing formal certification.

Key Takeaway

ISO 42001 is the first international standard for AI management systems, providing a flexible framework for organizations to govern AI risk throughout the AI lifecycle. It's increasingly referenced in international partnerships and funder requirements, making it valuable for nonprofits engaged internationally.

OECD AI Principles and Recommendation

The Organization for Economic Cooperation and Development (OECD), comprising 38 member countries including the United States, develops policy recommendations that shape member countries' approaches to key issues. The OECD AI Principles, adopted in 2019 as part of the OECD Recommendation on Artificial Intelligence and subsequently updated, establish core values and governance principles that influence how member countries approach AI regulation.

The OECD AI Principles include: (1) AI should benefit people and the planet: AI systems should be designed and deployed to benefit people and contribute to sustainable development; (2) Human-centered values and fairness: AI should respect human rights, human autonomy, and the rule of law; promote equality and non-discrimination; and support human agency and democratic institutions; (3) Transparency and explainability: AI systems should be transparent and explainable such that humans can understand and challenge AI decisions; (4) Robustness, security, and safety: AI systems should be designed to withstand adversarial attacks and perform reliably even under unexpected conditions; (5) Accountability: Organizations deploying AI should be accountable for AI system outcomes and able to demonstrate responsible practices.

The OECD Recommendation on AI extends these principles with governance recommendations for member countries: establishing national AI strategies; investing in AI research and skills development; engaging diverse stakeholders in AI policy; fostering interoperability and data sharing while protecting privacy; ensuring responsible business conduct; and addressing labor market impacts of AI.

For nonprofits, the OECD AI Principles provide internationally recognized language for articulating AI governance commitments. When nonprofits reference OECD principles in their AI policies, they signal alignment with internationally endorsed principles that OECD member countries have committed to supporting. This can be valuable when engaging with international funders or government agencies influenced by OECD recommendations.

UN Sustainable Development Goals and AI

The United Nations Sustainable Development Goals (SDGs)—17 global goals adopted by all UN member states in 2015—provide a universal call to action to end poverty, protect the planet, and ensure peace and prosperity. The SDGs have become the organizing framework for global development efforts and nonprofit work worldwide. Understanding how AI connects to SDG achievement is essential for nonprofits working on development, humanitarian, and social impact issues.

AI offers significant opportunities for accelerating progress toward SDGs: AI-powered diagnostics can improve health outcomes (SDG 3); personalized learning systems can expand educational access (SDG 4); AI can optimize agricultural productivity addressing food security (SDG 2); AI tools can improve resource efficiency supporting sustainable consumption (SDG 12); AI can help identify environmental degradation and predict climate impacts (SDG 13). However, AI also poses risks: biased AI systems can perpetuate inequality (SDG 10); AI-driven automation can displace workers (SDG 8); resource-intensive AI systems can increase environmental impact (SDG 13).

Nonprofits working on SDG-related issues should understand AI's role in their sector, both opportunities and risks. When deploying AI to support SDG progress, nonprofits should assess alignment with SDGs and ensure AI use doesn't undermine progress on other SDGs. For example, implementing AI to optimize program delivery (positive for multiple SDGs) should not sacrifice data privacy (negative for SDG 16: Peace, Justice and Strong Institutions).

UNESCO Recommendations on AI Ethics

UNESCO, the United Nations agency for education, science, and culture, published the Recommendation on the Ethics of Artificial Intelligence in 2021, adopted by all UNESCO member states. This is the first global normative instrument on AI ethics, providing recommendations for member states on how to govern AI to ensure it benefits humanity while minimizing harms.

UNESCO's AI ethics recommendation emphasizes: (1) Transparency and accountability in AI systems; (2) Inclusion and stakeholder participation in AI governance; (3) Protection of human rights and freedoms; (4) Cultural diversity recognition; (5) Environmental responsibility; (6) Education and digital literacy; (7) Equitable access to AI benefits; (8) Labor considerations and worker protection; (9) Protection of vulnerable populations.

For nonprofits, particularly those working on education, culture, or human rights, UNESCO's recommendation provides international guidance aligning with nonprofit missions. Many nonprofits find that UNESCO's emphasis on inclusion, equity, and protection of vulnerable populations resonates with their organizational values.

The EU AI Act and Extraterritorial Implications

The European Union's AI Act, adopted in 2024, represents the most comprehensive AI regulatory framework globally. The AI Act categorizes AI systems by risk level and imposes requirements on organizations deploying AI in the EU. It also applies to organizations outside the EU that place AI systems on the EU market or whose AI systems produce outputs used in the EU, creating extraterritorial effects that influence how organizations globally approach AI governance.

The EU AI Act establishes a risk-based framework: (1) Prohibited AI: Certain AI applications (real-time remote biometric identification in public spaces, subject to narrow law-enforcement exceptions; manipulative systems; social scoring) are prohibited entirely; (2) High-Risk AI: AI systems with significant potential for harm (hiring systems, criminal justice applications, benefit eligibility systems) must undergo assessment, implement safeguards, and maintain documentation; (3) Limited-Risk AI: Certain AI systems (such as chatbots and AI-generated content) must provide transparency to users; (4) Minimal-Risk AI: Other AI systems face minimal regulation.
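The four tiers above can be sketched as a simple lookup. This is an illustrative teaching aid only, not a legal classification: the tier names follow the lesson, while the example use cases and the `classify_risk` helper are hypothetical simplifications.

```python
# Simplified illustration of the EU AI Act's four risk tiers.
# Tier names mirror the lesson; the example use cases are a
# hypothetical, non-exhaustive paraphrase, not legal definitions.
RISK_TIERS = {
    "prohibited": {"social scoring", "real-time public facial recognition"},
    "high": {"hiring system", "benefit eligibility system", "criminal justice tool"},
    "limited": {"chatbot", "ai-generated content"},
}

def classify_risk(use_case: str) -> str:
    """Return the simplified risk tier for an example use case."""
    use_case = use_case.lower()
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    # Anything not matched above falls into the lightly regulated tier.
    return "minimal"

print(classify_risk("hiring system"))            # high
print(classify_risk("grant-writing assistant"))  # minimal
```

In practice, classification under the Act depends on detailed legal criteria and context of use, which is why the lesson recommends asking vendors about their own compliance assessments.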

For nonprofits serving EU residents or partnering with EU organizations, understanding the EU AI Act is increasingly important. EU-based nonprofits face direct regulatory requirements. Non-EU nonprofits serving EU residents through online platforms or data processing must ensure compliance with applicable requirements. Additionally, nonprofits working with technology vendors should inquire about vendor compliance with EU AI Act requirements, as this is increasingly a standard due diligence question.

Apply This

Review your nonprofit's current and planned AI use. For each AI system, assess: (1) Which international standards or principles might apply (ISO 42001, OECD principles, UNESCO recommendations, EU AI Act)? (2) Does your nonprofit operate internationally or serve international populations? (3) Which international funders or partners might reference these standards in their requirements? Identify one area where alignment with international standards could strengthen your nonprofit's AI governance or competitive positioning.

APEC Privacy Framework and Cross-Border Data Flows

The Asia-Pacific Economic Cooperation (APEC) forum includes 21 member economies representing roughly 60% of global GDP. Building on the APEC Privacy Framework, APEC developed the Cross-Border Privacy Rules (CBPR) System, which provides mechanisms for transferring data across borders while protecting privacy. For nonprofits operating in Asia-Pacific regions or partnering with organizations there, understanding APEC frameworks is important.

Unlike the GDPR's strict limitations on international data transfers, the APEC approach emphasizes accountability and transparency while allowing greater flexibility for international operations. APEC member economies include both developed nations (U.S., Japan, Australia) and emerging markets (Vietnam, Thailand, Indonesia), creating diverse regulatory environments. Nonprofits operating across APEC economies should implement privacy protections aligned with the CBPR System while accommodating different national regulatory requirements.

Harmonization Challenges and Divergent Approaches

International AI governance is characterized by significant divergence rather than harmonization. The EU takes a comprehensive regulatory approach, establishing rules applicable across member states. The United States emphasizes sectoral regulation with different agencies addressing AI in specific domains (healthcare, employment, consumer protection). China prioritizes AI development with governance targeted at specific risks (content moderation, surveillance, data security). Other countries adopt approaches between these poles.

This divergence creates challenges for multinational organizations: complying with multiple conflicting regulatory regimes is expensive and complex. It creates incentives for organizations to adopt highest-standard practices globally (if an organization must comply with strict EU requirements, adopting those standards globally is often more efficient than maintaining different practices in different jurisdictions). However, it can also lead to fragmentation where organizations tailor practices to different regions, potentially allowing practices prohibited in strict jurisdictions to continue elsewhere.

Implementing International Standards in Nonprofit Context

Nonprofits often have limited resources to navigate complex, divergent international requirements. Strategies for managing international standards implementation include: (1) Baseline Standards Approach: Implement practices aligned with strictest applicable standard (usually GDPR or EU AI Act requirements), extending those practices globally; (2) Sectoral Approach: Tailor implementation to specific regions where nonprofit operates; (3) Vendor Requirements: Require vendors and partners to maintain compliance with applicable international standards; (4) Professional Support: Engage international compliance consultants or legal advisors for guidance on complex requirements; (5) Collaborative Learning: Participate in nonprofit networks sharing experiences implementing international standards.

Global Nonprofit Networks and International Best Practices

International nonprofit networks increasingly serve as venues for sharing AI governance best practices. Networks like Transparency International, Human Rights Watch, Amnesty International, and Global Witness share approaches to addressing AI's risks. Online platforms and associations facilitate knowledge sharing about international standards and implementation approaches. Participating in these networks provides nonprofits with peer learning, access to resources, and opportunity to contribute their perspective to international dialogue.

Preparing for an Evolving International Landscape

International AI governance continues to evolve rapidly. New standards will be developed; regulatory requirements will change; international coordination mechanisms will emerge. Nonprofit leaders should adopt practices enabling them to adapt: (1) Monitoring: Stay informed about emerging international standards and regulatory developments affecting your sector; (2) Flexibility: Design AI governance and system implementation with flexibility to accommodate evolving requirements; (3) Community Engagement: Participate in networks and associations tracking international developments; (4) Stakeholder Consultation: Maintain dialogue with international partners and funders about their governance expectations; (5) Periodic Review: Regularly review and update AI governance policies to ensure alignment with current international standards and best practices.

Ready to Advance Your Knowledge?

Continue building your expertise in AI governance, standards, and nonprofit leadership with the CAGP Level 5 certification program.

Explore the Program