Identifying and Mitigating AI Risks
Effective AI governance begins with understanding risk. Different AI applications carry very different levels of it: using ChatGPT to draft a blog post creates minimal risk, while an automated system that makes final hiring decisions creates substantial risk. Understanding your organization's risk landscape allows you to prioritize governance efforts, allocate resources appropriately, and give stakeholders confidence that you're managing AI responsibly.
This lesson provides frameworks for systematically identifying AI risks, assessing their severity, and designing mitigation strategies. We'll explore different risk categories, a practical risk assessment matrix, guidance on determining organizational risk tolerance, and approaches to monitoring risks over time.
AI risks fall into several overlapping categories. Understanding these categories helps ensure you don't miss important risk dimensions when assessing your organization's AI use.
Bias and discrimination: AI systems can perpetuate or amplify historical biases, producing outcomes that disproportionately affect members of protected classes. This risk is highest when AI informs decisions affecting individuals (hiring, program eligibility, resource allocation). Bias can result from training data that reflects historical discrimination, algorithmic design choices that ignore important context, or deployment practices that don't account for how bias manifests in specific communities.
Data privacy and security: When organizations share sensitive data with AI systems, especially consumer-grade tools, they create privacy and security risks. Data transmitted to third-party servers can be breached, stored longer than necessary, or used for purposes beyond what the organization authorized. These risks are particularly acute with health information (HIPAA), education records (FERPA), and personally identifiable information about vulnerable populations.
Transparency and accountability: AI systems often function as "black boxes" where even their creators don't fully understand why they reach particular conclusions. When organizations use AI to make decisions affecting stakeholders without explaining how those decisions were made, they undermine trust, particularly in contexts where stakeholders have the right to understand and contest decisions.
Accuracy and reliability: AI systems can be inaccurate or unreliable in ways that aren't immediately obvious. Systems trained on limited data may perform poorly for populations underrepresented in that data; systems built on flawed assumptions may consistently produce incorrect outputs. Organizations that rely on inaccurate AI outputs risk making poor decisions that harm program effectiveness or mission alignment.
Intellectual property: Generative AI systems are trained on copyrighted material and may reproduce substantial portions of it in their outputs. Organizations that publish AI-generated content without understanding this may inadvertently infringe copyright or creator rights, creating legal liability and potential reputational damage.
Reputational risk: AI missteps can damage organizational reputation. Stories of organizations using biased AI systems, mishandling data, or deploying inappropriate AI applications spread rapidly through social media and can damage stakeholder relationships, funder trust, and community standing.
Regulatory and compliance risk: Emerging AI regulations create risk for organizations that don't stay current. Executive orders, state laws, funder requirements, and federal regulations increasingly impose requirements for AI governance, transparency, bias testing, and disclosure. Non-compliance can result in fines, grant denials, or contract terminations.
Operational and strategic risk: Over-reliance on AI creates operational risk if systems fail. Strategic risk emerges when AI adoption distorts organizational priorities, for example when efficiency optimization causes an organization to lose sight of effectiveness and equity. Cultures that default to "AI said so" without human judgment create decision-making risk.
A useful framework for assessing and prioritizing risks combines two dimensions: likelihood (how probable is this risk for our organization?) and impact (how severe would the consequences be if this risk materialized?).
| Likelihood | Definition | Examples |
|---|---|---|
| High | Likely to occur in the near future | Data breaches with cloud-based AI tools if you process sensitive data without security protocols; bias in hiring AI if you don't test for bias before deploying |
| Medium | Might occur; reasonable possibility | Funder requirements for AI disclosure if you don't have disclosure procedures; reputational damage if AI problems become public |
| Low | Unlikely but possible | Litigation over AI copyright infringement; regulatory fines for non-compliance with emerging AI laws |

| Impact | Definition | Examples |
|---|---|---|
| High | Severe consequences; significant damage to organization or stakeholders | Data breach exposing health information of thousands; algorithmic discrimination affecting program eligibility; major reputational damage affecting funding; legal liability resulting in damages |
| Medium | Moderate consequences; noticeable but not catastrophic damage | Grant denial due to inadequate AI governance; social media criticism about AI use; need to modify AI system due to problems discovered |
| Low | Minor consequences; limited organizational impact | Staff time spent correcting AI errors; need to revise an AI-generated document; minor efficiency loss if an AI system is unavailable temporarily |
Plot your AI applications on a 3x3 matrix using the likelihood and impact levels defined above. High-likelihood, high-impact risks demand immediate attention and strong mitigation. Risks with medium likelihood or impact require attention and monitoring. Low-likelihood, low-impact risks can often be accepted without extensive mitigation.
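If you track more than a handful of AI applications, it can help to encode the matrix in a short script. The sketch below is one minimal way to do that in Python; the applications, ratings, and score thresholds are hypothetical placeholders, not prescriptions.

```python
# Minimal sketch: encode the likelihood x impact matrix and rank applications.
# The applications and ratings below are hypothetical placeholders.

LEVELS = {"low": 1, "medium": 2, "high": 3}

applications = [
    # (application, likelihood, impact)
    ("Drafting internal blog posts with a chatbot", "low", "low"),
    ("Resume screening for hiring", "high", "high"),
    ("Summarizing client health records", "medium", "high"),
]

def priority(likelihood: str, impact: str) -> int:
    """Combine both dimensions into a single 1-9 score."""
    return LEVELS[likelihood] * LEVELS[impact]

for name, likelihood, impact in sorted(
    applications, key=lambda app: priority(app[1], app[2]), reverse=True
):
    score = priority(likelihood, impact)
    if score >= 6:
        action = "immediate attention, strong mitigation"
    elif score >= 3:
        action = "attention and monitoring"
    else:
        action = "accept, revisit periodically"
    print(f"{name}: {likelihood}/{impact} -> {score} ({action})")
```

Multiplying the two scores is one simple combination rule; some organizations instead treat any high-impact risk as high priority regardless of likelihood.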
Certain AI applications inherently carry elevated risk and warrant more rigorous governance: AI that informs consequential decisions about individuals (hiring, program eligibility, resource allocation); AI that processes sensitive or regulated data (health information, education records, personally identifiable information about vulnerable populations); AI-generated content published under your organization's name; and fully automated decisions made without human review. Knowing which of your activities fall into these categories helps you prioritize governance efforts.
Once you've identified risks, the next step is developing mitigation strategies—approaches to reduce the likelihood or impact of identified risks.
Mitigating bias and discrimination risks:
Prevention: Test AI systems for bias before deployment. Use diverse data in training. Document assumptions built into systems. Apply equity frameworks to AI design.
Detection: Monitor AI outputs over time for disparate impact. Regularly audit decisions for patterns of bias. Gather feedback from affected communities.
Remediation: Adjust AI systems when bias is detected. Implement human review requirements to catch bias before decisions are finalized. Document all bias found and corrective actions taken.
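One concrete way to operationalize the detection step is the four-fifths (80%) rule commonly used in US employment contexts: flag any group whose selection rate falls below 80% of the highest group's rate. A minimal sketch, with illustrative group labels and counts:

```python
# Sketch of a disparate-impact screen using the four-fifths (80%) rule:
# flag any group whose selection rate is below 80% of the highest rate.
# Group labels and counts are illustrative, not real data.

selections = {
    # group: (selected, total applicants)
    "group_a": (45, 100),
    "group_b": (28, 100),
}

rates = {group: sel / total for group, (sel, total) in selections.items()}
benchmark = max(rates.values())

for group, rate in rates.items():
    ratio = rate / benchmark
    status = "FLAG for review" if ratio < 0.8 else "OK"
    print(f"{group}: rate {rate:.0%}, {ratio:.0%} of top rate -> {status}")
```

Passing this check doesn't prove the absence of bias; it's a screening signal that should trigger deeper review, not a substitute for it.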
Mitigating data privacy and security risks:
Prevention: Use enterprise-grade AI tools with appropriate security certifications for sensitive data. De-identify or anonymize data before sharing with AI systems. Limit who can access sensitive data through AI tools.
Detection: Monitor for unauthorized data access. Maintain audit logs of data processed by AI systems. Implement data breach detection systems.
Remediation: Develop incident response procedures. Establish notification protocols if data is breached. Cooperate with regulatory authorities if required.
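As a small illustration of the de-identification step, the sketch below strips a few obvious identifier patterns from text before it would be sent to a third-party tool. Real de-identification, especially for HIPAA or FERPA data, requires far more than regex patterns (names, dates, indirect identifiers), so treat this as a starting point only.

```python
import re

# Minimal sketch: redact obvious identifiers before text leaves your systems.
# These patterns are simplistic illustrations; they will miss names, dates,
# and many other identifiers that real de-identification must handle.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Client Jane: reach her at jane@example.org or 555-867-5309, SSN 123-45-6789."
print(redact(note))
# -> Client Jane: reach her at [EMAIL] or [PHONE], SSN [SSN].
```

Note that the client's name survives redaction here, which is exactly why pattern matching alone is not a complete de-identification strategy.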
Mitigating transparency and accountability risks:
Prevention: Disclose AI use to stakeholders affected by AI decisions. Explain how and why you use AI. Establish processes for stakeholders to contest AI-based decisions.
Detection: Solicit feedback from stakeholders about transparency concerns. Monitor for complaints about unexplained AI decisions.
Remediation: Increase transparency when concerns emerge. Implement human review of decisions stakeholders contest. Adjust AI use if lack of transparency undermines trust.
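Transparency and contestability are easier to deliver if every AI-informed decision is logged with enough context to explain it later. The record structure below is a sketch; the field names are assumptions, not a standard schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Sketch of a decision record supporting transparency obligations: each
# AI-informed decision is logged with enough context to explain it to the
# affected stakeholder and to support a contest or appeal. Field names are
# assumptions, not a standard schema.

@dataclass
class AIDecisionRecord:
    stakeholder_id: str
    decision: str                 # e.g. "eligible" / "not eligible"
    ai_tool: str                  # which system produced the recommendation
    inputs_summary: str           # what information the AI considered
    human_reviewer: str | None    # who reviewed it, if anyone
    rationale: str                # plain-language explanation for the stakeholder
    contested: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIDecisionRecord(
    stakeholder_id="applicant-0042",
    decision="not eligible",
    ai_tool="eligibility-screening-model-v2",
    inputs_summary="household size, reported income, program criteria",
    human_reviewer="program.officer@example.org",
    rationale="Reported income exceeds the program threshold.",
)
print(record)
```

Keeping the rationale in plain language, rather than model internals, is what makes the record usable when a stakeholder asks why a decision went the way it did.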
Mitigating accuracy and reliability risks:
Prevention: Test AI systems for accuracy before deployment. Validate that systems work well for all populations you serve. Build in human review for high-stakes decisions.
Detection: Monitor AI accuracy over time. Track errors and near-misses. Gather feedback about AI performance from users and stakeholders.
Remediation: Retrain systems when accuracy deteriorates. Increase human oversight of decisions when accuracy concerns emerge. Consider retiring systems that can't achieve acceptable accuracy.
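For the detection step, a rolling accuracy monitor can make deterioration visible before it becomes a pattern of harm. The sketch below compares AI outputs against human-verified answers over a sliding window; the window size and 90% threshold are placeholders you would set to match your own risk tolerance.

```python
from collections import deque

# Sketch of a rolling accuracy monitor: compare AI outputs to human-verified
# answers and alert when accuracy drops below a threshold. The window size
# and 90% threshold are placeholders, not recommendations.

class AccuracyMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.90):
        self.outcomes = deque(maxlen=window)   # True = AI matched verified answer
        self.threshold = threshold

    def record(self, ai_output, verified_answer) -> None:
        self.outcomes.append(ai_output == verified_answer)

    def check(self) -> str:
        if len(self.outcomes) < 10:            # too few samples to judge
            return "insufficient data"
        accuracy = sum(self.outcomes) / len(self.outcomes)
        if accuracy < self.threshold:
            return f"ALERT: rolling accuracy {accuracy:.0%}; increase human review"
        return f"OK: rolling accuracy {accuracy:.0%}"

monitor = AccuracyMonitor()
for ai_out, truth in [("approve", "approve"), ("deny", "approve"), ("approve", "approve")] * 5:
    monitor.record(ai_out, truth)
print(monitor.check())  # 10 of 15 correct -> ALERT at 67%
```

The hard part in practice is the `verified_answer` stream: someone has to periodically check AI outputs against ground truth, which is why monitoring plans should budget staff time, not just tooling.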
Organizations differ in their risk tolerance: the level of risk they are willing to accept. Risk tolerance depends on organizational values, mission, stakeholder expectations, and resources. Some organizations are risk-averse and avoid AI applications that carry any significant risk. Others are risk-tolerant and embrace AI applications as long as proper mitigations are in place.
Clarifying your organization's risk tolerance helps you make consistent decisions about AI use and communicate expectations to staff and stakeholders. Consider which of these example postures best fits your organization:
Conservative Risk Tolerance: We prioritize avoiding harm and protecting stakeholder trust above all else. We avoid AI applications involving high-risk decisions or sensitive data unless we have enterprise-grade security and have conducted thorough bias testing. We disclose all AI use to stakeholders and maintain human review of all AI-informed decisions.
Moderate Risk Tolerance: We embrace AI innovations where benefits clearly outweigh risks and we have appropriate mitigations in place. High-risk applications require board approval and regular oversight. Mitigations must be proportionate to identified risks.
Higher Risk Tolerance: We pursue AI innovations aggressively, provided we maintain governance oversight. We're willing to accept moderate risks if the potential benefits are significant and we can monitor and correct problems quickly.
Risk assessment isn't a one-time exercise. Organizations need processes to monitor risks continuously, identify new risks as they emerge, and adjust mitigations as circumstances change.
In the next lesson, you'll learn how to communicate with funders about your AI use and meet disclosure requirements.