Risk Assessment Frameworks

Identifying and Mitigating AI Risks

40-minute read

Introduction

Effective AI governance begins with understanding risks. Different AI applications carry different risks. Using ChatGPT to draft a blog post creates minimal risk. Using an automated hiring system that makes final hiring decisions creates substantial risk. Understanding your organization's risk landscape allows you to prioritize governance efforts, allocate resources appropriately, and create confidence among stakeholders that you're managing AI responsibly.

This lesson provides frameworks for systematically identifying AI risks, assessing their severity, and designing mitigation strategies. We'll explore different risk categories, a practical risk assessment matrix, guidance on determining organizational risk tolerance, and approaches to monitoring risks over time.

Categories of AI Risk

AI risks fall into several overlapping categories. Understanding these categories helps ensure you don't miss important risk dimensions when assessing your organization's AI use.

1. Bias and Discrimination Risk

AI systems can perpetuate or amplify historical biases, leading to outcomes that disproportionately affect members of protected classes. This risk is highest when AI informs decisions affecting individuals (hiring, program eligibility, resource allocation). Bias can result from training data that reflects historical discrimination, algorithmic design choices that ignore important context, or deployment practices that don't account for how bias manifests in specific communities.

2. Data Privacy and Security Risk

When organizations share sensitive data with AI systems—especially consumer-grade tools—they create privacy and security risks. Data transmitted to third-party servers can be breached, stored longer than necessary, or used for purposes beyond what organizations authorized. Data privacy risks are particularly acute with health information (HIPAA), education records (FERPA), and personally identifiable information about vulnerable populations.

3. Transparency and Accountability Risk

AI systems often function as "black boxes" where even their creators don't fully understand why they reach particular conclusions. When organizations use AI to make decisions affecting stakeholders without explaining how decisions were made, they create transparency and accountability risks. This undermines trust, particularly in contexts where stakeholders have rights to understand and contest decisions.

4. Accuracy and Reliability Risk

AI systems can be inaccurate or unreliable in ways that aren't immediately obvious. Systems trained on limited data might perform poorly for populations underrepresented in training data. Systems built on flawed assumptions might consistently produce incorrect outputs. If organizations rely on inaccurate AI outputs to make decisions, they risk making poor decisions that harm program effectiveness or mission alignment.

5. Intellectual Property and Copyright Risk

Generative AI systems train on copyrighted material and may reproduce substantial portions of it in outputs. Organizations using AI-generated content without understanding these risks may inadvertently use content that violates copyright or creator rights. This creates legal liability and potential reputational damage.

6. Reputational Risk

AI missteps can damage organizational reputation. Stories of organizations using biased AI systems, mishandling data, or deploying inappropriate AI applications spread rapidly through social media and can damage stakeholder relationships, funder trust, and community standing.

7. Compliance and Regulatory Risk

Emerging AI regulations and compliance requirements create risk for organizations that don't stay current. Executive orders, state laws, funder requirements, and federal regulations increasingly impose requirements for AI governance, transparency, bias testing, and disclosure. Non-compliance can result in fines, grant denials, or contract terminations.

8. Operational and Strategic Risk

Over-reliance on AI systems creates operational risk if systems fail. Strategic risk emerges when AI adoption distorts organizational priorities—for example, when efficiency optimization causes organizations to lose sight of effectiveness and equity. Organizational cultures that default to "AI said so" without human judgment create decision-making risks.

The Likelihood-Impact Risk Matrix

A useful framework for assessing and prioritizing risks combines two dimensions: likelihood (how probable is this risk for our organization?) and impact (how severe would the consequences be if this risk materialized?).

Likelihood

  • High: Likely to occur in the near future. Examples: data breaches with cloud-based AI tools if you process sensitive data without security protocols; bias in hiring AI if you don't test for bias before deploying.
  • Medium: Might occur; a reasonable possibility. Examples: funder requirements for AI disclosure if you don't have disclosure procedures; reputational damage if AI problems become public.
  • Low: Unlikely but possible. Examples: litigation over AI copyright infringement; regulatory fines for non-compliance with emerging AI laws.

Impact

  • High: Severe consequences; significant damage to the organization or stakeholders. Examples: a data breach exposing health information of thousands; algorithmic discrimination affecting program eligibility; major reputational damage affecting funding; legal liability resulting in damages.
  • Medium: Moderate consequences; noticeable but not catastrophic damage. Examples: grant denial due to inadequate AI governance; social media criticism about AI use; the need to modify an AI system due to problems discovered.
  • Low: Minor consequences; limited organizational impact. Examples: staff time spent correcting AI errors; the need to revise an AI-generated document; minor efficiency loss if an AI system is temporarily unavailable.
Using the Risk Matrix

Plot your AI applications on a 2x2 or 3x3 matrix using the likelihood and impact dimensions. High-likelihood/high-impact risks demand immediate attention and strong mitigation. Risks that rate medium on either dimension require attention and ongoing monitoring. Low-likelihood/low-impact risks can often be accepted without extensive mitigation.

High-Risk AI Activities

Certain AI applications inherently carry elevated risk and warrant more rigorous governance. Understanding which activities fall into this category helps you prioritize governance efforts:

Highest-Risk Activities (Require Rigorous Governance)

  • Decision-making about individuals: Hiring decisions, program eligibility, resource allocation, disciplinary actions
  • Processing sensitive data: Health information, financial data, education records, immigration status, criminal history
  • Serving vulnerable populations: AI used with children, homeless populations, justice-involved individuals, undocumented populations
  • Public-facing systems: Chatbots, recommendation systems, content moderation—visible to communities and subject to public scrutiny

Medium-Risk Activities (Require Monitoring)

  • Grant writing: AI-generated or AI-assisted proposals that are submitted to funders
  • Marketing and communications: AI-generated content used in public communications
  • Financial analysis: AI used to analyze funding patterns, identify trends, or inform financial decisions
  • Program evaluation: AI analyzing program outcomes or effectiveness

Lower-Risk Activities (Require Basic Governance)

  • Writing assistance: AI used to brainstorm, draft, edit internal documents
  • Data analysis and summarization: AI used to analyze public data or summarize non-sensitive documents
  • Administrative support: AI used for scheduling, note-taking, email organization

Mitigation Strategies

Once you've identified risks, the next step is developing mitigation strategies—approaches to reduce the likelihood or impact of identified risks.

Mitigation Strategies for Common Risks

For Bias and Discrimination Risk

Prevention: Test AI systems for bias before deployment. Use diverse data in training. Document assumptions built into systems. Apply equity frameworks to AI design.

Detection: Monitor AI outputs over time for disparate impact. Regularly audit decisions for patterns of bias. Gather feedback from affected communities.

Remediation: Adjust AI systems when bias is detected. Implement human review requirements to catch bias before decisions are finalized. Document all bias found and corrective actions taken.
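One common screening heuristic for the disparate-impact monitoring described above is the "four-fifths rule" from US employment-selection guidance: flag for review when any group's selection rate falls below 80% of the highest group's rate. A minimal sketch, using illustrative group names and counts rather than real data:

```python
# Disparate-impact screening via selection-rate ratios.
# The 0.8 threshold follows the common "four-fifths rule" heuristic;
# group names and counts here are illustrative, not real data.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / total

def disparate_impact_ratio(rates: dict) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

rates = {
    "group_a": selection_rate(45, 100),  # 0.45
    "group_b": selection_rate(30, 100),  # 0.30
}

ratio = disparate_impact_ratio(rates)
if ratio < 0.8:
    print(f"ratio {ratio:.2f} < 0.80: flag outputs for bias review")
```

Passing this screen does not prove a system is unbiased; it is a trigger for the deeper auditing and community feedback the lesson recommends.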

For Data Privacy and Security Risk

Prevention: Use enterprise-grade AI tools with appropriate security certifications for sensitive data. De-identify or anonymize data before sharing with AI systems. Limit who can access sensitive data through AI tools.

Detection: Monitor for unauthorized data access. Maintain audit logs of data processed by AI systems. Implement data breach detection systems.
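An audit log of AI data use can start very simply. The sketch below assumes an append-only JSON-lines file and hypothetical field names (`data_classification`, `purpose`); real deployments would add access controls and tamper protection.

```python
# Minimal sketch of an AI-use audit log as an append-only JSON-lines
# file. Field names and the classification labels are assumptions.
import datetime
import json

def log_ai_use(path: str, user: str, tool: str,
               data_class: str, purpose: str) -> None:
    """Append one audit entry recording what data went to which tool."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "data_classification": data_class,  # e.g. public / internal / sensitive
        "purpose": purpose,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_use("ai_audit.jsonl", "jdoe", "chatgpt",
           "internal", "summarize board minutes")
```

Even this bare-bones log supports the detection goal above: it lets you reconstruct, after the fact, who sent what class of data to which AI tool.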

Remediation: Develop incident response procedures. Establish notification protocols if data is breached. Cooperate with regulatory authorities if required.

For Transparency and Accountability Risk

Prevention: Disclose AI use to stakeholders affected by AI decisions. Explain how and why you use AI. Establish processes to contest AI-based decisions.

Detection: Solicit feedback from stakeholders about transparency concerns. Monitor for complaints about unexplained AI decisions.

Remediation: Increase transparency when concerns emerge. Implement human review of decisions stakeholders contest. Adjust AI use if lack of transparency undermines trust.

For Accuracy and Reliability Risk

Prevention: Test AI systems for accuracy before deployment. Validate that systems work well for all populations you serve. Build in human review for high-stakes decisions.

Detection: Monitor AI accuracy over time. Track errors and near-misses. Gather feedback about AI performance from users and stakeholders.

Remediation: Retrain systems when accuracy deteriorates. Increase human oversight of decisions when accuracy concerns emerge. Consider retiring systems that can't achieve acceptable accuracy.

Determining Organizational Risk Tolerance

Different organizations have different risk tolerances—different levels of risk they're willing to accept. Risk tolerance depends on organizational values, mission, stakeholder expectations, and resources. Some organizations are risk-averse and avoid AI applications that carry any significant risk. Others are risk-tolerant and embrace AI applications if proper mitigations are in place.

Clarifying your organization's risk tolerance helps you make consistent decisions about AI use and communicate expectations to staff and stakeholders. The example statements below illustrate the spectrum:

Example Risk Tolerance Statements

Conservative Risk Tolerance: We prioritize avoiding harm and protecting stakeholder trust above all else. We avoid AI applications involving high-risk decisions or sensitive data unless we have enterprise-grade security and have conducted thorough bias testing. We disclose all AI use to stakeholders and maintain human review of all AI-informed decisions.

Moderate Risk Tolerance: We embrace AI innovations where benefits clearly outweigh risks and we have appropriate mitigations in place. High-risk applications require board approval and regular oversight. Mitigations must be proportionate to identified risks.

Higher Risk Tolerance: We pursue AI innovations aggressively, provided we maintain governance oversight. We're willing to accept moderate risks if the potential benefits are significant and we can monitor and correct problems quickly.

Ongoing Risk Monitoring

Risk assessment isn't a one-time exercise. Organizations need processes to monitor risks continuously, identify new risks as they emerge, and adjust mitigations as circumstances change.


Next: Disclosure and Funder Transparency

Learn how to communicate with funders about your AI use and meet disclosure requirements.
