U.S. State-Level AI Regulations

55 minutes | Video + Lab

The Fragmented U.S. Regulatory Landscape

Unlike the European Union with its comprehensive AI Act, the United States has taken a patchwork approach to AI regulation, with requirements emerging primarily at the state level rather than federally. For nonprofits operating across multiple states, this fragmented landscape creates significant compliance complexity: a nonprofit program operating in five states must navigate five different regulatory frameworks, each with its own requirements and timelines.

Understanding state-level regulations is essential for U.S.-based nonprofits, particularly those operating in technology-forward states like California, Colorado, and New York that have pioneered AI regulation. Additionally, as more states adopt AI regulations, the compliance landscape continues to evolve. Nonprofits must stay informed about regulatory changes that could affect their operations.

The lack of federal AI regulation has created uncertainty for organizations, but also opportunity. Nonprofits that develop compliance practices aligned with the most stringent state requirements can operate confidently across jurisdictions. Additionally, some nonprofits are finding that state regulation creates space for nonprofit advocacy and participation in regulatory development.

Key Takeaway

The U.S. lacks comprehensive federal AI regulation, instead relying on state-level laws. For multi-state nonprofits, compliance strategy involves understanding applicable state regulations, identifying the most stringent requirements, and implementing those across all operations to ensure consistent compliance.

Major State AI Regulations

Colorado Artificial Intelligence Act

Colorado's Artificial Intelligence Act (SB 24-205), enacted in 2024 and scheduled to take effect in 2026, requires organizations that deploy high-risk AI systems to use reasonable care to protect consumers from algorithmic discrimination and to provide transparency disclosures to affected individuals. The law applies to "high-risk" AI systems: those that make, or are a substantial factor in making, consequential decisions with material legal or similarly significant effects on consumers.

For nonprofits, the law affects systems used in employment decisions, benefit eligibility determination, and resource allocation. The key requirement is transparency: organizations must disclose that an individual is subject to a high-risk AI decision, describe the purpose and nature of the decision, and explain how individuals can request human review of adverse decisions.

California Privacy and AI Transparency Laws (SB 942 & SB 1001)

California's approach combines privacy law with AI-specific provisions. SB 942, the California AI Transparency Act enacted in 2024, requires large providers of generative AI systems to offer free AI detection tools and to include disclosures in AI-generated content. SB 1001, the Bolstering Online Transparency (B.O.T.) Act, requires disclosure when automated bots are used to communicate with people in specified commercial and political contexts. In addition, the California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA), gives consumers rights over their personal information, and the California Privacy Protection Agency is developing regulations governing businesses' use of automated decision-making technology.

Taken together, the requirements nonprofits should anticipate include disclosing that automated decision systems are being used, describing how they work, providing an opportunity for human review, and correcting inaccurate information. For nonprofits operating in California or serving California residents, these requirements can apply to any AI systems affecting individuals.

Illinois Biometric Information Privacy Act (BIPA)

Illinois BIPA, enacted in 2008, predates most AI regulation but significantly affects nonprofits using facial recognition or biometric AI. The law requires nonprofits to obtain informed written consent before collecting biometric information and to provide transparent policies about how biometric data will be used.

For nonprofits, BIPA compliance is critical if any systems analyze facial images, fingerprints, voiceprints, or other biometric identifiers. The law provides a private right of action with statutory damages: organizations that violate BIPA can face $1,000 per negligent violation and up to $5,000 per intentional or reckless violation. Many nonprofits do not realize they have BIPA obligations when using commercial AI tools that process images.
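
To make the stakes concrete, consider a purely hypothetical illustration: if a screening tool processed facial images of 300 program participants without the required written consent, statutory exposure could range from 300 × $1,000 = $300,000 for negligent violations to 300 × $5,000 = $1,500,000 for intentional or reckless violations, before litigation costs.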

New York Algorithmic Accountability Act

New York's Algorithmic Accountability Act, taking effect in 2025, requires public entities, contractors, and some private entities using algorithms in "consequential decisions" to conduct impact assessments and provide transparency documentation.

The law defines consequential decisions as determinations about eligibility for government benefits, housing, credit, education, employment, or other significant services. For nonprofits providing publicly funded services or working under contract with government agencies, the law applies. Requirements include documenting the algorithm's capabilities, limitations, and potential impacts on protected populations.

Connecticut AI Transparency and Accountability Law

Connecticut requires algorithmic transparency in employment decisions. The law applies to determinations related to employment, meaning nonprofits that use AI in hiring, promotion, or termination decisions must provide transparency. The law also reaches vendors, requiring AI system vendors to document their systems and certify compliance.

For nonprofits headquartered or operating in Connecticut, the law requires documenting employment algorithms, testing for disparate impact, and providing transparency to employees and job candidates about algorithmic use.

Emerging State Regulations

Beyond the major regulatory efforts above, numerous states are developing AI regulations. Montana, Utah, and other states have enacted or are considering transparency requirements. Some states are exploring regulations specific to AI in education, criminal justice, or healthcare. The regulatory landscape is rapidly evolving, with new regulations anticipated in many states through 2025-2026.

For nonprofits, tracking emerging regulations is essential. Many states' regulatory efforts follow templates or draw inspiration from leading states like California and Colorado, suggesting that understanding major state regulations provides insight into likely future requirements.

Common Themes Across State Regulations

Despite fragmentation, certain themes appear repeatedly across state regulations: transparency disclosures to individuals affected by automated decisions; mechanisms for requesting human review of consequential decisions; impact assessments and documentation for high-risk systems; testing for disparate impact on protected groups; and informed consent and clear policies for sensitive data such as biometrics.

Apply This

Identify all states where your nonprofit operates (where you serve beneficiaries, have employees, or conduct programs). For each state, research current AI regulations. Create a compliance matrix documenting: (1) applicable regulations; (2) AI systems affected; (3) specific compliance requirements; (4) implementation status. This compliance mapping will reveal overlapping requirements and help prioritize implementation. For multi-state nonprofits, identify the most stringent requirements across all states and consider implementing those globally for consistency and efficiency.
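
The compliance matrix can live in a simple spreadsheet, but a minimal sketch in Python of the same structure (the state names, regulation labels, and requirement strings below are illustrative placeholders, not a verified inventory) shows how the four columns map to fields and how overlapping requirements can be surfaced automatically:

from dataclasses import dataclass

@dataclass
class ComplianceEntry:
    """One row of the compliance matrix for a single state."""
    state: str
    regulation: str                # (1) applicable regulation
    systems_affected: list[str]    # (2) AI systems affected
    requirements: set[str]         # (3) specific compliance requirements
    status: str = "not started"    # (4) implementation status

def overlapping_requirements(matrix: list[ComplianceEntry]) -> set[str]:
    """Return requirements that appear in more than one state's entry,
    which are good candidates to implement once, organization-wide."""
    seen: set[str] = set()
    overlap: set[str] = set()
    for entry in matrix:
        overlap |= entry.requirements & seen
        seen |= entry.requirements
    return overlap

# Hypothetical, incomplete entries for illustration only; verify the actual
# regulations and requirements for each state with legal counsel.
matrix = [
    ComplianceEntry("Colorado", "Colorado AI Act",
                    ["benefit eligibility screening"],
                    {"disclose automated decision", "offer human review"},
                    "in progress"),
    ComplianceEntry("California", "CCPA/CPRA and related rules",
                    ["benefit eligibility screening", "intake chatbot"],
                    {"disclose automated decision", "offer human review",
                     "correct inaccurate information"}),
]

print(overlapping_requirements(matrix))
# Prints the requirements shared by the Colorado and California entries.

Storing requirements as short, normalized labels is what makes the overlap comparison possible; the same structure supports the most-stringent-standard approach discussed in the next section.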

Compliance Strategies for Multi-State Nonprofits

Multi-state nonprofits face the challenge of navigating multiple regulatory frameworks simultaneously. Several strategies help manage this complexity:

Most-Stringent-Standard Approach

Many organizations adopt the most-stringent requirement across all states as their global standard. For example, if California requires transparency about automated decision systems, a nonprofit operating in California and ten other states might implement California's transparency standard across all operations. This approach simplifies compliance, reduces confusion for beneficiaries and staff, and provides consistent protection.
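
As a rough sketch of that idea, assuming each state's obligations can be reduced to short requirement labels (the labels below are hypothetical simplifications), the organization-wide baseline is simply the union of every state's requirements:

def global_standard(state_requirements: dict[str, set[str]]) -> set[str]:
    """Most-stringent-standard approach: the organization-wide baseline is
    the union of every state's requirements."""
    baseline: set[str] = set()
    for requirements in state_requirements.values():
        baseline |= requirements
    return baseline

# Hypothetical, simplified labels; actual obligations need legal review.
baseline = global_standard({
    "Colorado":   {"disclose automated decision", "offer human review"},
    "California": {"disclose automated decision", "offer human review",
                   "correct inaccurate information"},
    "Illinois":   {"written consent for biometric data"},
})
print(sorted(baseline))

Treating obligations as additive labels is a simplification; legal counsel still needs to judge whether one state's wording actually satisfies another's.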

Jurisdictional Analysis

Other organizations conduct detailed jurisdictional analysis, understanding exactly where they have compliance obligations and implementing requirements specific to each jurisdiction. This approach is more complex but may be more cost-effective if requirements differ significantly. For example, a nonprofit might implement Connecticut's employment algorithm requirements only for Connecticut-based hiring while implementing different transparency standards in other states.

Regulatory Tracking

Nonprofits should establish procedures for tracking regulatory developments. Subscribing to regulatory alert services, maintaining relationships with legal counsel familiar with AI regulation, and participating in nonprofit and funder networks that share regulatory intelligence all help organizations stay informed. Many nonprofits assign one staff member responsibility for regulatory monitoring, with quarterly reports to the governance committee.
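
As one possible way to keep that monitoring organized, here is a minimal sketch of a tracking log, assuming a single staff owner who files updates as they arrive and pulls a quarterly summary for the governance committee (the field names, example entry, and reporting cadence are illustrative assumptions):

from dataclasses import dataclass
from datetime import date

@dataclass
class RegulatoryUpdate:
    """One entry in an internal regulatory-monitoring log."""
    state: str
    regulation: str
    change: str          # what changed: new bill, amendment, effective date
    source: str          # alert service, counsel memo, network briefing
    logged_on: date
    action_needed: str   # e.g., "update transparency notices before the effective date"

def quarterly_report(log: list[RegulatoryUpdate],
                     year: int, quarter: int) -> list[RegulatoryUpdate]:
    """Entries logged during the given quarter, for the governance committee."""
    start_month = 3 * (quarter - 1) + 1
    return [u for u in log
            if u.logged_on.year == year
            and start_month <= u.logged_on.month < start_month + 3]

# Hypothetical example entry and second-quarter summary.
log = [
    RegulatoryUpdate("Colorado", "Colorado AI Act",
                     "effective date approaching", "counsel memo",
                     date(2025, 4, 14),
                     "update beneficiary-facing disclosure notices"),
]
print(quarterly_report(log, 2025, 2))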

Implementation Challenges for Nonprofits

Nonprofits face particular challenges in implementing state AI regulations. Resource constraints make comprehensive compliance programs difficult for smaller organizations. However, the alternative—non-compliance—carries significant risk, including potential legal liability, loss of nonprofit status, and damage to organizational reputation.

Practical approaches that nonprofits find effective include: (1) prioritizing high-impact, high-risk systems for intensive compliance; (2) using nonprofit legal networks and consultant relationships to understand requirements; (3) building compliance requirements into AI vendor contracts, requiring vendors to warrant compliance; (4) involving beneficiary and community advisory groups in compliance processes; and (5) phasing implementation over time, prioritizing the most pressing requirements.

Warning

Small nonprofits sometimes assume that state AI regulations apply primarily to large tech companies and aren't relevant to them. In reality, many state regulations apply to nonprofits directly, particularly those using AI in employment, eligibility determination, or decisions affecting vulnerable populations. Ignoring state AI regulations exposes nonprofits to legal liability and regulatory penalties.

Federal Guidance and Context

While the U.S. lacks comprehensive federal AI regulation, federal guidance from agencies like the Office of Management and Budget (OMB) and the Equal Employment Opportunity Commission (EEOC) provides important context. OMB Memorandum M-24-10 requires federal agencies to manage risks from AI, with expectations for federal contractors and grant recipients. The EEOC has issued guidance stating that organizations face potential discrimination liability if AI systems have disparate impacts on protected groups.

For nonprofits receiving federal grants or contracts, these federal guidance documents effectively impose requirements beyond formal regulation. Federal funders increasingly require demonstration of responsible AI governance, making federal guidance practically important even for nonprofits not directly subject to federal regulation.

Conclusion

The U.S. state-level patchwork of AI regulation creates compliance complexity but also opportunity for nonprofits to develop robust AI governance. By understanding applicable state regulations, implementing the most stringent requirements across operations, and establishing procedures for ongoing regulatory monitoring, nonprofits can achieve compliance while advancing their missions responsibly. The regulatory landscape will continue evolving, requiring nonprofit leaders to maintain awareness and adaptability.

Key Learning Objectives

By the end of this section, you should be able to describe the fragmented U.S. regulatory landscape for AI; identify major state regulations affecting nonprofits, including laws in Colorado, California, Illinois, New York, and Connecticut; apply compliance strategies for multi-state operations, including the most-stringent-standard approach; and explain how federal guidance shapes expectations for grant recipients and contractors.

Ready to Master AI Governance?

Join hundreds of nonprofit leaders completing the CAGP Level 4 certification in AI governance and strategy.

Enroll Now