Algorithmic Grantmaking: Opportunities and Risks

60 minutes | Video + Research Lab

Introduction: The Rise of Algorithmic Decision-Making

Algorithmic grantmaking—the use of automated systems to make or substantially inform funding decisions—represents one of the most consequential applications of AI in philanthropy. Unlike proposal screening (which filters high volumes of applications) or portfolio analysis (which examines aggregate patterns), algorithmic grantmaking directly determines which organizations receive funding and which do not.

This lesson explores both the profound opportunities and significant risks inherent in algorithmic grantmaking. We'll examine what automation promises, what evidence suggests about algorithmic systems' performance, real-world case studies of both successes and failures, and frameworks for thinking about when algorithmic approaches are appropriate and when they are not.

Key Takeaway

Algorithmic grantmaking can increase efficiency and scale, but it introduces new risks around bias, equity, and mission authenticity. The central question isn't "Should we automate?" but rather "For what specific decisions, with what safeguards, and with what human discretion?"

Defining Algorithmic Grantmaking

Algorithmic grantmaking exists on a spectrum. On one end: a simple rule-based system might automatically reject proposals from organizations without 501(c)(3) status. On the other end: a sophisticated machine learning model might predict funding outcomes and recommend grant amounts. In between: systems that score proposals, rank them, or flag them for particular types of human attention.
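
To make the low end of that spectrum concrete, here is a minimal sketch of a rule-based screen in Python. The field names and rules are hypothetical illustrations, not any foundation's actual eligibility criteria:

```python
from dataclasses import dataclass

# A minimal rule-based screen. Fields and rules below are hypothetical.
@dataclass
class Proposal:
    org_name: str
    has_501c3: bool       # hypothetical: tax-exempt status on file
    annual_budget: float  # hypothetical: reported budget in USD

def screen(p: Proposal) -> tuple[bool, str]:
    """Apply published eligibility rules; return (eligible, reason)."""
    if not p.has_501c3:
        return False, "No 501(c)(3) status on file"
    if p.annual_budget <= 0:
        return False, "Annual budget missing or invalid"
    return True, "Eligible for human review"

for p in [Proposal("River Valley Arts", True, 250_000),
          Proposal("New Mutual Aid Fund", False, 40_000)]:
    eligible, reason = screen(p)
    print(f"{p.org_name}: {'pass' if eligible else 'reject'} ({reason})")
```

Because every rule is explicit, a rejected applicant can be told exactly which rule they failed, which matters for the transparency arguments later in this lesson.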

The key distinction is whether the algorithm makes a final decision (or implements a decision through automation) or informs human decision-makers. Most current foundation implementations fall into the second category—AI informs, humans decide. But the field is moving toward greater automation for certain categories of decisions.

For this lesson, we define algorithmic grantmaking as any system in which computational processes materially influence funding allocation decisions, whether by making the decision directly, recommending decisions to humans, or automatically implementing human-decided policies.

Automation Opportunities: Why Foundations Pursue Algorithmic Approaches

Efficiency and Scale

The primary motivation for algorithmic grantmaking is efficiency. A foundation receiving 2,000 proposals cannot have program officers read each one closely. Algorithmic screening reduces the human review burden, allowing foundations to process more applications with existing staff. This enables foundations to expand outreach, accept proposals from new geographic areas, or fund more organizations overall.

Scale efficiency matters particularly for foundations that want to serve under-resourced communities. A foundation might want to fund organizations in all 50 states but lack staff in every region. Algorithmic processes can level the playing field by ensuring every proposal receives a consistent, systematic initial evaluation regardless of geography.

Speed

Algorithms operate at machine speed. Where human review of a grant application requires minutes or hours, algorithmic assessment requires seconds. This enables faster feedback to applicants and faster decision-making overall. For time-sensitive funding (disaster relief, rapid health response), speed can be lifesaving.

Consistency

Human reviewers have good days and bad days. They're influenced by the order in which they read proposals, their mood, their hunger level. Algorithms, properly designed, apply consistent criteria every time. If your program seeks organizations serving underrepresented populations, an algorithm can apply that criterion consistently across all 2,000 proposals, whereas human reviewers might unconsciously relax the criterion as they become fatigued.

Transparency and Explainability

Done well, algorithmic systems can be more transparent than human judgment. A foundation can publish: "Our algorithm considers these factors, weights them in this way, and here's the code." Human judgment, by contrast, is often opaque. A program officer might unconsciously disfavor organizations led by people of color, or favor organizations geographically close to where they grew up. These biases are invisible and difficult to audit.

Apply This

If you're leading a nonprofit considering an algorithmic funding source, request full transparency: "What specific criteria does your algorithm consider? How are they weighted? What historical data trained the algorithm? How do you audit for bias?" Most responsible foundations will welcome these questions and provide detailed answers.

The Risk Landscape: Why Algorithmic Grantmaking Is Fraught

Bias Amplification and Perpetuation

Algorithmic systems are trained on historical data. If those historical data reflect past discrimination, the algorithm will learn and amplify that discrimination. This is not hypothetical—it's been documented extensively. For example, if a foundation has historically funded more organizations with white executive directors, an algorithm trained on that data will learn to prefer organizations with white leadership, even if the historical preference wasn't intentional.

This bias operates particularly subtly through proxy variables. An algorithm shouldn't directly consider race or ethnicity (and likely shouldn't be allowed to, legally). But it might consider variables like "organization age," "annual budget," or "geographic location"—variables that correlate with founder demographics and thus function as proxies for protected characteristics.
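
One way to surface proxy variables is to test how strongly each candidate feature is associated with a protected characteristic in the historical data. The sketch below is a crude first pass, assuming pandas and entirely hypothetical column names; real audits use richer statistical tests:

```python
import pandas as pd

def flag_proxies(df: pd.DataFrame, protected: str, reference: str,
                 candidates: list[str], threshold: float = 0.3) -> list[str]:
    """Flag features whose correlation with a protected-group indicator
    exceeds `threshold` -- a crude first pass, not a full fairness audit."""
    indicator = (df[protected] == reference).astype(float)
    return [col for col in candidates
            if abs(indicator.corr(df[col])) >= threshold]

# Hypothetical historical-grants table; column names are illustrative.
history = pd.DataFrame({
    "founder_race":  ["white", "white", "black", "latino", "white", "black"],
    "org_age_years": [25, 30, 4, 6, 28, 3],
    "annual_budget": [2e6, 3e6, 1.5e5, 2e5, 2.5e6, 1e5],
})
# In this toy data, org_age_years and annual_budget both track
# founder_race, so both are flagged as potential proxies.
print(flag_proxies(history, "founder_race", "white",
                   ["org_age_years", "annual_budget"]))
```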

Equity Impacts and Systemic Effects

Individual algorithms might introduce modest bias. But when systemic patterns emerge—algorithms across multiple foundations favoring similar characteristics—those algorithmic decisions can reshape entire sectors of philanthropy. Organizations serving underrepresented populations might find it progressively harder to secure funding if multiple foundations' algorithms systematically disfavor them.

This is particularly concerning because algorithms operate at scale and with authority. When a human program officer has a bias, an applicant might appeal or request reconsideration. When an algorithm makes the same biased decision across 2,000 applications, no individual appeal mechanism exists. The decision feels inevitable and systematic rather than discretionary.

Mission Drift and Value Erosion

Philanthropy fundamentally involves human judgment about values. A program officer asks: "Does this organization embody the values we care about? Do we want to build a relationship with this leader?" These are human judgments that can't be reduced to data. When algorithms become the primary decision-makers, something essential about philanthropy's mission-driven nature can be lost.

Consider a foundation committed to equity and justice. That commitment is necessarily value-driven—it requires human judgment about what justice means in a particular context. An algorithm trained on historical data reflects historical assumptions about justice, not current values. There's genuine risk that algorithmic decision-making causes mission drift toward patterns that are computationally efficient but values-misaligned.

Accountability Gaps

When a foundation makes a grant through human judgment, someone is accountable. The program officer can explain the decision, learn from outcomes, and apply that learning next time. When an algorithm makes the decision, accountability diffuses. Did the algorithm fail? Did the data fail? Did the developers make the wrong choices? Did the foundation fail to oversee development and implementation?

These accountability gaps are particularly problematic when algorithms make poor decisions. A nonprofit denied funding based on algorithmic assessment might reasonably ask: "Why?" The response—"The algorithm determined you didn't meet our criteria"—feels unsatisfying because no specific human can explain the reasoning or consider special circumstances.

Automation Bias

Humans have a documented tendency to over-trust algorithmic systems. When an algorithm recommends something, people are more likely to follow the recommendation, even if it's obviously wrong. This "automation bias" means that once an algorithmic system is deployed, even if program officers retain theoretical override authority, they're likely to be heavily influenced by algorithmic recommendations and less likely to challenge them than they would human recommendations.

Warning

Automation bias is particularly dangerous in philanthropic settings. A program officer who sees an algorithmic recommendation might think, "Well, if the algorithm says so, there must be something I'm missing." This deference to algorithms can undermine the human judgment that philanthropy fundamentally requires. Guard against automation bias through active training and explicit organizational culture shifts.

Algorithmic Transparency and Explainability: The Path Forward

The most responsible approach to algorithmic grantmaking emphasizes transparency and explainability. Foundations committed to algorithmic approaches should ask:

What does the algorithm consider? Publish the specific factors the algorithm weights in making decisions. Be specific. "Organization strength" is too vague. "Leadership experience measured by years in role" is appropriately specific.

Why does it weight those factors? Explain the reasoning. Is it because historical data shows organizations with experienced leaders achieve better outcomes? If so, state that explicitly: "We weight leadership experience because our outcome data shows organizations with experienced leaders are more likely to achieve stated objectives." This transparency enables people to debate whether the reasoning is sound.

What data trained the algorithm? Describe the dataset. How many past grants? What time period? What organizations? Describing the training data enables people to consider whether the data reflect appropriate patterns or historical biases.

What safeguards prevent bias? Describe specific bias mitigation approaches. Did developers actively test for bias? How? What did they find? What adjustments did they make? Publishing this information demonstrates commitment to equity while enabling external scrutiny.
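
To illustrate the first two questions, here is one sketch of what publishable criteria could look like: a declarative weight table with the rationale attached, so the scoring code itself can be released alongside the explanation. Every factor name, weight, and rationale below is invented for illustration:

```python
# Published scoring model. Every factor, weight, and rationale is
# visible so applicants and auditors can inspect the exact criteria.
# All values here are hypothetical illustrations.
SCORING_FACTORS = {
    "leadership_years": 0.30,  # rationale: outcome data links experience to goal attainment
    "budget_realism":   0.25,  # rationale: realistic budgets correlate with delivery
    "community_ties":   0.25,  # rationale: depth of local partnerships
    "measurement_plan": 0.20,  # rationale: clarity of stated outcomes and metrics
}

def score_proposal(features: dict[str, float]) -> float:
    """Weighted sum of factor scores, each normalized to the 0-1 range."""
    assert abs(sum(SCORING_FACTORS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * features[name] for name, w in SCORING_FACTORS.items())

# A proposal strong on community ties but with a thin measurement plan:
print(score_proposal({"leadership_years": 0.8, "budget_realism": 0.6,
                      "community_ties": 0.9, "measurement_plan": 0.4}))  # ≈ 0.695
```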

Audit Trails and Continuous Monitoring

Responsible algorithmic grantmaking requires audit trails documenting algorithmic decision-making over time. By analyzing patterns of who is funded and who isn't, foundations can identify whether algorithmic decisions are producing equitable outcomes across different organization types, geographies, and populations served.

The best approaches include quarterly or annual reports showing: distribution of funding by organization characteristics, algorithmic scores vs. actual outcomes (did organizations the algorithm predicted would succeed actually succeed?), and bias metrics showing whether different groups of applicants receive systematically different assessments.
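
As a sketch of the bias-metric portion of such a report, assuming a decision log with one row per application and hypothetical column names:

```python
import pandas as pd

def bias_report(log: pd.DataFrame, group_col: str,
                score_col: str, funded_col: str) -> pd.DataFrame:
    """Per-group mean algorithmic score and funding rate, plus each
    group's funding rate relative to the best-funded group. Ratios
    well below ~0.8 are a common trigger for deeper investigation."""
    report = log.groupby(group_col).agg(
        n=(funded_col, "size"),
        mean_score=(score_col, "mean"),
        funding_rate=(funded_col, "mean"),
    )
    report["impact_ratio"] = report["funding_rate"] / report["funding_rate"].max()
    return report

# Hypothetical quarterly decision log.
log = pd.DataFrame({
    "org_type":   ["rural", "rural", "rural", "urban", "urban", "urban"],
    "algo_score": [0.62, 0.55, 0.58, 0.81, 0.77, 0.70],
    "funded":     [0, 1, 0, 1, 1, 1],
})
print(bias_report(log, "org_type", "algo_score", "funded"))
```

An impact ratio well below 1.0, as the rural group shows in this toy data, is exactly the kind of signal that should trigger the deeper investigation described above.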

Continuous monitoring isn't one-time verification—it's ongoing commitment. As circumstances change, new data arrives, and organizational understanding evolves, algorithms should be updated and re-audited. This requires organizational commitment to continuous learning and adaptation.

Case Study: When Algorithmic Grantmaking Worked Well

A regional foundation implemented an algorithmic screening system with explicit commitment to transparency and equity. They published: the exact factors the algorithm considered, the weights assigned, the training dataset, and their bias mitigation approaches. Every quarter, they published metrics showing that algorithmic decisions didn't systematically disadvantage organizations led by people of color or organizations in rural areas.

Importantly, they maintained strong human discretion. Program officers could override algorithmic recommendations for any reason. They tracked when overrides occurred and why. Over time, they learned that the algorithm was underweighting certain types of innovation that program officers recognized as promising. They adjusted the algorithm accordingly.

After two years, this foundation had increased application volume by 40% while maintaining a diverse grantee portfolio. The algorithmic system actually enabled more equitable outcomes because it prevented individual program officers' biases from determining which proposals received attention.

Case Study: When Algorithmic Grantmaking Created Problems

A foundation implemented an algorithmic system trained on historical data without conducting bias audits. Within one year, they noticed that their funding had become increasingly concentrated in large, established organizations while funding for newer, smaller organizations—disproportionately led by people of color—declined. When they audited the algorithm, they discovered that the training data reflected the foundation's historical bias toward well-established organizations. The algorithm had simply learned and amplified this bias.

Worse, the algorithmic system had reduced human attention to novel applications. Program officers, influenced by automation bias, largely accepted algorithmic recommendations rather than deeply reviewing proposals the algorithm flagged as "weak fits." By the time the bias was discovered, meaningful damage had been done to the foundation's relationships with organizations in marginalized communities.

The foundation had to sunset the algorithmic system, rebuild relationships with the nonprofit community, and absorb hard organizational lessons about the risks of algorithmic decision-making.

Comparing Algorithmic vs. Human-Centered Approaches

The choice between algorithmic and human-centered grantmaking isn't binary. The most sophisticated approaches use hybrid models: algorithms handle high-volume screening, humans make final decisions with algorithmic input. This preserves efficiency gains while protecting human judgment in actual funding decisions.
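
A minimal sketch of that division of labor, assuming a scoring function like the earlier one: the algorithm only ranks and shortlists, every funding decision stays with a human, and both are logged so overrides can be audited later. The names and the 0.5 threshold are hypothetical:

```python
from typing import Callable

def hybrid_pipeline(proposals: list[dict],
                    score: Callable[[dict], float],
                    shortlist_size: int,
                    human_decision: Callable[[dict], bool]) -> list[dict]:
    """Algorithm ranks for screening; a human makes every funding call.
    Each record keeps the score so overrides can be audited later."""
    ranked = sorted(proposals, key=score, reverse=True)
    audit_log = []
    for p in ranked[:shortlist_size]:
        funded = human_decision(p)  # the final decision stays human
        audit_log.append({
            "org": p["org"],
            "algo_score": round(score(p), 2),
            "funded": funded,
            # hypothetical 0.5 cut: did the human diverge from the algorithm?
            "override": funded != (score(p) >= 0.5),
        })
    return audit_log

# Toy usage with stand-in scoring and decision functions.
demo = [{"org": "A", "strength": 0.9},
        {"org": "B", "strength": 0.4},
        {"org": "C", "strength": 0.7}]
print(hybrid_pipeline(demo, score=lambda p: p["strength"],
                      shortlist_size=2,
                      human_decision=lambda p: p["strength"] > 0.6))
```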

Human-centered approaches excel at considering context, relationships, learning, and values. They're flexible and adaptable, and a human reviewer can explain and reconsider a decision in dialogue with an applicant in ways an algorithm cannot. But they're slower, more resource-intensive, and susceptible to individual bias and fatigue.

Algorithmic approaches excel at speed, scale, and (potentially) consistency. They can process volumes of information impossible for humans. But they struggle with context, values, and equity. They're vulnerable to historical bias and difficult to appeal.

The question is not which is superior overall, but rather which is appropriate for particular decisions in particular contexts. High-stakes funding decisions—particularly those that might reshape organizations or communities—warrant human-centered approaches. High-volume screening—where the goal is identifying promising candidates for human review—might benefit from algorithmic approaches with strong safeguards.

Research Evidence on Algorithmic Bias in Grantmaking

Limited but growing research examines algorithmic bias in grantmaking specifically. Studies from related fields (hiring algorithms, lending algorithms) show that bias is endemic unless actively mitigated. A foundation implementing algorithmic grantmaking without bias mitigation should expect bias to emerge. Even with mitigation, bias is likely to persist in subtle forms.

The most robust approach involves recognizing that bias mitigation is ongoing work, not a problem to be solved once. Algorithms should be audited regularly, updated as new understanding emerges, and reformed if they produce inequitable outcomes. This requires organizational commitment and resources that not all foundations can provide.

Key Takeaway

Algorithmic grantmaking can be a powerful tool, but only when implemented with explicit commitment to equity, transparency, human discretion, and continuous learning. Without these elements, algorithmic approaches risk amplifying historical biases and undermining philanthropy's values-driven mission.

Conclusion: Responsible Algorithmic Grantmaking

The future of grantmaking will involve algorithms. But the quality of that future depends on whether foundations approach algorithmic systems responsibly. This means: transparency about what algorithms do and why, explicit bias mitigation and ongoing monitoring, preservation of human judgment in actual funding decisions, and genuine accountability when algorithms produce inequitable outcomes.

Grant professionals should approach algorithmic grantmaking with both openness and skepticism. Openness to the genuine efficiency and transparency gains algorithms can provide. Skepticism about claims that algorithms are objective or can replace human judgment. The best approach: ask questions, demand transparency, and hold foundations accountable for equitable outcomes.

Continue Your Learning

Ready to master AI in philanthropy? Enroll in the complete CAGP Level 5 course and earn your certification in advanced grant leadership.
