Identifying Research Gaps in AI + Grants

⏱️ 50 minutes | Video + Seminar

Introduction: What We Don't Yet Know

The intersection of artificial intelligence and philanthropy is a rapidly evolving landscape. Yet significant gaps remain in our understanding of how AI shapes grantmaking, nonprofit operations, and funding outcomes. Identifying these gaps is essential for anyone seeking to conduct meaningful research that advances the field. Research gaps aren't problems to avoid—they're opportunities to contribute knowledge that genuinely matters.

This lesson teaches you how to systematically identify what we don't yet know about AI and grants. Understanding research gaps helps you formulate research questions that fill real holes in the literature, address pressing practitioner concerns, and contribute to the evidence base guiding AI adoption in the nonprofit sector.

Mapping the Current Research Landscape

Before you can identify gaps, you must understand what research currently exists. Bibliometric analysis—the systematic study of publication patterns—reveals what topics dominate research, what's understudied, and how different fields approach AI ethics questions.

What the Literature Shows

Existing research on AI in nonprofits tends to cluster in several areas. Much scholarship focuses on algorithmic bias and fairness in criminal justice, employment, and lending—fields where algorithmic decision-making affects individuals' opportunities and liberty. However, the grants and philanthropy sector receives comparatively less attention. This itself is a gap: why should criminal justice AI receive so much more scholarly scrutiny than grant-allocation AI, when both affect life opportunities?

Published research emphasizes technical aspects of AI systems—how algorithms work, their accuracy, their mathematical properties. Less research addresses organizational and social dimensions: How do staff actually use AI systems? How do systems reshape organizational culture? What informal decisions persist alongside algorithms?

Scholarship typically examines AI systems implemented in well-resourced organizations with significant technical capacity. Far less research studies AI adoption in under-resourced nonprofits or addresses how smaller organizations can navigate AI decisions responsibly.

Types of Research Gaps

Research gaps exist at multiple levels, each representing opportunities for meaningful research.

Disciplinary Gaps

AI in grants research spans computer science, public policy, nonprofit management, philosophy, and social science. Yet these disciplines rarely speak to each other. A computer scientist studying algorithmic fairness may not engage with nonprofit management scholarship on how funding decisions actually work. A social ethicist examining values in funding may not understand technical constraints. Bridging these gaps—conducting work that genuinely integrates multiple disciplinary perspectives—represents an important research opportunity.

Sectoral Gaps

Most AI research studies either the nonprofit sector broadly or the grants sector specifically, but rarely both together. Research on AI in nonprofit operations exists, but it rarely centers on grantmaking. Research on grantmaking decision-making exists, but most of it predates significant AI adoption. Bringing these strands together addresses an important gap.

Geographic and Cultural Gaps

The majority of AI ethics research is conducted in the United States and Western Europe by researchers in well-funded institutions. Research about AI in nonprofit funding in Africa, South Asia, Southeast Asia, and Latin America is minimal. Yet nonprofits and grantmakers in these regions may face quite different AI opportunities and challenges. Does algorithmic bias manifest similarly across cultural contexts? How do local philanthropic traditions interact with AI systems? These remain open questions.

Population-Specific Gaps

Certain populations appear frequently in AI research: well-resourced organizations, organizations serving majority populations, organizations with robust data infrastructure. Underrepresented in research: small grassroots organizations, organizations led by people of color, organizations serving undocumented populations or incarcerated people, organizations in rural areas. These gaps mean we know less about how AI affects the organizations and communities most likely to experience algorithmic bias.

Temporal Gaps

Most AI research is cross-sectional—measuring one moment in time. We know relatively little about how AI systems and their impacts evolve over time. Longitudinal research tracking how nonprofits experience AI over years would illuminate whether initial benefits persist, how systems degrade, how organizations adapt. This research is expensive and difficult, which is precisely why it's a gap.

Identifying Gaps Through Systematic Analysis

Rather than relying on intuition about what's missing, systematic approaches help identify genuine gaps. Start by reading widely: survey recent literature on AI ethics, AI in nonprofits, algorithmic fairness, and philanthropic decision-making. As you read, ask: What questions do these papers answer? What questions do they not address? What populations do they study? Which are overlooked?

Create a spreadsheet documenting: paper title, author, publication year, discipline, methodologies used, geographic focus, populations studied, key findings, and important limitations. After reviewing 20-30 papers, patterns emerge. You might notice: "All studies of grant matching algorithms come from wealthy foundations. What about smaller community foundations?" or "Papers discuss algorithmic bias in hiring but not in grant reviewing. Why the difference?"
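The spreadsheet approach can also be done programmatically once your review grows. As a minimal sketch (the paper records below are invented placeholders, not real studies), tallying each coding dimension makes sparse categories and uncovered regions jump out:

```python
from collections import Counter

# Hypothetical literature-review records; the field names mirror the
# spreadsheet columns suggested above. Titles and values are invented.
papers = [
    {"title": "Auditing Grant-Matching Algorithms", "year": 2022,
     "discipline": "computer science", "geography": "United States",
     "population": "large foundations"},
    {"title": "AI Adoption in Community Philanthropy", "year": 2023,
     "discipline": "nonprofit management", "geography": "United States",
     "population": "large foundations"},
    {"title": "Fairness in Automated Grant Review", "year": 2023,
     "discipline": "computer science", "geography": "Western Europe",
     "population": "national grantmakers"},
]

# Tally each coding dimension; thin or missing categories suggest gaps.
for field in ("discipline", "geography", "population"):
    counts = Counter(p[field] for p in papers)
    print(field, dict(counts))

# A simple coverage check: which regions of interest have zero studies?
regions_of_interest = {"United States", "Western Europe",
                       "Sub-Saharan Africa", "South Asia", "Latin America"}
covered = {p["geography"] for p in papers}
print("Uncovered regions:", sorted(regions_of_interest - covered))
```

With 20-30 real entries, the same tallies surface patterns like "every study of grant matching comes from wealthy foundations" without relying on impressionistic reading.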

Identifying Understudied Topics

Understudied topics differ from unimportant ones. A topic may be unstudied because it's not yet widely recognized as important, because it affects marginalized groups whose concerns don't attract research funding, because it's difficult to study, or because it became important only recently.

In AI and grants, several topics remain notably understudied: How do grant applicants experience AI-assisted review? What are the long-term organizational effects of algorithmic decision support? How do small nonprofits evaluate AI vendors? How do nonprofits in the Global South experience and evaluate AI? What are the environmental impacts of AI systems used in grantmaking? Each of these represents a genuine research gap.

Identifying Understudied Populations

Beyond topics, populations are understudied. Who participates in AI and grants research? Typically: executives at large foundations and nonprofits, researchers at well-funded universities, consultants at established firms. Largely absent: grant applicants themselves, especially from organizations led by marginalized populations; nonprofit staff in small organizations; grantmakers in grassroots funding networks; and applicants whose requests were declined, who might offer a critical perspective on algorithmic decisions.

Equity and Justice Gaps

Some research gaps matter more than others. Gaps affecting marginalized populations matter most. For instance: We know more about algorithmic bias in tech hiring than in nonprofit funding, yet nonprofits serving marginalized communities often depend on grants and are more likely to face algorithmic bias. We understand how to audit algorithms for racial bias in criminal justice but rarely conduct such audits in grantmaking. These gaps perpetuate inequality.

Justice-oriented research gaps focus on: How do AI systems affect funding equity? Do certain types of organizations—perhaps those led by people of color or serving marginalized communities—experience algorithmic disadvantage? What values should guide fair AI in philanthropy? How can communities most affected by AI decisions participate in evaluating them?

Intersectional Gaps

Intersectionality recognizes that people hold multiple identities: race, gender, class, disability status, and other dimensions interact to shape lived experience. Most AI research examines bias along a single dimension, studying racial bias, gender bias, or class bias separately. This leaves an intersectional gap: we know little about how AI systems treat people with multiple marginalized identities, or how decisions along one identity dimension affect another.

For grants research, an intersectional gap might be: How are grant outcomes affected by the intersection of organizational focus (serving LGBTQ+ communities), organization size (small and underfunded), and leader demographics (people of color)? Do algorithms disadvantage organizations at multiple intersections of marginalization?

From Gaps to Research Agendas

Identifying gaps is valuable, but transforming them into concrete research agendas requires additional steps. A research gap becomes researchable when you: specify the precise question, identify why it matters, understand what existing knowledge is relevant, determine what evidence would answer the question, and assess whether you have resources to investigate.

Not every gap is appropriate for every researcher. A small nonprofit shouldn't attempt a five-year longitudinal study of organizational impacts of AI adoption—that's resource-intensive. But a nonprofit could survey its applicants about their experience with AI-assisted review, addressing a significant gap with feasible research.

Practitioner-Generated Research

Some of the most important research gaps will be filled not by academics but by practitioners—nonprofit staff, grantmakers, and nonprofit leaders investigating their own work. You have insider knowledge, access to data, and stakes in the outcomes. Research you conduct about how your AI systems affect your grantmaking addresses real gaps while serving your organization's learning.

Emerging Frontiers in AI + Grants Research

Beyond identifying current gaps, emerging areas represent research frontiers. Generative AI for grants is new enough that comprehensive research is limited. How will foundation program officers use AI to write grant guidelines? What are the risks of using large language models to write grant reviews? How might AI change the relationship between foundations and nonprofits? These frontier questions are increasingly important but remain largely unanswered.

Similarly, research on community-based participatory approaches to AI evaluation in grantmaking is rare. How do you involve grantees and applicants in evaluating grant AI systems? What does AI justice look like in philanthropy? How can AI serve the least powerful in philanthropic relationships?

Key Takeaway

Research gaps exist where important questions remain unanswered. Identifying gaps requires understanding the current research landscape, recognizing that gaps often involve understudied populations and topics, and paying particular attention to gaps affecting equity and justice. The most meaningful research addresses gaps that both advance knowledge and serve communities most impacted by AI.

Apply This

Identify one research gap relevant to your work with AI in grants. Spend 30 minutes reviewing recent literature on your topic. Document: What questions do existing papers answer? What populations are studied? What perspectives are missing? What would research addressing this gap contribute?

The Seminar: Research Gap Identification Workshop

This lesson's seminar brings together practitioners and researchers to identify priority research gaps in AI and grants. Working in small groups, you'll map the current state of knowledge about a specific topic, identify what's missing, and prioritize the most important gaps to address. Through discussion, you'll recognize how different perspectives—academic, practitioner, funder, community—identify different gaps as most urgent.

Conclusion: Your Research Can Fill Gaps

Understanding research gaps transforms how you approach your own work. Rather than wondering if your research question is important enough, systematic gap identification reveals that significant questions remain unanswered. Whether you're a nonprofit evaluating how your AI systems affect grantees, a foundation studying the effects of algorithmic matching on diversity of grantees, or an independent researcher investigating AI ethics in philanthropy, you're likely addressing genuine gaps. That work matters.

Build Your Research Foundation

Contribute to the evidence base shaping AI in philanthropy.

Explore Full Course