Every research project stands on foundations built by earlier work. Literature reviews are systematic processes for discovering, evaluating, and synthesizing what others have learned. Whether you're writing a journal article, developing organizational policy, evaluating grant proposals, or simply trying to understand the current state of knowledge about AI ethics in nonprofits, literature review skills are essential.
A good literature review is more than a list of what others have done. It's a thoughtful synthesis that shows how previous research relates to your questions, identifies patterns and tensions in existing knowledge, and establishes why your research matters. This lesson teaches you approaches to conducting literature reviews and synthesizing findings effectively.
Different research questions and purposes call for different review approaches. Understanding which type suits your needs ensures you invest effort efficiently.
Narrative reviews provide an overview of a topic based on the reviewer's expert judgment about which sources are most important and how they fit together. These reviews are excellent for exploring topics broadly, understanding how different perspectives relate, and providing context for why a research question matters. However, they can reflect the reviewer's biases about what's important.
A narrative review of AI ethics in grantmaking might explore philosophical foundations of justice in funding, describe how AI systems currently work, discuss ethical concerns raised in various disciplines, and conclude with implications for the sector. It's flexible and accessible but less systematic than other approaches.
Systematic reviews follow explicit, reproducible procedures for finding, evaluating, and synthesizing evidence. You specify search strategies in advance, define clear inclusion and exclusion criteria, systematically appraise the quality of studies, and explicitly describe how you synthesized findings. This rigor comes at a cost: systematic reviews are time-intensive and can be inflexible if your searches reveal unexpected needs.
A systematic review of algorithmic fairness interventions in grant allocation would: specify search terms and databases, document which papers met inclusion criteria and why others were excluded, rate the methodological quality of included studies, and systematically compare what different studies found. The result is high-confidence knowledge about the current evidence base.
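Tracking screening decisions as data rather than memory makes the documentation step straightforward. Here is a minimal sketch in Python; the citations, reasons, and criteria are invented illustrations, not a prescribed protocol:

```python
# Minimal screening log: every candidate paper gets a recorded decision,
# so exclusions are documented rather than silent. All entries are invented.
screening_log = []

def screen(citation, included, reason):
    """Record one inclusion/exclusion decision with its stated reason."""
    screening_log.append({"citation": citation, "included": included, "reason": reason})

screen("Smith (2023), Journal of Philanthropy Studies", True,
       "empirical study of an algorithm used in grant review")
screen("Lee (2022), conference preprint", False,
       "simulation only; no real grant-allocation data")
screen("Okafor (2021), trade magazine piece", False,
       "commentary, not an empirical study")

included = sum(1 for entry in screening_log if entry["included"])
print(f"{included} of {len(screening_log)} candidates met the inclusion criteria.")
```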
Scoping reviews map the landscape of research on a topic, identifying what research exists, what populations and contexts are studied, and what gaps remain. They're designed to answer questions like "What is known about X?" or "What research exists on Y, and what is its scope?" Scoping reviews are more flexible than systematic reviews but still systematic in their approach.
A scoping review of AI in nonprofit funding would map: What research exists on AI in philanthropy? What methods do studies use? What populations are studied? What geographic contexts? What findings emerge? The result is understanding of the research landscape rather than synthesis of evidence on a specific question.
Meta-analysis statistically combines findings from multiple quantitative studies, producing pooled estimates of effects. Meta-analyses are powerful for questions like "How large is the effect of algorithmic bias on nonprofit grant awards?" or "How much do staff perceptions of AI fairness vary across organization types?" However, meta-analysis only works when studies are comparable and report appropriate statistics. Much qualitative and mixed-methods research can't be meta-analyzed.
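To make "pooled estimate" concrete: one common approach is fixed-effect, inverse-variance weighting, where each study's effect is weighted by the inverse of its squared standard error. A minimal sketch with invented numbers follows; real meta-analysis also assesses heterogeneity and often uses random-effects models:

```python
from math import sqrt

# Invented effect sizes (e.g., standardized mean differences) and standard
# errors from three hypothetical studies -- not real data.
effects = [0.30, 0.12, 0.45]
std_errors = [0.10, 0.08, 0.15]

# Fixed-effect (inverse-variance) pooling: each study is weighted by 1/SE^2.
weights = [1 / se**2 for se in std_errors]
pooled_effect = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled_effect:.3f} (SE {pooled_se:.3f})")
```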
Good literature reviews start with good search strategies. You need to locate the sources relevant to your topic while remaining efficient. This requires knowledge of databases, search terminology, and search techniques.
Different databases index different literature. Academic Search Complete indexes broad social science and humanities literature. JSTOR provides access to journal back issues. Google Scholar offers free access to academic literature but with less precise search control. PubMed specializes in medical literature. For nonprofit and grants topics, databases like ProQuest Dissertations & Theses, hand-searching of nonprofit and philanthropy journals, and searches of organization websites (foundations, nonprofit research centers) all contribute relevant sources.
For AI ethics research, you'll likely search multiple databases: computer science databases (IEEE Xplore, ACM Digital Library) for technical work, psychology and social science databases (PsycINFO) for behavioral and social research, philosophy databases (PhilPapers) for ethics work, and general social science databases for nonprofit and policy research.
Effective search requires keywords—terms that capture your research topic. For a review on algorithmic fairness in grant allocation, relevant keywords might include: algorithm, fairness, bias, grant, funding, philanthropy, nonprofit, machine learning, artificial intelligence, allocation, decision-making, distribution. Different databases require different search syntax, but the principle is the same: combining terms to locate relevant literature.
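The combining principle can be sketched as grouping synonyms with OR and linking concepts with AND. The terms below are illustrative, and the exact syntax (wildcards, quoting, field tags) differs across databases:

```python
# Illustrative synonym groups for a review of algorithmic fairness in
# grant allocation; adapt the terms to the vocabulary of your field.
concept_groups = [
    ["algorithm*", "machine learning", "artificial intelligence", "AI"],
    ["fairness", "bias", "equit*", "discrimination"],
    ["grant*", "funding", "philanthrop*", "nonprofit"],
]

# Synonyms are ORed within a group; the groups are ANDed together.
query = " AND ".join(
    "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"
    for terms in concept_groups
)
print(query)
# (algorithm* OR "machine learning" OR "artificial intelligence" OR AI) AND ...
```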
Search iteratively. Your first search often reveals unexpected terminology used in the field. You discover that researchers more often say "equitable" than "fair," or that they discuss "algorithmic impact assessment" rather than "audit." You learn that funding-related research is published in both philanthropy and nonprofit management literatures. You find that ethical concerns appear in both computer science and philosophy journals. Each search informs your next search.
Even focused searches can return hundreds of papers. You need systems to manage this volume. Citation management software like Zotero, Mendeley, or EndNote helps you store PDFs, keep track of bibliographic information, annotate sources, and organize materials into thematic folders.
As you review each source, document: the full citation, the research question(s) addressed, methodologies used, key findings, relevance to your topic, and your quality assessment. A simple spreadsheet works, as does specialized software. What matters is having a system you'll actually use—one complex enough to capture what you need but simple enough to maintain.
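One possible shape for such a record, sketched in Python; the field names, file name, and example entry are hypothetical, not a standard:

```python
import csv

# One row per source; the fields mirror what the text above suggests documenting.
FIELDS = ["citation", "research_question", "methodology",
          "key_findings", "relevance", "quality_assessment"]

sources = [
    {
        "citation": "Doe & Rivera (2024), hypothetical journal article",
        "research_question": "Does algorithmic screening change applicant diversity?",
        "methodology": "Quasi-experimental comparison of two review cycles",
        "key_findings": "Recommended pool was less geographically diverse",
        "relevance": "Directly addresses allocation fairness",
        "quality_assessment": "Moderate: single foundation, no randomization",
    },
]

with open("extraction_table.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(sources)
```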
Not all sources are equally credible. Critical appraisal means evaluating the quality of studies—assessing methodological rigor, appropriateness of methods to questions, researcher expertise, transparency about limitations, and potential biases.
Strong empirical studies: use appropriate methods for their questions, clearly describe their methods so you can judge whether they were implemented well, report relevant statistics with sufficient precision, transparently discuss limitations, and acknowledge alternative explanations for findings.
Weak studies: lack methodological rigor, don't clearly describe their methods, overstate their findings, hide limitations, or selectively report results favoring particular conclusions.
For AI ethics in grants research, you might critically appraise: Does this study actually examine grant allocation or does it only use grants as an example? Were the AI systems studied representative of systems actually used in philanthropy? Were findings based on real data or simulation? Did the authors have conflicts of interest in the systems they studied?
Once you've identified and appraised relevant sources, you must synthesize their findings. Synthesis is more than summarizing—it's organizing findings to address your research question or purpose.
Thematic synthesis groups findings around common themes or concepts. You might organize findings about algorithmic fairness in grants into themes: "What causes unfairness," "Approaches to detecting unfairness," "Strategies for improving fairness," "Organizational barriers to fair algorithms." Each theme brings together relevant findings from multiple sources, revealing patterns and tensions.
Narrative synthesis organizes findings into a coherent story that answers your questions. Rather than thematic categories, you weave together findings to explain a phenomenon: "How do nonprofits currently use AI in grant applications? This section describes purposes organizations cite, then explores what research shows about actual uses versus stated purposes, then examines outcomes." The narrative moves from question through evidence toward conclusions.
In systematic reviews, synthesis often involves creating tables or matrices that allow comparison across studies. One column might show "Study," another "Population," another "Key Findings," another "Quality Rating." This organization makes patterns visible. You can see at a glance: which populations are understudied, whether findings across studies align or conflict, whether stronger studies and weaker studies reach different conclusions.
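A small sketch of this kind of evidence matrix, using pandas and invented entries, shows how patterns surface once studies sit side by side:

```python
import pandas as pd

# A tiny illustrative evidence table (all studies and findings are invented).
evidence = pd.DataFrame([
    {"study": "A (2022)", "population": "Community foundations",
     "key_finding": "Algorithm narrowed applicant diversity", "quality": "High"},
    {"study": "B (2023)", "population": "Federal grant programs",
     "key_finding": "No measurable change in award patterns", "quality": "Moderate"},
    {"study": "C (2023)", "population": "Community foundations",
     "key_finding": "Diversity improved after fairness constraints", "quality": "Low"},
])

# Patterns become visible once studies share one table:
print(evidence["population"].value_counts())                    # which contexts are understudied
print(evidence.groupby("quality")["key_finding"].apply(list))   # do stronger and weaker studies diverge?
```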
Literature reviews often reveal conflicting findings. Some studies find algorithmic decision support improves grant allocation fairness; others find it reduces it. How do you make sense of this disagreement?
First, examine whether studies actually conflict or whether they address different questions. One study might find that algorithms improve diversity of applicants recommended; another that recommended applicants are of lower quality. Both can be true if addressing different outcomes. Another study might find algorithms improve fairness in one context but not another, suggesting context matters.
Second, examine study quality. If stronger studies reach different conclusions than weaker ones, the stronger studies' conclusions deserve more weight. If all high-quality studies reach similar conclusions and low-quality studies contradict them, this matters for your synthesis.
Third, consider whether disagreement reflects genuine uncertainty or incomplete evidence. Maybe different findings reflect different operational definitions of "fairness" or different contexts of implementation. Your synthesis should acknowledge this complexity rather than forcing false agreement.
Literature on a topic typically includes quantitative studies (measuring outcomes), qualitative studies (understanding experiences), and theoretical work (analyzing concepts). All contribute valuable knowledge. Your synthesis should integrate these different types of evidence.
Quantitative studies might show that algorithmic recommendations reduce diversity of recommendations compared to human-only review. Qualitative studies might reveal why: algorithms encode historical patterns, and historical funding favored certain organization types. Theoretical analysis might explore what makes this situation ethically problematic and what alternative values could guide algorithms. Together, these provide richer understanding than any single approach.
Literature reviews produce large amounts of information. Visualizing findings helps you understand patterns and communicate findings to others. Tables comparing studies, timelines showing how research has evolved, concept maps showing relationships between ideas, and figures displaying patterns or trends all help organize and communicate.
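As one simple example, a timeline of included sources by publication year can be drawn in a few lines; the counts below are invented placeholders for your own source list:

```python
import matplotlib.pyplot as plt

# Invented publication counts per year, standing in for your own included sources.
years = [2018, 2019, 2020, 2021, 2022, 2023]
counts = [2, 3, 5, 9, 14, 21]

plt.bar(years, counts)
plt.xlabel("Publication year")
plt.ylabel("Number of included sources")
plt.title("Growth of research on AI in grantmaking (illustrative data)")
plt.tight_layout()
plt.savefig("review_timeline.png")
```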
Literature review writing differs from other academic writing. Rather than developing a single argument throughout, reviews organize findings around themes or questions, exploring how different sources relate to each topic. Good review writing: uses topic sentences that tell readers what the section addresses, synthesizes findings rather than simply summarizing sources, explicitly states how sources relate to each other, identifies gaps and tensions, and connects findings back to your research purpose.
Literature review is a systematic process of discovering, evaluating, and synthesizing published knowledge. Different review types suit different purposes. Quality reviews require good search strategies, systematic organization, critical appraisal of sources, thoughtful synthesis that integrates different types of evidence, and clear communication of findings and gaps.
Conduct a scoping review of research on one aspect of AI in grants (e.g., algorithmic bias in application review, grant matching algorithms, AI use in nonprofit capacity building). Spend 3-4 hours searching multiple databases. Document 15-20 relevant sources. Create a table showing: source, research question, methodology, key findings, relevance to your topic. Write a 3000-4000 word synthesis of what research shows, what gaps remain, and why this matters for the sector.
Your research lab task is to conduct and write a literature review on a grants-and-AI topic of your choice. Your review should demonstrate: effective search strategies identifying relevant sources, critical appraisal showing you evaluated source quality, synthesis organizing findings around themes or questions, integration of different types of evidence, acknowledgment of gaps and limitations, and clear communication of what is known and what remains uncertain.
Literature review is foundational research work. It builds cumulative knowledge by connecting new research to what's already been learned. Strong reviews don't just catalog what others have done—they identify patterns, reveal gaps, highlight tensions, and create foundations for advancing the field. As you conduct or read literature reviews on AI ethics in grants, remember that synthesis is the heart of the work: bringing findings together to answer important questions.