As artificial intelligence becomes increasingly prevalent in the nonprofit and grants sector, the ability to rigorously study its ethical implications has never been more important. Organizations deploying AI systems—whether for grant matching, application review, or constituent communication—must understand not only how these systems function, but also what their societal impacts are. Research methodologies provide the systematic tools to answer these questions with credibility and rigor.
This lesson explores the diverse methodological approaches available to nonprofit professionals, grantmakers, and researchers investigating AI ethics. You'll learn how to design research that produces actionable insights, whether you're conducting a small organizational audit or contributing to the broader scholarly conversation about AI in philanthropy.
Sound research begins with a clear research design: the overall strategy and structure for investigating a question. For AI ethics research, your design must address fundamental questions: What am I trying to understand? Why does it matter? How will I know if my answer is credible? What methodologies are best suited to this question?
The foundation of any research design rests on your research questions. Rather than vague inquiries ("Is this AI system ethical?"), effective research questions are specific and answerable: "What percentage of applicants from historically excluded communities report that this AI-assisted review process was transparent?" or "What are the perceived risks of bias in algorithmic grant matching according to nonprofit leaders?"
Your research questions should align with your organization's priorities and constraints. A grant program officer has different needs than an academic researcher. A research question appropriate for a doctoral dissertation may be different from one suitable for a focused organizational assessment conducted in a few weeks.
Quantitative methods emphasize numerical measurement, statistical analysis, and generalizable findings. In AI ethics research, quantitative approaches help answer "how many," "how much," and "to what extent" questions.
Surveys are efficient tools for gathering data from large groups. A carefully designed survey might measure how nonprofit staff perceive the fairness of an AI hiring recommendation system, or gather usage statistics about an AI-powered grant writing assistant. Effective surveys require precise question design, appropriate response scales, and strategic distribution to reach target respondents.
Survey design for AI ethics research involves particular challenges. Respondents may lack technical knowledge about how algorithms work, requiring clear explanations of the system being evaluated. Questions must be specific enough to capture nuance but not so detailed that respondents become overwhelmed. Pilot testing with a small group before full deployment helps identify and resolve these issues.
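To make the measurement side concrete, the sketch below shows one way to summarize responses to a five-point fairness question by respondent group. It is a minimal illustration, not a prescribed instrument: the column names, the groups, and the 1–5 scale are assumptions made for this example.

```python
# Minimal sketch: summarizing a five-point Likert item on perceived fairness,
# broken out by respondent group. Column names ("respondent_group",
# "fairness_rating"), the groups, and the 1-5 scale are illustrative assumptions.
import pandas as pd

responses = pd.DataFrame({
    "respondent_group": ["program staff", "program staff", "applicants",
                         "applicants", "applicants", "board members"],
    "fairness_rating": [4, 5, 2, 3, 2, 4],  # 1 = very unfair, 5 = very fair
})

summary = (
    responses.groupby("respondent_group")["fairness_rating"]
    .agg(n="count",
         mean_rating="mean",
         pct_favorable=lambda s: (s >= 4).mean() * 100)
    .round(2)
)
print(summary)
```

Even a simple summary like this makes it easier to see whether perceptions of fairness differ across the stakeholder groups you surveyed, which in turn sharpens the questions you ask in follow-up interviews.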
Auditing AI systems involves analyzing their inputs, outputs, and decision patterns to identify potential ethical concerns. A grants organization might audit a matching algorithm to examine whether recommendations vary systematically across different organization types, funding priorities, or demographic characteristics.
Effective algorithmic audits require access to system data and sufficient technical expertise. Questions might include: Does this system recommend different results for substantively identical applications? Are certain categories of applicants systematically less likely to receive high recommendation scores? Do error rates vary across population groups?
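As a minimal illustration of the kind of analysis an audit might run, the sketch below compares high-score recommendation rates across applicant categories and checks whether the system "missed" applications that reviewers ultimately funded. The file name, column names, and the 0.7 threshold are hypothetical assumptions, not a standard audit specification.

```python
# Minimal audit sketch: do high recommendation scores occur at different rates
# across applicant categories? The file "scored_applications.csv", the columns
# ("org_category", "recommendation_score", "was_funded"), and the 0.7 threshold
# are hypothetical placeholders for your own system's export.
import pandas as pd

audit_data = pd.read_csv("scored_applications.csv")
audit_data["high_score"] = audit_data["recommendation_score"] >= 0.7

# Rate of high scores per category, and the gap relative to the overall rate.
by_category = audit_data.groupby("org_category")["high_score"].mean()
overall = audit_data["high_score"].mean()
print("High-score rate by category:\n", by_category.round(3))
print("Gap vs. overall rate:\n", (by_category - overall).round(3))

# Error-rate check: among applications a human panel ultimately funded,
# how often did the system score them below the threshold?
funded = audit_data[audit_data["was_funded"] == 1]
miss_rate = 1 - funded.groupby("org_category")["high_score"].mean()
print("Miss rate among funded applications, by category:\n", miss_rate.round(3))
```

A real audit would go further, for example by testing matched pairs of substantively identical applications, but even this level of disaggregation can surface patterns worth investigating.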
While quantitative methods measure, qualitative methods explore meaning. These approaches are essential for understanding how stakeholders experience AI systems and what concerns they have about ethics and fairness.
In-depth interviews allow researchers to explore stakeholder experiences in detail. A nonprofit might interview grant applicants about their experience with an AI-assisted screening process, allowing stories and contextual understanding to emerge. Focus groups bring together multiple stakeholders to discuss topics collectively, surfacing different perspectives and enabling dialogue.
Interviews and focus groups for AI ethics research should explore not just whether systems work, but how they affect decision-making, which voices are heard, and what concerns stakeholders have. Questions might explore: How did the AI system's recommendation influence your final decision? What concerns do you have about how this system handles different types of organizations? What would increase your trust in this system?
Case studies examine specific instances—a particular grant program, organization, or implementation—in depth. An ethnographic approach might involve sustained observation of how staff use an AI system day to day, revealing gaps between intended use and actual practice.
Ethnographic methods are particularly valuable for understanding organizational context and how systems affect real work. How do staff members adapt around system limitations? What workarounds develop? How does the system change organizational culture and decision-making norms?
Mixed-methods research combines quantitative and qualitative data. You might survey hundreds of nonprofits about their AI adoption (quantitative) and then interview twenty of them in depth (qualitative) to understand why they made those choices. This combination provides both breadth and depth.
Participatory action research actively involves community members and stakeholders in the research process itself. Rather than external researchers studying a nonprofit, participatory approaches might involve nonprofit staff as co-researchers investigating their own AI systems. This builds organizational capacity while ensuring research addresses questions stakeholders actually care about.
Critical race theory, feminist research approaches, and other justice-oriented methodologies emphasize how power, inequality, and historical context shape AI systems and their impacts. These approaches ask: Who has the power to develop, deploy, and benefit from these systems? Whose perspectives are centered and whose are marginalized? How do systems reflect and reinforce existing inequalities?
For grants organizations, critical perspectives might examine: Are funding recommendations channeling resources toward organizations and populations that have historically been supported, or toward those with greatest need? How do assumptions built into matching algorithms reflect the values and blind spots of algorithm designers?
AI ethics research can be empirical (studying what actually happens) or philosophical (examining what should happen). Both are valuable. Empirical ethics research might study: What do nonprofit leaders believe constitutes fair AI use in grant reviews? How do different AI systems affect diversity in funding outcomes?
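One concrete way to study that last empirical question is to compare how funding is distributed across community types under different recommendation systems. The sketch below uses invented figures purely for illustration; the system labels, community categories, and dollar amounts are all assumptions.

```python
# Minimal sketch: comparing how funding dollars are distributed across
# community types under two hypothetical recommendation systems.
# All labels and figures are illustrative, not real data.
import pandas as pd

awards = pd.DataFrame({
    "system": ["A", "A", "A", "B", "B", "B"],
    "community_type": ["urban", "rural", "tribal"] * 2,
    "dollars_awarded": [600_000, 250_000, 150_000, 450_000, 350_000, 200_000],
})

# Share of total dollars each community type receives under each system.
shares = awards.pivot_table(index="community_type", columns="system",
                            values="dollars_awarded", aggfunc="sum")
shares = shares / shares.sum()
print(shares.round(3))
```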
Philosophical ethics approaches engage normative questions about values: What do we owe applicants in terms of transparency? What constitutes just allocation of resources? Philosophical analysis helps clarify what values should guide AI systems in the grants sector.
Regardless of methodology, sound research requires good data management. Document your sampling procedures, data collection protocols, and analysis decisions. Maintain secure storage for sensitive information. Keep detailed notes about the research process—decisions made, challenges encountered, how problems were resolved.
Quality matters more than quantity. A thoughtful interview with five nonprofit executives who deeply know AI deployment in their organizations produces better insights than superficial responses from fifty people. Similarly, a carefully scoped algorithmic audit of a few key decision points produces more useful results than an unfocused analysis of every available data point.
Research methodologies provide systematic approaches to studying AI ethics in nonprofits. The most appropriate methods depend on your specific research questions, resources, and stakeholder needs. Combining quantitative measurement, qualitative understanding, and critical analysis produces the richest insights.
For this lesson's research lab, you'll design a complete research study investigating an AI ethics question relevant to your organization. Your design should include: your research question(s), methodological approach, sample or participant description, data collection procedures, analysis plan, and timeline. If you're preparing to study an AI system your organization uses or plans to use, ground your design in that reality. If not, choose a hypothetical scenario relevant to the grants sector.
Your design should demonstrate understanding of why particular methods suit your questions. Why interviews rather than surveys? Why quantitative analysis alongside qualitative observation? How will you ensure diverse stakeholder voices are represented? How will you manage ethical considerations like informed consent and data privacy?
Strong research methodologies transform important questions about AI ethics into actionable understanding. Whether you're auditing a system your organization uses, evaluating a vendor's proposal, or contributing to the broader scholarship on AI in philanthropy, methodological rigor matters. The approaches you've learned in this lesson—quantitative measurement, qualitative exploration, critical analysis, and participatory methods—provide tools to investigate what matters most about how AI shapes the nonprofit sector's future.