In This Article
- Why is impact measurement still broken?
- What's the difference between outputs, outcomes, and impact?
- How should foundations design measurement frameworks?
- How can we make measurement collaborative, not extractive?
- How do we aggregate impact across a portfolio?
- How should we report impact to boards and stakeholders?
- What do we do when impact can't be measured?
Why is impact measurement still broken?
After decades of impact investing, outcome measurement, results-based accountability, and data-driven philanthropy, we're still asking the same question: Did our grants actually work?
The measurement crisis isn't a lack of tools or frameworks. Foundations have sophisticated platforms, consultants, and methodologies. The crisis is that impact measurement has become a compliance burden rather than a learning opportunity. Grantees spend months completing questionnaires designed to satisfy funder requirements rather than inform their own work. Foundation staff review hundreds of reports that measure everything except what actually matters. Boards receive dashboards full of data but little insight into whether their capital is creating the change they intended.
This happens because we've built measurement systems backwards. We start with what's easy to count and then count it, instead of starting with what we need to learn and working out how to capture it.
Impact measurement works when it serves a purpose beyond compliance. That purpose might be:
- Learning: Understanding what works in your field so you can make better grant decisions
- Accountability: Demonstrating to your board and donors that your grants are achieving their intended change
- Grantee development: Helping nonprofit partners improve their own programs and operations
- Evidence generation: Building a body of evidence about solutions to specific problems
- Adaptation: Knowing when to shift strategy or double down on what's working
Most foundations try to accomplish all five simultaneously with a single measurement system. That's why measurement fails. A system designed for learning doesn't generate the clean data boards want for accountability. A system designed for accountability often creates perverse incentives that distort how nonprofits operate. A system designed to help grantees improve rarely produces the standardized metrics needed for portfolio-level analysis.
The first step in fixing impact measurement is accepting that you need different measurement approaches for different purposes.
What's the difference between outputs, outcomes, and impact?
This framework has been around for decades, yet foundation staff and nonprofit leaders still conflate these terms in ways that undermine the entire measurement process. Getting clear on definitions isn't academic—it determines what you measure, how you measure it, and what you can actually conclude.
Activities are what your grantee does. Outputs are what's directly produced by those activities. Outcomes are changes in behavior, knowledge, attitude, or skill. Impact is sustained change in the condition or status of an individual or community.
Here's a concrete example from workforce development:
- Activity: Operating a coding bootcamp
- Output: 150 individuals complete the 12-week program
- Outcome: 120 graduates (80%) secure employment in tech within 6 months
- Impact: Graduates earn 40% more than they did before, remain in tech roles for 3+ years, and have moved into supervisory positions
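To make the distinction mechanical, here's a minimal sketch in Python of how the output, outcome, and impact-proxy numbers above could be computed from the same program records. The records and field names are invented for illustration, not a real reporting schema:

```python
from dataclasses import dataclass

@dataclass
class Graduate:
    employed_in_tech: bool      # within 6 months of completion
    baseline_earnings: float    # annual earnings before the program
    current_earnings: float     # annual earnings after placement

# Illustrative records: 150 completers, 120 of whom found tech jobs.
graduates = (
    [Graduate(True, 30_000, 42_000)] * 120
    + [Graduate(False, 30_000, 30_000)] * 30
)

# Output: what the program directly produced.
completions = len(graduates)

# Outcome: a change in status, which needs a denominator and a timeframe.
placed = [g for g in graduates if g.employed_in_tech]
placement_rate = len(placed) / completions

# Impact proxy: earnings change against each graduate's own baseline.
# A real impact claim would also need a comparison group and multi-year data.
avg_earnings_gain = sum(
    (g.current_earnings - g.baseline_earnings) / g.baseline_earnings
    for g in placed
) / len(placed)

print(f"Output:       {completions} completions")
print(f"Outcome:      {placement_rate:.0%} placed in tech within 6 months")
print(f"Impact proxy: {avg_earnings_gain:.0%} average earnings gain")
```

Notice what moving up each level forces you to add: the output is a bare count, the outcome requires a denominator and a timeframe, and even the impact proxy requires a baseline for every individual.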
Most foundations measure activities and outputs obsessively because they're easy to count. A nonprofit reports it served 1,500 clients. Delivered 45 workshops. Reached 8 schools. These numbers are clean, unambiguous, and reportable. But they tell you almost nothing about whether the nonprofit's work actually improved lives.
The measurement challenge shifts dramatically as you move up this hierarchy:
| Level | Time to Measure | Cost to Measure | Key Challenge |
|---|---|---|---|
| Activities | Immediate | Low | Measures effort, not effect |
| Outputs | Weeks | Low | Doesn't measure change |
| Outcomes | Months | Moderate | Requires comparison group or baseline |
| Impact | Years | High | Isolating your program's contribution |
A crucial insight: not every grant needs to measure impact. If you're funding an emergency response, you care about outputs (food distributed, families sheltered, lives saved). If you're funding workforce development, you need to measure outcomes (employment, earnings, retention). If you're funding education reform, you might need impact metrics (long-term earnings, health, civic engagement).
Your measurement approach should match your theory of change and your grant's timeframe. A 2-year grant shouldn't attempt to measure 10-year impact. A $50,000 grant shouldn't require a $15,000 evaluation.
How should foundations design measurement frameworks?
Effective measurement frameworks start with strategy, not metrics. Most foundations reverse this order: they define metrics first, then try to fit them into a theory of change. This produces measurement systems that are technically sound but strategically inert—they measure things that don't matter.
Here's the right sequence for designing a measurement framework:
1. Define Your Theory of Change
What do you believe needs to happen for your intended change to occur? Not what you hope happens—what must happen?
- If-then statements
- Causal assumptions
- Timeline expectations
- External context factors
2. Identify Key Questions
What do you need to know to improve your grantmaking? What decisions depend on this knowledge?
- Strategy questions
- Learning questions
- Accountability questions
- Decision-driving questions
3. Select Indicators
Only after you know what you want to learn should you choose how to measure it.
- Leading indicators
- Lagging indicators
- Outcome proxies
- Quality measures
4. Design Data Collection
How will data be collected, by whom, and how frequently? Build from existing systems.
- Primary research
- Administrative data
- Secondary data
- Survey instruments
5. Establish Data Standards
How will data be quality-checked, stored, and analyzed? What's the timeline?
- Data definitions
- Quality thresholds
- Analysis approach
- Reporting calendar
6. Plan for Learning
How will findings inform decisions? When will you review data and adjust strategy?
- Review schedule
- Decision rules
- Adaptation protocol
- Feedback loops
A practical example: A foundation funding education innovation wants to know whether its grants are building the capacity of school districts to sustain reforms after funding ends. This requires a different framework than a foundation funding direct service would need.
Their theory of change: Districts with external funding will adopt innovations, build internal capacity through training and systems change, and institutionalize reforms in their own budgets.
Key questions: Are funded districts adopting the innovation? Are district leaders and staff developing the skills to manage it independently? Are districts allocating their own resources to sustain the work?
Indicators: Implementation fidelity scores, staff training completion, internal funding allocated, retention of trained staff, integration with district initiatives.
Data collection: Annual implementation assessments, staff surveys, budget analysis, case studies with 5-6 districts, interviews with district leaders.
Timeline: Baseline year 1, interim assessment year 2, sustainability assessment year 3, long-term follow-up year 5.
This framework is actually simpler than the ones most foundations use, yet far more useful, because every element connects to a strategic question.
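One way to enforce that connection is to store the framework as structured data rather than prose. Here's a minimal sketch in Python; the keys and entries paraphrase the education example above, and none of this is a standard schema:

```python
# A measurement framework as data: each indicator must point back to
# the key question it answers, which makes orphaned metrics visible.
framework = {
    "theory_of_change": (
        "Funded districts adopt the innovation, build internal capacity, "
        "and institutionalize reforms in their own budgets."
    ),
    "key_questions": {
        "Q1": "Are funded districts adopting the innovation?",
        "Q2": "Are staff developing skills to manage it independently?",
        "Q3": "Are districts allocating their own resources to sustain it?",
    },
    "indicators": [
        {"name": "implementation_fidelity_score", "answers": "Q1",
         "source": "annual implementation assessment"},
        {"name": "staff_training_completion", "answers": "Q2",
         "source": "staff surveys"},
        {"name": "trained_staff_retention", "answers": "Q2",
         "source": "staff surveys"},
        {"name": "internal_funding_allocated", "answers": "Q3",
         "source": "district budget analysis"},
    ],
}

# Sanity check: flag questions no indicator answers, and indicators
# that answer no known question -- both are design failures.
answered = {i["answers"] for i in framework["indicators"]}
print("Questions with no indicator:",
      set(framework["key_questions"]) - answered or "none")
print("Indicators with no question:",
      answered - set(framework["key_questions"]) or "none")
```

The sanity check at the end is the point: a metric that answers no strategic question is exactly the "technically sound but strategically inert" measurement this section warns against.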
How can we make measurement collaborative, not extractive?
The word "burden" appears in almost every conversation with nonprofits about foundation measurement requirements. This isn't accidental. Most foundations design measurement systems from the top down: what foundation leaders want to know, translated into grantee reporting requirements.
From a nonprofit's perspective, this is extraction—the foundation is taking data without reciprocal value. The nonprofit completes detailed reports that inform foundation decision-making while providing no insight into their own program effectiveness. Measurement becomes a condition of funding rather than a tool for improvement.
Collaborative measurement flips this dynamic. The foundation and grantee design measurement approaches together, with explicit agreements about:
- Shared learning goals: What do both parties need to know about program effectiveness?
- Data ownership: Who owns the data, and how can it be used?
- Burden sharing: How are measurement costs allocated?
- Timeliness: When do grantees get their own data back?
- Confidentiality: How is sensitive data protected?
- Adaptation: How can measurement approaches evolve as programs learn?
The Measurement Conversation
Collaborative measurement starts with a structured conversation instead of a compliance form. Rather than sending grantees a data collection template, foundation program officers might ask:
"What's the most important outcome you're trying to achieve with this grant? How would you know if your program was working? What decisions would that information help you make? How could we design measurement that serves your learning while also helping us understand our portfolio's effectiveness?"
This conversation often reveals misalignments. A nonprofit might be focused on outcomes the foundation never considered. The foundation might want data on a different timeline than what's operationally feasible. The grantee might already be collecting data in a format that doesn't match the foundation's requirements.
Collaborative measurement processes surface these tensions and address them explicitly. The contrast with the extractive default looks like this:
Extractive vs. Collaborative Measurement
| Extractive Measurement | Collaborative Measurement |
|---|---|
| Foundation designs measurement system | Foundation and grantee design together |
| Data flows one direction (to funder) | Data flows both directions |
| Grantee sees data in foundation reports | Grantee receives own data first for internal learning |
| Measurement serves accountability only | Measurement serves learning and accountability |
| Fixed metrics throughout grant period | Flexible approach adapted as program evolves |
| Burden falls entirely on grantee | Burden and cost are shared |
| Program staff see measurement as compliance | Program staff see measurement as improvement tool |
| Misalignment often goes undetected | Misalignments surface and get addressed |
Collaborative measurement requires time. That's an investment many foundations avoid. But the time invested upfront—in designing measurement together—dramatically reduces the time wasted later on non-responsive data, confused reporting, and measurement systems that don't serve anyone's actual needs.
How do we aggregate impact across a portfolio?
One of measurement's greatest promises is showing foundation boards what impact their grants collectively generate. The challenge: portfolios are almost always too diverse for simple aggregation.
A foundation funding education, economic development, and health simultaneously can't reduce all grants to a single "number of lives improved" metric. The grants operate on different timelines, measure different outcomes, and serve different populations. Forcing aggregation would require either oversimplification or abandoning meaningful outcomes for what's countable.
Effective portfolio reporting accepts this diversity and works with it. Instead of false aggregation, it organizes findings around strategic questions:
Portfolio Impact Dimensions
- Strategic reach: Are we funding the best-positioned organizations to drive change in this issue area?
- Approach diversity: Are we funding a portfolio of approaches or betting too heavily on a single strategy?
- Capacity building: Are our grants strengthening the field's infrastructure and talent?
- Evidence generation: Are we building knowledge about what works?
- Sustainability: Are grantees positioned to sustain this work after funding?
- Equity: Are benefits distributed equitably across populations?
- Systems change: Are we catalyzing policy, practice, or resource shifts beyond our grants?
Instead of rolling outcomes into a single number, portfolio reporting describes each dimension with the evidence available:
Evidence: Of 18 program directors trained through our grants over 5 years, 15 remain in education roles—14 in leadership positions. Four have secured external funding to expand their work independently. Two are now training other organizations. The program director retention rate is 83%, compared to 60% for education leaders nationally.
Implication: Our capacity-building grant strategy appears to be working. We're developing leaders who stick with this work and multiply their impact. This suggests we should continue investing in professional development while exploring whether we can increase the diversity of program directors we support.
Aggregation Strategies That Work
Outcome clustering: Group grants by outcome category and report aggregate data for each cluster. A foundation funding economic mobility might cluster grants into: income support, job training, educational access, financial capability. Report outcomes for each cluster separately, not as a single number.
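As a sketch of what outcome clustering looks like mechanically, here's a short Python example using hypothetical grant records and the cluster names from the economic-mobility example above:

```python
from collections import defaultdict

# Hypothetical grants, each tagged with one outcome cluster and reporting
# an outcome in that cluster's own units -- never converted to one number.
grants = [
    {"grantee": "A", "cluster": "job training",       "outcome": ("placements", 85)},
    {"grantee": "B", "cluster": "job training",       "outcome": ("placements", 40)},
    {"grantee": "C", "cluster": "income support",     "outcome": ("households stabilized", 310)},
    {"grantee": "D", "cluster": "educational access", "outcome": ("enrollments", 120)},
]

# Aggregate within clusters only; report each cluster separately.
clusters = defaultdict(lambda: defaultdict(int))
for g in grants:
    metric, value = g["outcome"]
    clusters[g["cluster"]][metric] += value

for cluster, metrics in clusters.items():
    for metric, total in metrics.items():
        print(f"{cluster}: {total} {metric}")
```

The value of the structure is in what it refuses to do: there is no step that sums placements with enrollments, so the report can never collapse into a false single number.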
Contribution analysis: Rather than trying to isolate your grants' unique effect, describe the contribution your grants made to broader changes. What changed in the systems or fields you're funding? What role did your grants play? What else contributed?
Outcome hierarchies: Show both immediate outcomes (what grantees directly achieved) and intermediate outcomes (what grantees enabled others to achieve). A grant funding policy advocacy might have an immediate outcome (a policy passed) and an intermediate outcome (regulations changed, affecting millions).
Case studies and typologies: Instead of aggregating all grants, select diverse grantees and document their impact stories in depth. Use cases to illustrate what success looks like in different contexts.
Leading indicators at portfolio level: If long-term impact takes years to materialize, track portfolio-level leading indicators now. Grantee organizational health, partner engagement, community ownership—these predict eventual impact and can be tracked across grants.
How should we report impact to boards and stakeholders?
Board reporting is where measurement often fails most spectacularly. Foundations spend months collecting rigorous data, then summarize it into a dashboard that obscures everything important and overweights what's trivial.
Effective board reporting answers the strategic questions foundation leaders actually need to answer. It acknowledges complexity. It highlights both successes and stubborn problems. It connects data to strategy and decisions.
Board Report Structure
1. Strategic context (2-3 pages)
What change are we trying to drive? What's happened in this field since our last report? What assumptions about our strategy have held up or shifted?
2. Grant-level findings (3-5 pages)
What have individual grantees achieved? What have we learned about effective approaches? What surprises or concerns have emerged?
3. Portfolio analysis (3-5 pages)
Across all grants in this strategy, what's working? Where are gaps? Are there emerging patterns?
4. Implications and decisions (2-3 pages)
What do these findings mean for our strategy? What decisions does the board need to make? What's working that we should double down on? What's not working that we should change?
5. Appendices
Data, methodology, case studies, grantee spotlights. Board members who want details can find them. The main report stays accessible.
This structure prioritizes narrative over numbers. Numbers appear when they illuminate, not as a substitute for thinking.
Common Reporting Mistakes
We report outputs and call them impact
Board receives data on how many people were served, workshops delivered, schools reached. This isn't bad data—it just isn't impact data.
We include all available data regardless of relevance
Board drowns in charts and metrics instead of focusing on what matters. Dashboard fatigue sets in. Real insights get buried.
We report only success stories
Board never learns about grants that underperformed, assumptions that were wrong, or course corrections needed. Decisions get made on incomplete information.
The fix: we connect data to strategy and decisions
Board understands what's changing, why it matters, what's working, what's not, and what decisions they need to make. Data serves leadership.
The best board reports often include what didn't work or what we learned was different from expectations. This isn't weakness—it's credibility. Boards that only hear success stories trust reports less. Boards that hear honest reflection trust more and make better decisions.
What do we do when impact can't be measured?
Many important changes are difficult or impossible to measure directly. How do you measure whether a grant advanced justice? Whether it shifted systems? Whether it built social movements? Whether it prevented something bad from happening?
When direct impact measurement is infeasible, foundations have other options besides guessing.
Proxies and Leading Indicators
If long-term impact takes years to materialize, measure progress toward impact. A grant working on criminal justice reform might measure:
- Increase in public support for reform (survey data)
- Media coverage of reform issues (media analysis)
- Number of elected officials who've committed to specific reforms (political tracking)
- Bills introduced or amended (legislative tracking)
- Participation in advocacy coalitions (partner engagement)
None of these directly measures reduced incarceration or improved justice outcomes. But together, they provide reasonable evidence that you're building the conditions for change.
Contribution Analysis
Rather than claiming you caused specific change, describe what changed and what role your grant played. This is honest about complexity while still providing accountability.
Example: "Three states reformed bail practices this year. Our grants supported the advocacy coalition in two states and provided policy research that influenced federal guidance. We didn't cause these changes alone—advocates, officials, media, and public opinion all contributed. But our grants were a necessary part of the change."
Theory of Change Validation
When you can't measure final outcomes, test whether your theory of change is playing out. You believe reform happens through: advocacy → media attention → public awareness → political pressure → legislative change.
Measure each step. Do grantees have capacity to advocate effectively? Is media covering their issues? Are constituents becoming aware? Are elected officials responding? If each step is working, you have reasonable confidence in the path to impact, even if final impact takes years to see.
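Here's a minimal sketch of that step-wise validation, treating the causal chain as an ordered list and checking how far the evidence currently reaches. The step names and status judgments are hypothetical:

```python
# The reform theory of change as an ordered causal chain. Each step holds
# a judgment about whether current evidence supports it.
chain = [
    ("advocacy capacity",  True),   # grantees advocating effectively?
    ("media attention",    True),   # issues getting coverage?
    ("public awareness",   True),   # constituents becoming aware?
    ("political pressure", False),  # officials responding?
    ("legislative change", None),   # too early to assess
]

# Confidence in the path to impact extends only as far as the first
# unsupported or unassessed step.
for step, supported in chain:
    if supported is True:
        print(f"  supported: {step}")
    else:
        status = "not yet supported" if supported is False else "not yet assessable"
        print(f"  BREAK at '{step}' ({status}) -- focus measurement here")
        break
```

The break point is the finding: it tells you where your theory is currently weakest and where the next round of measurement effort should go.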
Qualitative Evidence
Some impact is best understood through stories and depth. A grant working on transforming how social services are delivered might measure through:
- In-depth case studies with service users, staff, and administrators
- Interviews with field leaders about what's changing
- Documentation of program evolution
- Narrative accounts of decision-making processes and choices
These aren't rigorous in the traditional experimental sense, but they provide a depth of understanding that numbers often obscure.
Transparency About Limitations
The most honest approach: be explicit about what you can and can't measure. Don't pretend rough estimates are precise. Don't claim causation when you're only measuring association.
Ready to Transform Your Foundation's Measurement Approach?
Measuring grant impact effectively is a learned skill. Start by identifying which strategic questions your foundation needs answered, then design measurement approaches that serve those questions—not the reverse.
Key Takeaways
Measurement serves strategy
Design measurement systems around the strategic questions your foundation needs answered, not around what's easy to count.
Different purposes need different approaches
Don't expect a single system to serve learning, accountability, and grantee development simultaneously. Define your primary purpose and design around it.
Collaborative beats extractive
When foundations and grantees design measurement together, the result serves both parties better and reduces the experience of measurement as burden.
Honest beats impressive
A measurement report that acknowledges complexity and uncertainty builds more trust and supports better decision-making than one offering false certainty.