Design rigorous, achievable evaluation plans that measure what matters while remaining feasible and budget-conscious.
Evaluation plans reveal whether you've thought seriously about measuring program impact. They show you're committed to learning, willing to be accountable, and realistic about what you can measure. A strong evaluation plan builds funder confidence that you'll deliver on promised outcomes.
Evaluation plans also protect you. By documenting how you'll measure outcomes before you implement, you establish baseline expectations and create accountability for delivering against them. An evaluation plan is both a promise to funders and a roadmap for your team.
Process evaluation assesses implementation fidelity: Did you conduct activities as planned? Did you reach the target number of participants? Were services delivered with quality? Process evaluation answers whether you're executing your program model properly.
Process evaluation typically includes activity tracking (how many sessions were conducted), attendance and participation data (how many participants engaged), quality assessment (whether activities met fidelity standards), and participant satisfaction (how participants rated the services they received). Process evaluation is often easier to measure because it tracks what you're doing.
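The process metrics above can be computed directly from routine records. A minimal sketch in Python, using hypothetical session attendance data (the records and figures are illustrative, not from any real program):

```python
# Hypothetical session records: (session_id, enrolled, attended)
sessions = [
    ("week1", 25, 22),
    ("week2", 25, 19),
    ("week3", 25, 24),
]

# Activity tracking: how many sessions were conducted
sessions_conducted = len(sessions)

# Participation data: overall attendance rate across all sessions
total_enrolled = sum(enrolled for _, enrolled, _ in sessions)
total_attended = sum(attended for _, _, attended in sessions)
attendance_rate = total_attended / total_enrolled

print(f"Sessions conducted: {sessions_conducted}")
print(f"Attendance rate: {attendance_rate:.0%}")
```

Even a spreadsheet with the same columns works; the point is that process metrics fall out of records you are likely already keeping.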
Outcome evaluation assesses whether your program produced the intended changes in participants. Did academic engagement improve? Did social-emotional skills develop? Did employment increase? Outcome evaluation is more complex because causality is harder to establish, but it's central to demonstrating impact.
Strong proposals include both: process evaluation demonstrating you implemented well, and outcome evaluation demonstrating that implementation produced results.
Your evaluation plan must specify how you'll collect data for each outcome. Common methods include surveys and questionnaires, pre/post assessments, administrative records (such as school attendance data), interviews and focus groups, and direct observation.
Stronger evaluation plans use multiple data sources. If you measure academic engagement through student self-report alone, it's less credible than combining self-report, school attendance records, and teacher report.
Your plan should specify when you'll collect data. For a one-year program: baseline/pre-assessment at enrollment (month 1), mid-program check (month 6), post-program assessment at exit (month 12). For longer programs: baseline, periodic progress checks, and end-of-program assessment.
Include follow-up timeframe if relevant. Will you measure outcomes at program end? At three-month follow-up? Six-month follow-up? Follow-up demonstrates whether changes persist after the program ends, which is credible evidence of impact.
Your evaluation plan should include specific success benchmarks: What constitutes meaningful improvement? "Participants will improve academic engagement" is vague. "Participants will improve academic engagement by a minimum of 20%, measured by increase in attendance from baseline 75% to 90%, and increase in homework completion from 60% to 80%" is clear.
Benchmarks should be ambitious but achievable. Setting benchmarks at 100% improvement is unrealistic; setting them at 5% seems trivial. Research similar programs to understand what's realistic in your context.
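The benchmark arithmetic in the example above is simple to verify: relative improvement is (target minus baseline) divided by baseline. A quick sketch using the figures from the example benchmark:

```python
def relative_improvement(baseline, target):
    """Percent change from baseline to target, as a fraction."""
    return (target - baseline) / baseline

# Attendance: baseline 75% -> target 90%
attendance_gain = relative_improvement(0.75, 0.90)  # 20% relative improvement
# Homework completion: baseline 60% -> target 80%
homework_gain = relative_improvement(0.60, 0.80)    # about 33% relative improvement

# Check each benchmark against the stated 20% minimum improvement
for name, gain in [("attendance", attendance_gain), ("homework", homework_gain)]:
    print(f"{name}: {gain:.0%} improvement, meets 20% minimum: {gain >= 0.20}")
```

Note that a 15-point gain in attendance (75% to 90%) is a 20% relative improvement; stating both the absolute targets and the relative improvement, as the example benchmark does, removes ambiguity for reviewers.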
Strong evaluation plans include both process and outcome evaluation, specify data collection methods, establish realistic timelines, and define clear success benchmarks. They demonstrate your commitment to measurement while remaining feasible given your resources.
AI can assist with evaluation plan development: drafting measurement descriptions for each outcome, organizing outcomes into an alignment table, and suggesting data collection methods and benchmark ranges for you to verify against your own context.
Critical evaluation decisions require your knowledge of your program: which outcomes matter most, what's appropriate to measure with your participants, how to measure it credibly, and what benchmarks are realistic in your context.
Your evaluation plan must align directly with your logic model. Every outcome in your logic model should be measured in your evaluation plan. If your logic model claims "participants develop leadership skills," your evaluation plan must specify how you'll measure leadership skill development and what constitutes success.
Create a simple alignment table showing: Outcome (from logic model) | How We'll Measure It | Data Source | Collection Timeline | Success Benchmark. This table ensures every promised outcome has a measurement plan.
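One way to keep the alignment table honest is to hold it as structured data and check that every outcome has a complete measurement plan. A minimal sketch in Python; the outcome rows are illustrative, not from any real logic model:

```python
# One dict per outcome, columns matching the alignment table (illustrative data)
alignment_table = [
    {
        "outcome": "Improved academic engagement",
        "measure": "Attendance rate and homework completion",
        "data_source": "School records, teacher report",
        "timeline": "Baseline (month 1), mid (month 6), exit (month 12)",
        "benchmark": "Attendance 75% -> 90%; homework 60% -> 80%",
    },
    {
        "outcome": "Leadership skill development",
        "measure": "Pre/post leadership self-assessment",
        "data_source": "Participant survey",
        "timeline": "Baseline (month 1), exit (month 12)",
        "benchmark": "",  # missing: this row still needs a success benchmark
    },
]

# Flag any outcome whose row has an empty column
required = ["outcome", "measure", "data_source", "timeline", "benchmark"]
incomplete = [
    row["outcome"]
    for row in alignment_table
    if any(not row.get(col) for col in required)
]
print("Outcomes missing part of their measurement plan:", incomplete)
```

The same check works manually in a spreadsheet: any blank cell in the table is a promised outcome without a measurement plan.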
If your evaluation plan shows signs of overreach, revise it to make it more feasible. An evaluation plan that's too ambitious often fails, resulting in no data. A plan that's realistic but modest succeeds and builds credibility.
Using your logic model from earlier, create a complete evaluation plan. (1) List each outcome from your logic model. (2) For each outcome, specify your data collection method(s). (3) Establish an evaluation timeline showing when you'll collect baseline, progress, and exit data. (4) Define success benchmarks for each outcome (minimum improvement level). (5) Create an alignment table connecting each outcome to measurement method, data source, and timeline. (6) Add budget lines for evaluation activities (staff time, assessment tools, data management). Have a colleague review for feasibility: can you realistically execute this plan?
Strong evaluation plans typically include: both process and outcome evaluation, multiple data sources for key outcomes, baseline data collected before the program begins, realistic timelines, clear success benchmarks, and budget lines for evaluation activities.
Watch out for these indicators of weak evaluation plans: vague benchmarks ("participants will improve"), a single data source for every outcome, no baseline measurement, outcomes promised in the logic model with no measurement plan, and no budget allocated for evaluation.
If you engage an external evaluator (increasingly expected for larger grants), your evaluation plan should specify the evaluator's role and responsibilities, the deliverables and reporting schedule, and the portion of the evaluation budget allocated to their work.
External evaluators bring objectivity and expertise but add cost. Balance the value of external credibility against the cost, and be realistic about what budget you can allocate to evaluation.
Evaluation should serve learning and accountability, not punitive purposes. Communicate clearly to participants what data you're collecting and how you'll use it. Protect participant privacy in all data collection and reporting. Use evaluation results to improve your program, not to blame staff or exclude participants. Ethical evaluation builds trust with your community.
Solid evaluation plans align with logic models, include both process and outcome measures, use multiple data sources, establish realistic timelines and benchmarks, and remain feasible within your resources. AI helps draft and organize evaluation plans, but your program knowledge determines what's appropriate to measure and how to measure it credibly.
In the next lesson, you'll learn how to customize proposals for government funders with specific compliance requirements, federal language expectations, and SAM.gov considerations.
Master Government Funder Compliance