Many grant professionals view logic models and evaluation plans as technical requirements—boxes to check rather than strategic opportunities. This is a significant mistake. For sophisticated funders (government agencies, major foundations, research-focused funders), your logic model and evaluation plan reveal whether you actually understand your program's theory of change and whether you can realistically measure what matters.
A weak logic model signals fuzzy thinking about how your program creates change. A vague evaluation plan suggests you won't actually know whether your program works. A well-constructed logic model and evaluation plan, conversely, demonstrate strategic sophistication and accountability. They're often the difference between competitive and non-competitive proposals.
The good news? Logic models and evaluation plans follow predictable structures. Once you understand these structures, you can engineer prompts that guide AI tools to produce excellent work in these complex areas.
A logic model is a visual or narrative representation of how your program works. It shows the chain of cause-and-effect from what you put into your program (inputs) through what you do (activities) to what you produce (outputs) to what changes as a result (outcomes) to the broader impact (impact).
Inputs: The resources your program requires: staff, funding, partnerships, space, materials, volunteers. "Our program requires 1.0 FTE Program Director, 2.0 FTE case managers, $75,000 annual budget, office space, partnership with the school district." Inputs should be realistic and specific to your program.
Activities: The program activities and services. "We conduct comprehensive intake assessments, develop individualized service plans, provide weekly counseling, facilitate peer support groups, connect participants to job training." Activities should be concrete and implementable with your inputs.
Outputs: The direct products of your activities. "95 participants complete intake, 80 develop service plans, 70 complete counseling series, 65 secure job training enrollment." Outputs are measurable and result directly from your activities.
Short-term Outcomes: The short-term changes participants experience (usually 6-12 months). "Participants increase financial literacy, participants gain job skills, participants report increased hope and agency, participants identify career goals." These are early indicators of progress.
Long-term Outcomes: The sustained changes (12+ months or after program exit). "Participants secure stable employment, participants maintain housing, participants remain employed one year later, participants earn a living wage or better." These outcomes demonstrate lasting impact.
Impact: The community-level change that results from your program at scale. "Reduced homelessness in the community, increased economic stability for justice-involved individuals, reduced recidivism." Impact acknowledges that your program is part of a larger system of change.
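If it helps to see the whole chain at once, here is a minimal sketch of a logic model as a plain data structure, in Python. The field names mirror the sections above and the sample values are drawn from this lesson's examples; this is not a required format, just one way to keep the chain explicit:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """One chain of cause and effect: inputs -> activities -> outputs -> outcomes -> impact."""
    inputs: list[str] = field(default_factory=list)               # resources the program requires
    activities: list[str] = field(default_factory=list)           # what the program does
    outputs: list[str] = field(default_factory=list)              # direct, countable products
    short_term_outcomes: list[str] = field(default_factory=list)  # changes within ~6-12 months
    long_term_outcomes: list[str] = field(default_factory=list)   # sustained changes, 12+ months
    impact: list[str] = field(default_factory=list)               # community-level change at scale

# Sample values drawn from the examples above
model = LogicModel(
    inputs=["1.0 FTE Program Director", "2.0 FTE case managers", "$75,000 annual budget"],
    activities=["Intake assessments", "Individualized service plans", "Weekly counseling"],
    outputs=["95 participants complete intake", "70 complete counseling series"],
    short_term_outcomes=["Increased financial literacy", "Job skills gained"],
    long_term_outcomes=["Stable employment maintained one year after exit"],
    impact=["Reduced homelessness in the community"],
)
```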
A common mistake in AI-generated logic models is unrealistic outcomes. The AI tool, trying to be helpful, generates outcome statements that sound good but that you'll never achieve. Your job as the prompt writer is to constrain the logic model to what's realistic given your actual resources, context, and theory of change.
Context: [Your organization name, the population you serve, the problem you address, your program (name and primary activities), annual participant numbers, program duration, your actual staff capacity, what you currently achieve or have achieved in the past]. Be specific about what's realistic: "We serve 60 participants annually. 85% complete the program. 70% are employed 6 months after program exit. These are our actual current outcomes, not aspirational targets."
Role: Acting as an experienced program evaluator who understands realistic outcomes and can create logic models that are ambitious but achievable.
Action: Create a logic model that shows the realistic chain of cause and effect from our inputs through our activities to our actual measurable outcomes. Make it ambitious but grounded in what we can actually achieve with our resources and in our context. Use actual data about what we currently achieve.
Format: Create the logic model in [table format / narrative format / six-box visual format]. Include these columns/sections: Inputs, Activities, Outputs, Short-term Outcomes, Long-term Outcomes, Impact. Each section should include 3-4 specific elements with realistic metrics where applicable.
Tone: Confident but realistic. Show we understand what we can actually accomplish. Avoid overselling; present what we genuinely achieve.
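Because the template is just five labeled fields, it can be assembled mechanically. Here is a minimal sketch in Python; the `build_logic_model_prompt` helper is hypothetical, not part of any library, and the sample values paraphrase the template above:

```python
def build_logic_model_prompt(context: str, role: str, action: str,
                             format_spec: str, tone: str) -> str:
    """Assemble the Context/Role/Action/Format/Tone template into one prompt string."""
    return "\n\n".join([
        f"Context: {context}",
        f"Role: {role}",
        f"Action: {action}",
        f"Format: {format_spec}",
        f"Tone: {tone}",
    ])

prompt = build_logic_model_prompt(
    context=("We serve 60 participants annually. 85% complete the program. "
             "70% are employed 6 months after program exit. These are our actual "
             "current outcomes, not aspirational targets."),
    role="Act as an experienced program evaluator who understands realistic outcomes.",
    action=("Create a logic model showing the realistic chain from our inputs through "
            "our activities to our actual measurable outcomes."),
    format_spec=("Table with columns: Inputs, Activities, Outputs, Short-term Outcomes, "
                 "Long-term Outcomes, Impact. 3-4 specific elements per column."),
    tone="Confident but realistic; avoid overselling.",
)
print(prompt)
```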
The best logic models are grounded in reality. Your prompt should emphasize the actual outcomes you achieve, the realistic constraints of your context, and the genuine limitations of your resources. Funders respect honesty about what's achievable far more than inflated promises you won't meet.
An evaluation plan answers: What are we trying to find out? How will we find it out? Who will collect the data? When will we collect it? How will we analyze and use it?
Evaluation Questions: What do you actually want to know about your program? Examples: "Are participants gaining job skills?", "Do participants maintain housing after program exit?", "Are we reaching the populations we intend to serve?", "What's most valuable about the program from participants' perspectives?" Good evaluation questions are specific and answerable.
Data Collection Methods: How will you answer your evaluation questions? Pre/post surveys, focus groups, interviews, administrative data review, standardized assessments, observations. Each method has tradeoffs between rigor and feasibility. Your prompt should emphasize methods you can actually implement.
Data Sources: Where does the data live? Program participant surveys, staff interviews, school records, employment databases, medical records, participant case files. Data sources affect what you can realistically access and track.
Timeline: When will you collect data? At program start, midpoint, completion, follow-up periods? Be realistic about what data collection is feasible given your staff time and resources.
Data Analysis and Use: How will you make sense of the data and use it to improve the program? Who will analyze it? How will you share findings internally and with funders? This demonstrates that evaluation will actually inform decisions.
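Those five pieces give an evaluation plan a simple, regular shape. As an illustration only (the keys are made up, not a standard schema), the whole plan fits in a small structure:

```python
# An evaluation plan as plain data: questions, methods, sources, timing, analysis/use.
evaluation_plan = {
    "questions": [
        "Are participants gaining job skills?",
        "Do participants maintain housing after program exit?",
    ],
    "methods": ["pre/post surveys", "administrative data review"],  # feasible, not just rigorous
    "data_sources": ["participant surveys", "client tracking database"],
    "timeline": {"baseline": "program start", "post": "completion", "follow_up": "6 months"},
    "analysis_and_use": (
        "Part-time evaluator compares pre/post scores quarterly; "
        "findings reviewed with program staff and summarized for funder reports."
    ),
}
```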
Common mistakes in AI-generated evaluation plans: too ambitious given resources, too many evaluation questions, data collection methods that are expensive or require expertise you don't have, and no clear connection between evaluation and program improvement. Your prompt should constrain the evaluation to what's realistic.
Context: [Your program, the outcomes you want to measure, your current evaluation capacity (who has time for evaluation work), your budget for evaluation, existing data systems you use, any prior evaluation experience, any data you already collect]. Be specific about constraints: "We have a part-time evaluator (0.25 FTE). We use a basic database for client tracking. We've never conducted focus groups. We need evaluation methods that don't require hiring additional expertise."
Role: Acting as an evaluator who specializes in creating practical, feasible evaluation plans for small nonprofits with limited resources.
Action: Develop a realistic evaluation plan that measures [your primary outcomes]. Focus on methods we can actually implement with our staff and budget. The plan should answer these specific evaluation questions: [list 3-4 key questions]. Include methods we're equipped to use, data sources that are accessible to us, and a timeline that works with our capacity.
Format: Organize as: Evaluation Questions, Data Collection Methods (with specific details about what we'll collect), Timeline (when we'll collect data), Data Analysis Plan (what we'll do with the data), and Use Plan (how we'll use findings). Use short paragraphs and bullet points. Keep total length to 400-500 words.
Tone: Realistic and practical. Emphasize what we can actually measure well rather than what we wish we could measure. Show clarity about feasible evaluation within our constraints.
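The same mechanical assembly works here. A rough sketch, again with a hypothetical helper, showing how the capacity constraints and the 3-4 key questions slot into the template:

```python
def build_evaluation_prompt(context: str, questions: list[str]) -> str:
    """Fill the evaluation-plan template; the fixed Role/Format/Tone text paraphrases the template above."""
    numbered = "\n".join(f"  {i}. {q}" for i, q in enumerate(questions, 1))
    return (
        f"Context: {context}\n\n"
        "Role: Act as an evaluator who specializes in practical, feasible "
        "evaluation plans for small nonprofits with limited resources.\n\n"
        "Action: Develop a realistic evaluation plan that answers these questions:\n"
        f"{numbered}\n"
        "Use only methods we can implement with our staff and budget.\n\n"
        "Format: Evaluation Questions, Data Collection Methods, Timeline, "
        "Data Analysis Plan, Use Plan. Short paragraphs and bullet points; 400-500 words.\n\n"
        "Tone: Realistic and practical."
    )

print(build_evaluation_prompt(
    context=("We have a part-time evaluator (0.25 FTE), a basic client-tracking "
             "database, and no budget to hire additional evaluation expertise."),
    questions=["Are participants gaining job skills?",
               "Are we reaching the populations we intend to serve?"],
))
```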
Key Prompt Adjustments for Different Program Types:
| Program Type | Key Adjustment to Template |
|---|---|
| Direct service (mental health, case management) | Include client perception data and clinical assessment tools. Add timeline for collecting data after program exit. |
| Education/training | Focus on skill assessments, completion rates, credential attainment. Include post-program employment or continued education tracking. |
| Prevention/public health | Include reach metrics (how many people accessed the program) and behavior change indicators. Acknowledge that some outcomes take time to appear. |
| Community organizing/advocacy | Include participation metrics, leadership development measures, and policy/system change indicators. Acknowledge challenge of attribution. |
| Very small program (under 20 annual participants) | Focus on depth over breadth: individual interviews or case studies instead of surveys. With a small n, interviews can be both rigorous and feasible. |
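One way to put this table to work is to store the adjustments as data and append the relevant one to your base template prompt. A minimal sketch; the dictionary simply transcribes the rows above, and `adjust_prompt` is a hypothetical helper:

```python
# Table rows as data: program type -> adjustment sentence for the base template prompt.
ADJUSTMENTS = {
    "direct_service": ("Include client perception data and clinical assessment tools. "
                       "Add a timeline for collecting data after program exit."),
    "education_training": ("Focus on skill assessments, completion rates, and credential "
                           "attainment. Track post-program employment or continued education."),
    "prevention_public_health": ("Include reach metrics and behavior change indicators. "
                                 "Acknowledge that some outcomes take time to appear."),
    "community_organizing": ("Include participation metrics, leadership development measures, "
                             "and policy/system change indicators. Acknowledge attribution limits."),
    "very_small_program": ("Focus on depth over breadth: individual interviews or "
                           "case studies instead of surveys."),
}

def adjust_prompt(base_prompt: str, program_type: str) -> str:
    """Append the program-type adjustment from the table to a base template prompt."""
    return f"{base_prompt}\n\nAdjustment for this program type: {ADJUSTMENTS[program_type]}"
```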
Example Evaluation Plan Prompts for Different Scenarios:

Example 1: Job Training Program

Logic Model Focus: Chain from training activities through job placement to employment stability and wage growth.
Evaluation Plan Focus: Job placement rate (60 days post-program), job quality (wage level, benefits), job retention (still employed 6 months later), skill development (pre/post assessments).
Prompt Adjustment: "This is a 12-week intensive program. We serve 40 participants annually. Historically, 80% complete the program, 70% are placed in jobs within 60 days, 65% are still employed 6 months later. These are our realistic targets, not aspirational. Create a logic model and evaluation plan grounded in these actual outcomes."
Example 2: Youth Mentoring Program

Logic Model Focus: Chain from mentor matching through relationship development to academic/social outcomes.
Evaluation Plan Focus: Match quality, relationship development, academic engagement, school attendance, participant confidence/belonging.
Prompt Adjustment: "This is a long-term program (2+ years per participant). We match 30 youth with mentors annually. We need evaluation methods that can track relationship development over time. Include measures of what mentees value in the relationship, not just academic metrics."
Example 3: Clinical Mental Health Program

Logic Model Focus: Chain from assessment through treatment to symptom improvement to functioning/independence.
Evaluation Plan Focus: Symptom reduction (using validated scales), functional improvement (employment, housing, relationships), participant-reported wellbeing, and access equity.
Prompt Adjustment: "This is a clinical program. We use specific assessment tools (PHQ-9, GAD-7), so we can measure symptom change reliably. We also want to capture how participants experience their own progress and what's meaningful to them. Create an evaluation plan that balances clinical rigor with participant voice."
Select the grant proposal you've been working with. Gather information about your actual program outcomes (not aspirational targets). Write a prompt requesting a logic model using the template above, customized with your specific program data and realistic outcomes. Generate the logic model. Review it for realism. Adjust if needed. Then do the same with an evaluation plan prompt. Compare the AI-generated versions to any existing logic model or evaluation plan. Note where the AI version is stronger and where you need to adjust.
Problem: Outcome numbers that don't match your context. Solution: Include your actual outcomes in the context. "We currently achieve 70% job placement. Please don't suggest higher outcomes without explanation."
Problem: Too many activities in the activities section. Solution: In your prompt, list your actual 4-5 core activities. "Our program consists of these activities: [list them]. Please build the logic model around these specific activities."
Problem: Outcomes that are too vague. Solution: Request specific, measurable outcomes. "Include specific metrics or indicators for each outcome, not just outcome statements."
Problem: Logic model that doesn't reflect your theory of change. Solution: Explicitly state your theory of change in the prompt. "We believe [specific theory of change]. Please ensure the logic model reflects this theory."
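These fixes share a pattern: each failure mode maps to one extra constraint sentence in the next prompt. A sketch of that pattern (the keys and wording are illustrative, not a standard vocabulary):

```python
# Each observed failure mode maps to a constraint sentence appended to the next prompt.
CONSTRAINTS = {
    "inflated_outcomes": ("We currently achieve {rate} job placement. "
                          "Do not suggest higher outcomes without explanation."),
    "too_many_activities": "Build the logic model around only these activities: {activities}.",
    "vague_outcomes": "Include specific metrics or indicators for each outcome.",
    "off_theory": "Ensure the logic model reflects this theory of change: {theory}.",
}

def constrain(prompt: str, problems: list[str], **details: str) -> str:
    """Append one constraint line per observed problem to an existing prompt."""
    lines = [CONSTRAINTS[p].format(**details) for p in problems]
    return prompt + "\n\nConstraints:\n" + "\n".join(f"- {line}" for line in lines)

fixed = constrain(
    "...your base logic model prompt...",
    ["inflated_outcomes", "vague_outcomes"],
    rate="70%",
)
print(fixed)
```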
Never submit a logic model or evaluation plan that you don't understand and agree with. These documents represent your program's truth to funders. If the AI-generated version doesn't accurately reflect how your program actually works or what you actually achieve, revise it. The purpose of the prompt is to make the AI's work more useful, not to replace your professional judgment.
As you generate logic models and evaluation plans using these prompts, save the best versions. You'll likely use elements of them repeatedly in future proposals to similar funders or for similar programs. Over time, you'll develop a small library of logic model and evaluation templates that capture your program's reality from different angles.
This library becomes incredibly valuable. When you're on a tight deadline for a new grant, you can pull an existing logic model, adjust it for the new funder's emphasis, and have something substantially complete within hours instead of days.
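If you want the library to outlive any single proposal, plain files are enough. A minimal sketch of saving and reloading templates as JSON; the folder layout and function names are assumptions, not a prescribed tool:

```python
import json
from pathlib import Path

LIBRARY = Path("prompt_library")  # assumed layout: one JSON file per saved template

def save_template(name: str, prompt: str, notes: str = "") -> None:
    """Save a prompt that produced a strong logic model or evaluation plan."""
    LIBRARY.mkdir(exist_ok=True)
    path = LIBRARY / f"{name}.json"
    path.write_text(json.dumps({"prompt": prompt, "notes": notes}, indent=2))

def load_template(name: str) -> str:
    """Pull a saved prompt back out to adapt for a new funder."""
    return json.loads((LIBRARY / f"{name}.json").read_text())["prompt"]

save_template(
    "job_training_logic_model",
    "...the prompt that produced your best logic model...",
    notes="Grounded in FY24 outcomes; revisit placement rate annually.",
)
```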
In Lesson 3.5, you'll learn prompt patterns for budget narratives and letters of intent—the sections where you position your organization and justify your costs.