Build rigorous, funder-aligned logic models while ensuring outcomes remain realistic and verifiable.
Your theory of change and logic model represent your program's core hypothesis: how you believe your activities will produce results. They're the foundation that all other proposal sections rest on. Without a clear, credible logic model, proposals lack coherence—activities don't connect to outcomes, evaluation doesn't measure what matters, and funders question your program logic.
Logic models often feel like a compliance box to check. But sophisticated funders use logic models as a strategic tool to assess program quality. A clear logic model demonstrates you've thought deeply about causality (why your approach will work), measurement (how you'll know), and outcomes (what you expect to achieve).
Inputs are the resources you invest: staff, funding, partnerships, equipment, curriculum. They're not outcomes—they're the assets you activate to implement your program. Strong logic models specify inputs clearly because they establish credibility (you have what you need to do this work) and inform others about program cost structure.
Activities are the specific program actions you undertake. If you're providing mentoring, activities include mentor recruitment, mentor training, youth recruitment, match formation, monthly meetings, and mentor-youth relationship support. Vague activities ("provide mentoring") are less credible than specific activities that explain how mentoring actually happens.
Outputs are the direct, measurable results of activities. If your activity is "conduct monthly mentoring sessions," your output is "1,200 mentoring sessions conducted annually" or "500 youth matched with mentors." Outputs measure effort and reach; they answer "did we do what we said we'd do?"
Outcomes are the changes you expect in participants' knowledge, skills, attitudes, or behavior that result from your program. If your activity is "mentoring sessions" and your output is "500 youth matched," your outcome might be "participants improve academic engagement" or "youth develop relationship and communication skills." Outcomes answer "does our program create the changes we hope for?"
Impact represents longer-term, community-level change resulting from many people experiencing your outcomes. If your outcome is "improved academic engagement," your impact might be "increased high school graduation rates in our community." Impact is what happens when outcomes scale and compound over time.
Logic models connect inputs and activities to specific, measurable outcomes with clear causal pathways. Strong models show you understand what you do, why you do it, and how you'll know it's working.
Provide AI with a description of your program: what it is, who it serves, what activities you conduct. Include information about your staff, resources, partnerships, and program model. AI can draft sections describing your inputs and activities based on this information. You review for accuracy and specificity.
This is crucial and human-driven. You define what outcomes you expect your program to produce. These should be specific and achievable based on your program. Don't let AI generate outcomes—AI will produce generic outcomes that may not match your actual program theory. You decide: "How do we expect participants to be different because they participated in our program?"
Once you've defined inputs, activities, and outcomes, AI can draft the narrative section explaining your logic model. Provide AI with your program components and ask it to explain the theory of change: "Why do you believe these activities will produce these outcomes? What's the causal logic?"
AI can articulate logic models clearly but should do so based on your direction about what logic your program actually follows. If AI generates a logic model you don't believe in, reject it. Your program's theory of change must reflect your genuine beliefs about how change happens in your work.
This is where the "verification imperative" becomes critical. AI might generate outcomes that sound impressive but are unrealistic, unmeasurable, or misaligned with your program. The most common problems—overclaiming attribution, promising unproven outcomes, and stating vague, unmeasurable outcomes—are discussed below.
You verify every outcome: Is this realistic given our program? Is it specific enough to measure? Do we have the capacity and methods to measure it? Can we achieve it within the funding period? If any answer is "no," revise the outcome.
Don't let AI-generated outcomes define your program logic. If AI suggests outcomes you don't genuinely believe your program can achieve, reject them. Overpromising outcomes creates credibility problems and evaluation failures. Your logic model should reflect your honest assessment of what your program produces, not aspirational thinking.
A youth mentoring program can claim outcomes like "improved academic engagement" or "increased social-emotional skills." It cannot credibly claim "increased high school graduation rates" unless graduation is directly measured and your program is the primary driver. Graduation has many influences; your program contributes but doesn't solely determine it.
The solution: Distinguish between outcomes you directly produce (skills development, attitudes, behaviors you actively teach) and outcomes you contribute to (graduation, employment, health—where you're one influence among many). Claim contribution to broader outcomes only if your evaluation can demonstrate your program's specific influence.
Don't promise outcomes you've never measured before and have no evidence your program produces. If you're claiming "improved academic engagement," do you have preliminary data showing participants actually improve academic engagement? Or are you claiming it based on theory alone?
The solution: Ground outcomes in either previous experience ("our prior program iteration produced X outcome") or preliminary evidence ("our pilot served 20 youth and 85% showed improvement in academic engagement measures"). New programs should claim more modest outcomes until you generate evidence.
"Improved engagement" is vague. What's engagement? How do you measure it? Better: "Participants raise average class attendance from 75% to 90% and complete 90% of assigned homework (measured via school records and participant report)." Specificity demonstrates you understand what you're measuring.
Using your proposal project from Lesson 6.1, develop a complete logic model for your program. (1) List your inputs (staff, funding, partnerships, curriculum). (2) Describe your key activities. (3) Define your expected outputs (how many participants, how much service). (4) Define 3-5 specific, measurable outcomes you expect. For each outcome, identify how you'll measure it. (5) Using AI, draft a 300-400 word Theory of Change narrative explaining why you believe your activities produce your outcomes. (6) Review the draft critically: Is the outcome logic sound? Are outcomes realistic? Would a skeptical funder believe this?
Quality AI output depends on quality prompting. Here's how to prompt AI for strong logic models:
Poor prompt: "Write a logic model for our mentoring program."
Better prompt: "I'm writing a logic model for our mentoring program serving low-income youth in urban schools. Our program provides trained volunteer mentors who meet monthly with matched youth. We expect outcomes in academic engagement, relationship skills, and career awareness. Draft a logic model narrative explaining how monthly mentoring relationships produce these outcomes. Keep it to 300 words. Use clear, specific language. Avoid generic phrasing."
The second prompt gives AI context, specificity, length guidance, and tone direction. It produces better results.
Different funders want different levels of detail in logic models. Maintain one master logic model, then adapt its presentation to each funder's preferences rather than inventing different program logics for different audiences.
When reviewing your logic model (whether AI-generated or human-written), watch for the red flags discussed above: outcomes that are unrealistic for your program, outcomes you have no evidence or capacity to measure, and vague claims you cannot operationalize. If your logic model shows any of these, revise it. A weak logic model undermines the entire proposal.
Logic models connect your activities to specific outcomes through clear causal reasoning. AI can help draft and articulate logic models, but you must verify that outcomes are realistic, measurable, and genuinely central to your program. The best logic models reflect your honest assessment of your program's impact, not aspirational thinking.
In the next lesson, you'll learn how to blend storytelling, data, and community voice into compelling narratives that bring your logic model to life while maintaining analytical rigor.