Introduction: Why Nonprofit Leaders Need to Understand AI

If you've been working in nonprofit grants for the past few years, you've likely heard the phrase "AI is going to change everything." It's easy to roll your eyes. Every year brings a new technology that supposedly transforms the sector. But this time, it's different—not because AI is magic, but because it's genuinely useful for the specific work grant professionals do every day.

Before you can use AI effectively in your grant writing and fundraising, you need to understand what it actually is, what it can and cannot do, and why it sometimes produces results that seem brilliant and sometimes produces obvious nonsense. This lesson strips away the hype and builds a foundation of real understanding.

You don't need to be a computer scientist. You just need to understand the basics—how these tools work, what they're really doing under the hood, and why that matters for your credibility and effectiveness as a grant professional.

What Exactly Is Artificial Intelligence?

Let's start with the simplest possible definition: Artificial Intelligence is a computer system trained to recognize patterns in data and use those patterns to make predictions or generate outputs.

That's it. No magic. No consciousness. No sentience. Just pattern recognition at scale.

Think about how you learned to recognize a dog. As a child, you saw many dogs—dogs of different sizes, colors, and breeds. Your brain recognized the patterns: four legs, fur, a tail, a specific way of moving. Now, when you see any dog, even one you've never encountered before, you instantly recognize it. That's pattern recognition.

AI systems do essentially the same thing, but with vastly larger datasets and different kinds of patterns. An image recognition AI might be trained on millions of photos and learn to identify patterns like "shapes and colors that indicate a dog." A language AI might be trained on billions of text documents and learn patterns about how words relate to each other.

Three Types of AI (And Why It Matters)

When people talk about AI in general, they're usually conflating three different things. Let's break them apart:

1. Rule-Based Systems (Traditional AI)

These are computers doing exactly what humans tell them to do, with explicit rules. If someone's age is over 65, classify them as "senior." If a grant deadline is within 30 days, flag it for urgent attention. These systems are reliable but rigid—they only do what they're programmed to do.
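The two rules above really are just explicit conditionals. Here is a minimal sketch; the function names, field values, and thresholds are illustrative, not taken from any real grants system.

```python
from datetime import date, timedelta

def classify_age(age: int) -> str:
    # Explicit, hand-written rule: nothing here is learned from data.
    return "senior" if age > 65 else "standard"

def flag_deadline(deadline: date, today: date) -> bool:
    # Flag any grant deadline that falls within the next 30 days.
    return deadline - today <= timedelta(days=30)

print(classify_age(70))                                     # senior
print(flag_deadline(date(2024, 6, 20), date(2024, 6, 1)))   # True
```

The rigidity is visible in the code: if you want a new rule, a human has to write it.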

2. Machine Learning (ML)

Instead of hand-coding rules, machine learning systems learn patterns from data. You give the system examples (training data), and it figures out the patterns. A machine learning system might learn: "Grants with these characteristics tend to succeed; grants with these characteristics tend to fail." It's more flexible than rule-based systems because it adapts to patterns humans might not have explicitly noticed.
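The contrast with rule-based systems can be shown with a toy example: instead of a hand-coded threshold, the threshold is derived from labeled examples. Both the data and the "learning" method below are deliberately simplified and entirely made up.

```python
# Toy training data: (alignment_score, succeeded?) pairs. Entirely invented.
training = [(0.9, True), (0.8, True), (0.7, True),
            (0.4, False), (0.3, False), (0.2, False)]

def learn_threshold(examples):
    # "Learning" here is just finding the midpoint between the average
    # score of successful and unsuccessful grants. No rule was hand-coded;
    # the decision boundary comes from the data.
    wins = [s for s, ok in examples if ok]
    losses = [s for s, ok in examples if not ok]
    return (sum(wins) / len(wins) + sum(losses) / len(losses)) / 2

threshold = learn_threshold(training)   # about 0.55 for this data
predict = lambda score: score >= threshold
print(predict(0.75))  # True
```

Feed it different training data and the threshold moves on its own, which is exactly the flexibility the paragraph above describes.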

3. Generative AI (What Everyone's Talking About)

Generative AI systems are trained to generate new content—text, images, code, etc.—based on patterns in their training data. ChatGPT, Claude, and similar systems fall into this category. They're trained to predict the next word (or token) based on everything that came before, using patterns learned from massive datasets.

Key Takeaway

When people say "AI," they often mean "machine learning" or "generative AI." Rule-based systems are technically AI, but they're not what's disrupting the grants profession. Generative AI is the newest and flashiest of the three, and it's what most grant professionals interact with today.

Generative AI and Large Language Models (LLMs)

A Large Language Model (LLM) is a type of generative AI trained specifically on text. It's called "large" because it contains billions or even trillions of parameters—think of parameters as knobs the AI system can turn to adjust its behavior based on what it's learned.

The most useful metaphor for understanding LLMs is this: They are autocomplete on steroids.

When you're typing a text message and your phone suggests the next word, that's simple autocomplete based on common word patterns. An LLM does the same thing, but at a vastly larger scale: it learned its patterns from billions of documents rather than your recent messages, it considers your entire prompt (and everything it has generated so far) rather than just the last word or two, and it can sustain that prediction process long enough to produce whole paragraphs, proposals, and reports.

How LLMs Generate Text

When you ask an LLM to write something, here's what happens:

  1. The system receives your prompt (your question or instruction)
  2. It breaks down your prompt into tokens (small chunks of text, roughly word-sized)
  3. It processes these tokens through layers of mathematical operations
  4. It generates a probability distribution—essentially, odds for what the next token should be
  5. It "samples" from that distribution to pick the next token (with some randomness built in)
  6. It repeats this process, generating one token at a time, until it decides to stop
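The six steps above can be sketched in miniature. A real model computes the next-token distribution with billions of parameters; the tiny hand-written probability table below stands in for that and is purely illustrative.

```python
import random

# Toy "model": for each previous token, probabilities for the next token.
# A real LLM computes these distributions on the fly; this table is fake.
NEXT = {
    "grant":   {"funding": 0.5, "application": 0.3, "deadline": 0.2},
    "funding": {"for": 0.6, "cycle": 0.4},
    "for":     {"nonprofits": 1.0},
}

def generate(prompt_token: str, max_tokens: int = 3, seed: int = 0) -> list:
    random.seed(seed)                 # fixed seed just to make runs repeatable
    tokens = [prompt_token]           # steps 1-2: the prompt, as tokens
    for _ in range(max_tokens):
        dist = NEXT.get(tokens[-1])
        if dist is None:              # step 6: nothing to continue with, stop
            break
        words, probs = zip(*dist.items())
        # steps 4-5: sample the next token from the probability distribution
        tokens.append(random.choices(words, weights=probs)[0])
    return tokens

print(generate("grant"))
```

Notice that nothing in the loop checks whether the output is true; it only checks what is statistically likely to come next.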

This is why LLMs sometimes seem to "know" things—they're recognizing patterns they learned during training. But it's also why they hallucinate (make things up): they're predicting the next token based on statistical patterns, not based on whether something is true.

Real Example: Vocabulary Patterns

During training, an LLM learned that the word "grants" often appears near words like "funding," "nonprofit," "foundation," and "application." So when you ask it to write about grants, it tends to use vocabulary it learned was associated with that topic. It's pattern matching, not real understanding.
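That co-occurrence idea can be demonstrated with a few lines of counting. The toy corpus below is invented for illustration; real training data runs to billions of documents.

```python
from collections import Counter

# Made-up mini-corpus; real training corpora are billions of documents.
corpus = ("foundation grants funding nonprofit grants funding "
          "application grants foundation").split()

def neighbors(words, target, window=2):
    # Count words appearing within `window` positions of the target word.
    counts = Counter()
    for i, w in enumerate(words):
        if w == target:
            lo, hi = max(0, i - window), min(len(words), i + window + 1)
            counts.update(words[lo:i] + words[i + 1:hi])
    return counts

print(neighbors(corpus, "grants").most_common(3))  # "funding" tops the list
```

Counting neighbors is all it takes for "funding" to become the word most strongly associated with "grants" in this corpus, which is the same statistical association, at a vastly larger scale, that shapes an LLM's vocabulary choices.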

Key Vocabulary You Need to Know

Token: A small chunk of text processed by the AI. Roughly 1 token = 0.75 words. An LLM processes text one token at a time.
Training Data: The massive dataset an AI system learned from. GPT-4 was trained on hundreds of billions of tokens from books, websites, and other text.
Parameters: The "knobs and dials" an AI system learned to adjust during training. More parameters usually means more complex pattern recognition, but also more computational requirements.
Context Window: The amount of text an LLM can "see" at once when generating a response. As of this writing, Claude offers a 200K-token context window and GPT-4 Turbo offers 128K tokens; these figures change as models are updated. This matters for grant work because it determines how much of your existing proposal text the AI can review at once.
Temperature: A setting that controls randomness in AI responses. Higher temperature = more creative but potentially less reliable. Lower temperature = more predictable but sometimes more boring. For grant work, you usually want lower temperature (more consistency).
Inference: The process of running an AI system to generate output. When you ask ChatGPT a question, that's inference. During training, the system learns patterns; during inference, it uses those patterns to generate new content.
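Temperature's effect is easy to see in code. The sketch below applies temperature scaling to three made-up candidate-token scores (called logits) and converts them to probabilities; the numbers are illustrative only.

```python
import math

def apply_temperature(logits, temperature):
    # Divide each score by the temperature, then softmax into probabilities.
    # Low temperature sharpens the distribution (predictable output);
    # high temperature flattens it (more varied, less reliable output).
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numeric stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]            # invented scores for three candidate tokens
low = apply_temperature(logits, 0.2)
high = apply_temperature(logits, 2.0)
print(f"T=0.2 top prob: {low[0]:.2f}")    # close to 1.0: nearly deterministic
print(f"T=2.0 top prob: {high[0]:.2f}")   # much flatter: more randomness
```

At low temperature the top candidate is chosen almost every time, which is why lower settings suit the consistency grant writing demands.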

Why AI Feels Intelligent (But Isn't, Exactly)

When you use ChatGPT or Claude, the output can feel remarkably intelligent. The system writes in complete sentences, understands context, follows complex instructions, and produces work that would take you hours to create manually.

This creates a psychological trap: we anthropomorphize. We assume the system "understands" what we're asking, that it "knows" about our community, that it's "thinking" about our problem.

The reality is more humble: LLMs are sophisticated pattern-matching systems, not reasoners or understanders. They don't have beliefs, intentions, or knowledge. They have learned statistical patterns about how text relates to other text, and they're very good at generating text that matches those patterns.

This distinction matters enormously for grant professionals. It means:

  1. The AI can produce confident-sounding text that is factually wrong, so every statistic, citation, and funder detail must be verified before it goes into a proposal
  2. The AI doesn't know your organization, your community, or your programs; you have to supply that context
  3. The strategy, judgment, and relationships that win grants remain your job; the AI only helps with the words

Apply This: Language Matters

Today, make a note: when you use AI for grant work, avoid thinking of it as a researcher, strategist, or expert consultant. Think of it as a highly skilled writing assistant and brainstorming tool. This mental model will keep you from over-trusting outputs and will help you position yourself as the human expert who makes the final decisions.

The AI Boom: Why Now?

If AI has existed for decades, why is everyone talking about it now? Three reasons converged:

  1. Scale: Recent models are trained on incomparably larger datasets, making their pattern recognition far more sophisticated
  2. Accessibility: Tools like ChatGPT made powerful AI available to regular people at low cost (or free), not just to companies with massive AI budgets
  3. Capability: Modern LLMs can do things previous AI couldn't—write coherent long-form text, answer nuanced questions, engage in multi-step reasoning that feels almost human

Wrapping Up: The Foundation

Here's what you should take away from this lesson:

  1. AI is pattern recognition at scale: no magic, no consciousness, no sentience
  2. "AI" covers rule-based systems, machine learning, and generative AI, and generative AI is what's reshaping grant work
  3. LLMs generate text one token at a time by predicting what's statistically likely, which is why they can both impress and hallucinate
  4. Treat AI as a skilled writing assistant and brainstorming tool, not an expert; you remain the human who verifies facts and makes the final decisions

Continue Your CAGP Journey

You've learned what AI actually is. Next, we'll dive deep into how Large Language Models work under the hood—so you understand not just what they are, but why they sometimes produce astonishing results and why they sometimes hallucinate.
