Data Privacy — What You Should Never Put Into AI Tools

⏱️ 25 minutes 📊 Lesson 4.4 of 7

Understanding AI Privacy Risks

When you type something into ChatGPT, Claude, or Google Gemini, where does it go? The answer varies by tool and plan, but understanding these possibilities is crucial for protecting your organization's sensitive information.

For consumer-grade AI (free ChatGPT, free Claude), text you submit may be:

Used to train or improve future versions of the model
Reviewed by human contractors or employees for quality and safety purposes
Retained on the company's servers for an extended or indefinite period

For enterprise/paid plans, the situation is different—often with commitments not to use your data for training. But the default assumption with free consumer AI tools is that your data is not private in the way your nonprofit's database is private.

This creates specific risks for nonprofit grant professionals: exposing client and community member information, leaking organizational financials, betraying funder confidences, disclosing proprietary program details, compromising staff and board privacy, and mishandling government or regulatory communications. The sections below address each in turn.

Key Concept:

Anything you type into consumer AI tools should be considered non-confidential. Never paste anything you wouldn't be comfortable with a third party seeing indefinitely. This protects your organization, your community, your funders, and the people you serve.

What You Should NEVER Put Into AI Tools

Client and Community Member Personal Information

This is the most critical rule. Never paste:

Client names, contact information, or case numbers
Case notes, intake records, or detailed case examples
Demographic combinations that could identify an individual (age, ethnicity, family composition, referral source)
Client stories or testimonials tied to real people

Even if you anonymize names, detailed case examples or client stories can be re-identified. If you describe "a 23-year-old Latina woman with a 5-year-old son, experiencing homelessness, referred by the city," someone familiar with your service area might identify that person. So even anonymized details about real clients should not go into consumer AI tools.

What's safer: Describe scenarios in general terms ("a young mother experiencing homelessness") without specific identifiers. Or create composite scenarios that don't reflect real individuals.
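If your team prepares a lot of source text before drafting, a simple pre-paste scrubber can reinforce this habit. The sketch below is a hypothetical Python example; the patterns and the redact_for_ai helper are illustrative assumptions, not a vetted tool. Pattern matching catches obvious identifiers like emails and phone numbers, but it cannot catch the contextual details that enable re-identification, so it supplements human review rather than replacing it.

```python
import re

# Illustrative patterns only: regexes catch obvious identifiers,
# not the contextual details that can re-identify a person.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_for_ai(text: str) -> str:
    """Replace obvious identifiers with placeholders before pasting
    text into a consumer AI tool. A first pass, not a guarantee."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

note = "Contact Maria at maria@example.org or 555-123-4567 about intake."
print(redact_for_ai(note))  # Contact Maria at [EMAIL] or [PHONE] about intake.
```

Notice that the client's first name passes through untouched, which is exactly why generalized or composite descriptions remain necessary even after automated scrubbing.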

Organizational Financial Information

Never paste:

Detailed internal budgets or line-item expenses
Revenue figures broken down by funder or source
Any financial information that isn't already public in your annual report or Form 990

Even if phrased generally ("We have $500K in revenue from 3 funders"), you're giving competitors and potential funders information you might not want widely known. Once submitted, you have no control over where that information ends up.

What's safer: Describe budgets in general terms ("We operate with a mid-sized budget" or "We diversify funding across multiple sources") without specific numbers. Reference publicly available information (annual reports, Form 990s) rather than proprietary details.

Funder Relationship Details

Never paste:

Confidential feedback a funder gave you on a proposal
Notes or emails from conversations with program officers
Your strategy for cultivating or approaching specific funders

A funder could discover you shared their confidential feedback, which damages trust. And disclosing how you're targeting specific funders could undermine your competitive position.

What's safer: Ask "What should our organizational priorities be?" or "How should we develop a fundraising strategy?" without referencing specific funder conversations. Work at the strategic level without disclosing confidential funder interactions.

Proprietary Program Information

Never paste:

Details of a unique program model you consider proprietary
Unpublished evaluation results or outcome data
Curricula, tools, or methods you haven't publicly released

If you've developed a unique program model and consider it proprietary, don't feed it to AI training data. Competitors could access it. Evaluation results that aren't yet published might be scooped.

What's safer: Use AI tools at the strategy level ("How can we improve program effectiveness in youth mentoring?") without disclosing your specific model details. If your program is published or public, it's already disclosed.

Staff and Board Member Information

Never paste:

Staff or board member names paired with personal details
Salary, performance, or other HR information
Personal contact information for staff, board members, or volunteers

Staff and board members deserve privacy. Information about them shouldn't end up in AI training data.

What's safer: Refer to staff by role only ("Our Education Director," not "Sarah Johnson, our Education Director"). Say "Our team includes..." without specific names or personal details.

Government or Regulatory Information

Never paste:

Correspondence with government agencies or regulators
Details of open compliance reviews, audits, or monitoring visits
Contract or grant terms marked confidential

Government communications often contain confidential details or compliance concerns not meant for public distribution. Keep these private.

Critical Warning:

If you work with vulnerable populations (children, elderly, people with disabilities, incarcerated individuals, people in protective custody), be especially cautious. Never paste details that could identify vulnerable individuals or expose them to risk. The privacy bar is higher.

Understanding Different Tool Policies

Consumer Plans vs. Enterprise Plans

Here's a basic breakdown of how different AI tools handle your data:

ChatGPT (OpenAI)
Free and Plus conversations may be used to improve OpenAI's models unless you opt out in the data controls settings; business and enterprise plans commit not to train on your data by default. Verify the current policy, as terms change.

Claude (Anthropic)
Consumer plans include data-use settings that control whether your conversations can be used for model training; commercial and enterprise plans commit not to train on your data by default. Check Anthropic's current terms.

Google Gemini
Free consumer conversations may be reviewed by humans and used to improve Google's services depending on your activity settings; Workspace and enterprise versions carry stronger contractual protections. Review the policy for your specific plan.

Key takeaway: If you use free consumer plans, assume your data is not private. If you use paid plans, policies are better, but they're still not locked down like a nonprofit's internal database. Enterprise plans offer the strongest privacy commitments.

Practical Rule:

If your nonprofit has a budget for AI tools, investing in at least ChatGPT Plus or Claude Pro, and preferably a business or enterprise tier if you handle sensitive material regularly, is worthwhile for privacy protection. The cost is low compared to the risk of exposing sensitive information.

Specific Privacy Questions for Each Tool

Before you use any AI tool for grant-related work, ask these questions:

10-Question Privacy Checklist
  1. Does this company use my input for training or improving the model?
  2. Can humans (contractors, employees) read my input?
  3. How long is my data retained?
  4. Can I delete my conversations?
  5. Does the company have a privacy policy specific to nonprofits or education?
  6. What security measures protect my data in transit and at rest?
  7. Does the company comply with HIPAA (if health-related) or FERPA (if education-related)?
  8. Can government or law enforcement access my data?
  9. Is there an enterprise plan with stronger privacy commitments?
  10. Does this tool work within my organization's IT security requirements?

What You CAN Put Into AI Tools

To balance caution with practicality, here's what IS generally safe for consumer AI tools, followed by a recap of what is never safe:

General community context: "Our neighborhood is economically distressed, with a 35% poverty rate (citing public Census data)" — using publicly available information is fine.
Program-level descriptions: "We provide youth mentoring for ages 12-17" — general program descriptions are fine if they don't reveal proprietary methods.
Strategy questions: "How should a mentoring nonprofit structure evaluation?" — general strategy questions are fine.
Public information: Information from your organization's website, annual report, or published materials is already public, so safer to use.
Hypothetical scenarios: "Imagine a nonprofit that..." — hypothetical questions are safer than questions about your actual organization.
Published research and best practices: "What does research say about mentor training?" — asking about published knowledge is fine.
By contrast, keep these out of consumer AI tools entirely:

Specific client names or stories: Never.
Your budget or financial data: Never (except publicly available Form 990 info).
Private funder communications: Never.
Staff or board member personal information: Never.
Proprietary program details: Never, unless already published.
Government compliance information: Never, unless public record.

Creating an Organizational AI Privacy Policy

Strong organizations develop policies governing AI use. Here's what to include:

  1. Approved tools: List which AI tools your organization approves and for what purposes.
  2. Data classification: Define what data is "public" (safe for AI tools), "internal" (handle carefully), and "confidential" (never use with consumer AI).
  3. Review process: Require a designated reviewer (grant manager, Executive Director) to approve sensitive material before anyone pastes it into an AI tool.
  4. Training: Ensure all staff know what data is safe and what isn't.
  5. Breach procedures: Establish what to do if someone accidentally shares confidential data with an AI tool.
  6. Tool requirements: Specify that your nonprofit uses paid plans with stronger privacy, or enterprise plans if handling sensitive data.

Even a simple one-page policy is better than no policy. It signals that your organization takes data privacy seriously and sets clear expectations for staff.
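To make the data classification tiers easy to consult, some teams pair the written policy with a small shared reference script. The example below is a hypothetical sketch; the category names and handling rules are assumptions your organization would adapt, not a standard.

```python
# Hypothetical mapping from data classification to AI handling rule.
# Categories and examples are illustrative; adapt them to your policy.
DATA_CLASSES = {
    "public": "OK for consumer AI tools (website copy, annual report, Form 990 figures)",
    "internal": "Paid or enterprise AI only, after review (draft narratives, program plans)",
    "confidential": "Never in any AI tool (client PII, funder feedback, budgets, HR records)",
}

def ai_guidance(data_class: str) -> str:
    """Return the handling rule for a classification level,
    defaulting to the strictest treatment when in doubt."""
    return DATA_CLASSES.get(
        data_class.strip().lower(),
        "Unknown classification: treat as confidential until reviewed.",
    )

print(ai_guidance("Confidential"))
# Never in any AI tool (client PII, funder feedback, budgets, HR records)
```

Defaulting unknown labels to the strictest rule mirrors the lesson's core principle: when in doubt, treat information as confidential.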

Core Takeaway:

AI tools can make you more efficient, but not at the cost of your organization's data security or your community's privacy. Before you paste anything into an AI tool, ask: "Would I be comfortable if this information were public?" If the answer is no, don't paste it.

Looking Ahead

You now understand three major AI risks: hallucination, bias, and privacy. The next lessons focus on what funders think about AI, how to disclose your AI use, and how to develop your personal AI ethics commitment. Privacy protection is foundational to responsible AI use.


Next Lesson: Understanding Funder Perspectives

Now that you understand the risks, let's explore what funders think about AI and how they're approaching it.

Continue to Lesson 4.5