Understanding AI Privacy Risks
When you type something into ChatGPT, Claude, or Google Gemini, where does it go? The answer varies by tool and plan, but understanding these possibilities is crucial for protecting your organization's sensitive information.
For consumer-grade AI (free ChatGPT, free Claude), text you submit may be:
- Stored by the company for training purposes
- Used to improve the AI model
- Potentially reviewed by human contractors
- Subject to that company's terms of service
For enterprise and paid plans, the situation is different: providers often commit not to use your data for training. But the default assumption with free consumer AI tools should be that your data is not private in the way your nonprofit's database is private.
This creates specific risks for nonprofit grant professionals.
Key Concept:
Anything you type into consumer AI tools should be considered non-confidential. Never paste anything you wouldn't be comfortable with a third party seeing indefinitely. This protects your organization, your community, your funders, and the people you serve.
What You Should NEVER Put Into AI Tools
Client and Community Member Personal Information
This is the most critical rule. Never paste:
- Names, addresses, or identifying information of clients or community members
- Health information, mental health diagnoses, or behavioral information
- Immigration status, legal history, or other sensitive personal information
- Financial information (income, benefits status, debt)
- Family or household composition details
Even if you anonymize names, detailed case examples or client stories can be re-identified. If you describe "a 23-year-old Latina woman with a 5-year-old son, experiencing homelessness, referred by the city," someone familiar with your service area might identify that person. So even anonymized details about real clients should not go into consumer AI tools.
What's safer: Describe scenarios in general terms ("a young mother experiencing homelessness") without specific identifiers. Or create composite scenarios that don't reflect real individuals.
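The de-identification habit described above can be partially automated. Below is a minimal sketch in Python, assuming a simple regex pass (the patterns and the `flag_identifiers` helper are illustrative, not a production-grade PII scrubber), that flags obvious identifiers before a draft leaves your hands:

```python
import re

# Illustrative patterns for obvious identifiers. A regex pass is a
# backstop only; it cannot catch contextual re-identification risk.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "specific_age": re.compile(r"\b\d{1,3}-year-old\b"),
    "street_address": re.compile(
        r"\b\d+\s+[A-Z][a-z]+\s+(Street|St|Avenue|Ave|Road|Rd)\b"
    ),
}

def flag_identifiers(text: str) -> list[str]:
    """Return the categories of likely identifiers found in `text`."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

draft = "A 23-year-old mother at 412 Oak Street, reachable at 555-867-5309."
print(flag_identifiers(draft))  # ['phone', 'specific_age', 'street_address']
print(flag_identifiers("a young mother experiencing homelessness"))  # []
```

A check like this is a safety net, not a substitute for human review: the real re-identification risk comes from combinations of contextual details that no pattern list will catch.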
Organizational Financial Information
Never paste:
- Your nonprofit's detailed budget or financial statements
- Funding amounts from specific funders
- Revenue sources or financial vulnerabilities
- Staff salary ranges or compensation information
- Detailed program costs or cost-per-outcome metrics you're not public about
Even if phrased generally ("We have $500K in revenue from 3 funders"), you're giving competitors and potential funders information you might not want widely known. Someone could scrape this data and use it competitively.
What's safer: Describe budgets in general terms ("We operate with a mid-sized budget" or "We diversify funding across multiple sources") without specific numbers. Reference publicly available information (annual reports, Form 990s) rather than proprietary details.
Funder Relationship Details
Never paste:
- Private communications with program officers
- Feedback from funders, especially negative feedback or concerns they raised
- Details about why a grant was declined or specific funder concerns
- Funder-specific strategy or targeting information
A funder could discover you shared their confidential feedback. This damages trust. Moreover, sharing how you're targeting specific funders could affect your strategy competitively.
What's safer: Ask "What should our organizational priorities be?" or "How should we develop a fundraising strategy?" without referencing specific funder conversations. Work at the strategic level without disclosing confidential funder interactions.
Proprietary Program Information
Never paste:
- Your proprietary program model, curriculum, or approach if it's a competitive advantage
- Detailed implementation steps or internal processes
- Unpublished research, evaluation data, or outcome findings
- Trade secrets or intellectual property your organization claims
If you've developed a unique program model and consider it proprietary, don't feed it into tools that may use it as training data. Competitors could gain access to it, and evaluation results that aren't yet published could be scooped.
What's safer: Use AI tools at the strategy level ("How can we improve program effectiveness in youth mentoring?") without disclosing your specific model details. If your program is published or public, it's already disclosed.
Staff and Board Member Information
Never paste:
- Names, positions, or personal information about staff (especially vulnerable staff)
- Personal emails or contact information
- Information about staff personal circumstances or hardships
- Board member personal information
Staff and board members deserve privacy. Information about them shouldn't be in AI training data. If you describe staff, use roles without names ("Our Education Director," not "Sarah Johnson, our Education Director").
What's safer: Refer to staff by role only. Say "Our team includes..." without specific names or personal details.
Government or Regulatory Information
Never paste:
- Specific details from government audits or compliance reviews
- Regulatory feedback or concerns raised by licensing bodies
- Information about government funding applications with restrictions
Government communications often contain confidential details or compliance concerns not meant for public distribution. Keep these private.
Critical Warning:
If you work with vulnerable populations (children, elderly, people with disabilities, incarcerated individuals, people in protective custody), be especially cautious. Never paste details that could identify vulnerable individuals or expose them to risk. The privacy bar is higher.
Understanding Different Tool Policies
Consumer Plans vs. Enterprise Plans
Here's a basic breakdown of how different AI tools handle your data. These policies change frequently, so verify each provider's current terms before relying on this summary:
ChatGPT (OpenAI)
- Free Plan: Your input may be used for training and improvement. OpenAI may review conversations.
- ChatGPT Plus/Pro: You can opt out of having your conversations used for training via your data controls (OpenAI may still review for safety). Better for sensitive content, but verify the setting.
- Enterprise Plan: Explicit commitment not to use data for training. Most secure option.
Claude (Anthropic)
- Free Plan: Your input is not used for training, but Anthropic may review for safety/improvement.
- Claude Pro: Input not used for training. Anthropic has stricter privacy policies than some competitors.
- Enterprise Plan: No data retention after conversation. Most secure option.
Google Gemini
- Free Plan: Input may be used for training and improvement.
- Gemini Advanced: Input not used for training, but retained for safety review.
- Enterprise Plan: No data retention. Most secure option.
Key takeaway: If you use free consumer plans, assume your data is not private. If you use paid plans, policies are better, but they're still not locked down like a nonprofit's internal database. Enterprise plans offer the strongest privacy commitments.
Practical Rule:
If your nonprofit has a budget for AI tools, investing in at least ChatGPT Plus or Claude Pro (or an enterprise plan if you have substantial AI use) is worthwhile for privacy protection. The cost is low compared to the risk of exposing sensitive information.
Specific Privacy Questions for Each Tool
Before you use any AI tool for grant-related work, ask these questions:
10-Question Privacy Checklist
- Does this company use my input for training or improving the model?
- Can humans (contractors, employees) read my input?
- How long is my data retained?
- Can I delete my conversations?
- Does the company have a privacy policy specific to nonprofits or education?
- What security measures protect my data in transit and at rest?
- Does the company comply with HIPAA (if health-related) or FERPA (if education-related)?
- Can government or law enforcement access my data?
- Is there an enterprise plan with stronger privacy commitments?
- Does this tool work within my organization's IT security requirements?
What You CAN Put Into AI Tools
To balance caution with practicality, here's what IS generally safe for consumer AI tools:
General community context: "Our neighborhood is economically distressed, with a 35% poverty rate (per public Census data)." Using publicly available information like this is fine.
Program-level descriptions: "We provide youth mentoring for ages 12-17" — general program descriptions are fine if they don't reveal proprietary methods.
Strategy questions: "How should a mentoring nonprofit structure evaluation?" — general strategy questions are fine.
Public information: Information from your organization's website, annual report, or published materials is already public, so it's safe to use.
Hypothetical scenarios: "Imagine a nonprofit that..." — hypothetical questions are safer than questions about your actual organization.
Published research and best practices: "What does research say about mentor training?" — asking about published knowledge is fine.
By contrast, the following should never go into consumer AI tools, no matter how they're phrased:
Specific client names or stories: Never.
Your budget or financial data: Never (except publicly available Form 990 info).
Private funder communications: Never.
Staff or board member personal information: Never.
Proprietary program details: Never, unless already published.
Government compliance information: Never, unless public record.
Creating an Organizational AI Privacy Policy
Strong organizations develop policies governing AI use. Here's what to include:
- Approved tools: List which AI tools your organization approves and for what purposes.
- Data classification: Define what data is "public" (safe for AI tools), "internal" (handle carefully), and "confidential" (never use with consumer AI).
- Review process: Require a designated person (grant manager, Executive Director) to review content before it is pasted into AI tools.
- Training: Ensure all staff know what data is safe and what isn't.
- Breach procedures: Establish what to do if someone accidentally shares confidential data with an AI tool.
- Tool requirements: Specify that your nonprofit uses paid plans with stronger privacy, or enterprise plans if handling sensitive data.
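The data-classification bullet above can be made concrete with a shared lookup that staff consult before using AI tools. The sketch below is illustrative; the category names, examples, and the `may_use_with_consumer_ai` helper are placeholders to adapt to your own policy:

```python
# Illustrative data-classification map for an AI-use policy.
# Categories and examples are placeholders; adapt to your organization.
CLASSIFICATION = {
    "public": [
        "annual report text",
        "Form 990 figures",
        "published program descriptions",
    ],
    "internal": [
        "draft grant narratives",
        "general budget ranges",
    ],
    "confidential": [
        "client records",
        "funder correspondence",
        "staff personal information",
    ],
}

# Policy rule in this sketch: only "public" data may go into consumer AI tools.
ALLOWED_IN_CONSUMER_AI = {"public"}

def may_use_with_consumer_ai(category: str) -> bool:
    """Return True if data in this category is allowed in consumer AI tools."""
    if category not in CLASSIFICATION:
        raise ValueError(f"Unknown category: {category!r}")
    return category in ALLOWED_IN_CONSUMER_AI

print(may_use_with_consumer_ai("public"))        # True
print(may_use_with_consumer_ai("confidential"))  # False
```

Keeping the map in one shared place gives staff a single answer to "can I paste this?" instead of case-by-case guessing.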
Even a simple policy (one page) is better than no policy. It shows staff take data privacy seriously and sets clear expectations.
Core Takeaway:
AI tools can make you more efficient, but not at the cost of your organization's data security or your community's privacy. Before you paste anything into an AI tool, ask: "Would I be comfortable if this information were public?" If the answer is no, don't paste it.
Looking Ahead
You now understand three major AI risks: hallucination, bias, and privacy. The next lessons focus on what funders think about AI, how to disclose your AI use, and how to develop your personal AI ethics commitment. Privacy protection is foundational to responsible AI use.