AI Privacy Checklist: 10 Settings to Change Right Now on Tools You Already Use

You use ChatGPT to brainstorm. Gemini to draft emails. Copilot to clean up spreadsheets. Maybe Claude for code reviews. These tools are part of the daily workflow now—and every one of them is collecting your data by default.

The problem? The opt-out toggles are buried. Different menus, different labels, different logic. I spent a week auditing five major AI platforms and mapping out exactly where the switches are. Here's the checklist.

Quick Reference: What's On by Default

Before diving in, here's the landscape in 2026:

| Tool | Training on by default? | Opt-out method | Data retention after opt-out |
| --- | --- | --- | --- |
| ChatGPT (Free/Plus/Pro) | Yes | Toggle in Settings | 30 days |
| Google Gemini | Yes | Activity toggle | Up to 72 hours |
| Microsoft Copilot | Yes | Privacy menu | 30 days |
| Claude (Free/Pro/Max) | Yes | Privacy toggle | 30 days |
| Meta AI | Yes | No real opt-out (US) | N/A |

That table should tell you something: every major platform trains on your conversations unless you manually stop it. Let's fix that.
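If you want to track your progress as you work through the checklist, the table above is easy to encode as data. This is just an illustrative personal-audit sketch mirroring the article's figures (retention rounded to days; Gemini's 72 hours becomes 3); none of these names correspond to any official API from these services.

```python
# The defaults table as a checklist data structure. Figures mirror the
# article; this is a personal tracking sketch, not an official API.
AUDIT = {
    "ChatGPT (Free/Plus/Pro)": {"trains_by_default": True, "opt_out": "Toggle in Settings", "retention_days": 30},
    "Google Gemini": {"trains_by_default": True, "opt_out": "Activity toggle", "retention_days": 3},  # up to 72 hours
    "Microsoft Copilot": {"trains_by_default": True, "opt_out": "Privacy menu", "retention_days": 30},
    "Claude (Free/Pro/Max)": {"trains_by_default": True, "opt_out": "Privacy toggle", "retention_days": 30},
    "Meta AI": {"trains_by_default": True, "opt_out": None, "retention_days": None},  # no US opt-out
}

def still_to_fix(done: set) -> list:
    """Tools that train by default, have an opt-out, and aren't handled yet."""
    return [
        tool for tool, info in AUDIT.items()
        if info["trains_by_default"] and info["opt_out"] and tool not in done
    ]

if __name__ == "__main__":
    # Example: you've already flipped the ChatGPT toggle.
    for tool in still_to_fix(done={"ChatGPT (Free/Plus/Pro)"}):
        print(f"TODO: {tool} -> {AUDIT[tool]['opt_out']}")
```

Note that Meta AI drops out of the to-do list automatically: with no real opt-out, there's no toggle to flip, only the mitigations covered in Setting #10.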

Settings #1–2: ChatGPT

#1: Turn Off "Improve the Model for Everyone"

Go to Settings → Data Controls → Improve the model for everyone and switch it off. This is the big one. It stops OpenAI from feeding your conversations into future training data.

Your chat history stays intact—you don't lose anything. The toggle only controls whether your inputs train the model.

Heads up: OpenAI added a separate voice mode training toggle in early 2026. If you use ChatGPT's voice features, check that one too—it's in the same Data Controls menu but listed independently.

#2: Use Temporary Chat for Sensitive Stuff

Click the Temporary Chat icon in the top-right corner of any new chat. These conversations don't appear in your history, don't create memories, and are never used for training. OpenAI deletes them from its servers within 30 days.

I use this anytime I'm pasting financial data or drafting something confidential. Think of it as incognito mode for AI.

Settings #3–4: Google Gemini

#3: Turn Off "Keep Activity" (Formerly Gemini Apps Activity)

Google renamed this setting in late 2025, which confused a lot of people. Here's the current path: open Gemini, click the clock-with-arrow icon on the left menu, and hit Turn Off on the activity page.

One catch—turning this off also disables your conversation history. You won't be able to scroll back through old chats. That's the trade-off Google forces on you.

Worth knowing: Even with activity turned off, Google retains your data for up to 72 hours for "stability and abuse detection." Short window, but not zero.

#4: Disable Gmail Smart Features (Two Locations)

This one's sneaky. Gmail's AI-powered features—smart compose, email categorization, even spell-check—are tied to a data-sharing toggle. You need to disable it in two places:

  • Settings → General → Smart features and personalization → turn off
  • Settings → General → Smart features and personalization in other Google products → turn off

On mobile, look under Settings → Data privacy.

The downside? You lose auto-categorization of emails into Primary, Social, and Promotions tabs. Decide if that convenience is worth the data access.

Settings #5–7: Microsoft Copilot

#5: Disable Model Training on Text

In Copilot, go to your profile icon → Settings → Privacy → Model training on text and toggle it off. On mobile: Menu → Profile → Account → Privacy → Model training on text.

Microsoft's opt-out is retroactive within 30 days—meaning they'll remove your recent data from training pipelines. That's better than most services offer.

#6: Disable Model Training on Voice

Same menu, separate toggle. If you use Copilot's voice features, this one matters. It was reportedly enabled by default on some Windows 11 machines (especially in the Gaming Copilot overlay), so check even if you've never consciously turned it on.

#7: Check the "Microsoft Usage Data" Toggle

This is new as of February 2026. Copilot now pulls data from Bing, Edge, and MSN to power its memory and personalization features. The toggle is buried inside Copilot → Settings → Memory → Microsoft usage data. Turn it off, then click "Delete all memory" to clear what's already been collected.

Quick tip: If you use Copilot inside Word, Excel, or other Office apps with a work account (Microsoft 365), your data isn't used for training by default. But verify with your IT department—custom configurations vary.

Settings #8–9: Claude (Anthropic)

#8: Disable "Help Improve Claude"

Navigate to Settings → Privacy and turn off the training toggle. With it on, Anthropic can retain your chats for up to five years. With it off, retention drops to around 30 days.

Anthropic started using consumer conversation data for training in August 2025. If you signed up before that and never checked your settings, this toggle might still be on.

#9: Use Incognito Mode for One-Off Queries

Claude's Incognito chats are never used for training—even if you leave the main privacy toggle enabled. Great for quick, sensitive queries where you don't need the AI to remember context later.

Setting #10: Meta AI—The Tough One

Here's the uncomfortable truth: if you're in the US, there's no clean opt-out for Meta AI training. No toggle. No simple form. Meta uses your public posts, AI chat interactions, and (as of December 2025) even your AI conversations for ad targeting across Facebook, Instagram, Messenger, and WhatsApp.

EU and UK users can submit an objection request through Instagram → Settings → Privacy Centre → AI at Meta → Submit an objection. US users don't have that option.

Your best moves:

  • Set all social profiles to private
  • Avoid interacting with Meta AI in chats entirely
  • On WhatsApp, enable Advanced Chat Privacy per conversation to block AI from reading group chats
  • Check Facebook's Camera Roll Cloud Processing setting (Settings → Camera roll sharing suggestions) and make sure it's off

The Bigger Picture

Changing these settings takes about 15 minutes total. But here's what most guides won't tell you: opting out doesn't erase data already used in training. It only stops future collection. And even after opting out, most platforms keep your data for 30 days (or longer) for safety monitoring and legal compliance.

The real habit shift? Treat AI tools the way you'd treat a group chat with strangers. Don't paste passwords, proprietary code, or sensitive medical details—regardless of your privacy settings. Toggles can change overnight. Your data hygiene shouldn't depend on a company keeping its word.