Your Privacy is a Compromise.
Every platform you use makes you a deal. Most people never read the terms. This guide does it for you.
TL;DR: Every platform in this guide has already made privacy decisions on your behalf. Most of those decisions benefit them, not you. This guide names the decision, the honest reward for accepting it, and the honest cost of changing it. 20 platforms. Verified March 2026. Download at the bottom of this piece.
The platforms covered span six categories.
Social: LinkedIn, Facebook, Instagram, TikTok, X.
Creator: Substack, Medium.
AI Tools: ChatGPT, ChatGPT Edu (Institutional), Claude, Gemini, Microsoft Copilot.
Workspace: Google Workspace, Slack, Zoom, Notion.
Research and Academic: ResearchGate, Academia.edu.
Devices: Meta Ray-Ban Smart Glasses, Apple Ecosystem, Voice Assistants (Alexa and Google Home).
The default setting is never chosen with your interests in mind.
It is chosen with the platform’s interests in mind.
That is not a conspiracy. It is a business model. And it has been operating on you, across every device you own, in every app you have open, on the platform you are reading this on, without you reading the terms.
This piece is about what that actually means. Not in the abstract. In the specific. And at the end of it there is a guide, free to download, that covers 20 platforms, every setting worth changing, and the honest cost of every decision.
In September 2024, two Harvard students built a tool called I-XRAY. They attached it to a pair of Meta Ray-Ban smart glasses and walked around a university campus. The tool combined the glasses’ camera with publicly available facial recognition software. In real time, it identified strangers. Not just their faces. Their home addresses, their phone numbers, and the names of their family members.
The recording indicator light on the glasses is small. Most bystanders never notice it. There is no consent mechanism. The person being identified has no idea it is happening.
In February 2026, a leaked internal Meta memo confirmed the company is building exactly this capability natively into the next version of the glasses. They are calling it “Name Tag.” The memo noted the launch should be timed for when civil society groups are “focused on other concerns.”
That is one end of the spectrum.
Here is the other end.
Since November 2025, LinkedIn has used your posts, your career history, and your profile to train its AI systems and feed Microsoft’s advertising machine. By default. The setting exists. Most users have never seen it. It lives at Settings and Privacy, then Data Privacy, then Data for Generative AI Improvement.
Turning the toggle off looks like the end of it. But the data processing continues unless you also submit a separate Data Processing Objection form. Two steps. One visible. One not.
Both of those things, the glasses and the toggle, are privacy stories. But they are not the same kind. The glasses are a product decision made without you. The toggle is a transaction you agreed to without reading it.
Most privacy coverage treats these as equivalent failures. They are not. One requires legislation. The other requires a setting change and thirty seconds of your time.
This guide is about the second kind.
The problem with privacy advice is that it usually does one of two things.
It tells you to turn everything off, without telling you what you lose. Or it tells you the platform is terrible, without telling you what to actually do about it.
Neither is useful. Because privacy is not binary.
Turning off LinkedIn’s AI training means your writing stays yours. It also means the AI features that have learned your career voice go generic. That is a real trade. You should know about it before you make it.
Turning off Facebook’s Off-Facebook Activity tracking is one of the highest-impact privacy changes available. It also means every service you accessed via Facebook login now requires a separate account. That is a real cost. You should know about it.
Turning off Gemini’s Smart Features in Gmail stops Google’s AI from reading your email. But there are two separate toggles. Most people find one and assume they are done. They are not.
The TikTok question is not really about settings at all. The 2026 privacy policy explicitly added precise GPS location collection. Biometric data is collected. ByteDance is subject to Chinese national security law, which requires cooperation with state intelligence services on request. The settings you can change are real and worth changing. But they do not change what TikTok is.
These are not equivalent risks. A tool that helps you understand the difference between them is more useful than one that treats every platform the same.
That is what this guide attempts to do.
For each of the 20 platforms it covers, it names the default trap: what the platform decided for you without asking. It names the trade: the honest reward for accepting that default, and the honest risk of doing so. And for every setting worth changing, it gives you the verified path, the genuine benefit of leaving it on, and the genuine cost of turning it off.
Every setting path was verified in March 2026.
A few things worth knowing before you read it.
On Facebook, the face recognition setting that appears in many privacy guides has been removed. Meta shut down the feature in November 2021 and deleted over a billion facial recognition templates. Any guide still listing it is out of date.
On Instagram, the end-to-end encrypted DM feature some users opted into is being permanently removed on May 8, 2026. Download your encrypted message history before that date. After May 8, Meta can read all Instagram direct messages.
On X, the Grok AI training opt-out exists only on desktop. It is not available in the mobile app. Millions of users who have only ever used X on their phones have never had access to it.
On Apple, Advanced Data Protection, the setting that end-to-end encrypts your iCloud backups, Photos, and Notes, was removed for UK users in February 2025, following a demand from the UK government under the Investigatory Powers Act. New UK users cannot enable it. UK iCloud data remains accessible to Apple and to UK authorities with a legal warrant. Any guide recommending this setting without that caveat is not written for a UK audience.
On Claude, the AI assistant you may be using to draft strategy, summarise documents, or work through client problems, the September 2025 terms update introduced an opt-in training toggle for Free, Pro, and Max accounts. Opting in extends data retention from 30 days to 5 years, roughly a 60x increase. If you accepted the updated terms without reading them, check Settings, then Privacy, then Help improve Claude.
The goal of this guide is not maximum privacy.
Maximum privacy means leaving every platform, disabling every feature, and accepting every cost. For most people that is not realistic or desirable. LinkedIn reach matters. Gmail Smart Compose saves time. ChatGPT memory makes the tool more useful. These are real benefits and the guide treats them as such.
The goal is informed choice. To know what the deal is before you accept it. To understand what you gain by leaving a setting on, and what you lose by turning it off. To make the decision consciously rather than by default.
Every platform covered in this guide made a choice about where to set the starting point. That choice was not made with your interests in mind. It was made with theirs.
The guide is an attempt to give you the information you need to decide whether their starting point is also yours.
Download the full guide below. It is free.
If you find it useful, the most useful thing you can do is share it with colleagues, with your team, with anyone who uses these platforms and has never thought about what they agreed to.
And if you want more of this: forensic, verified, no flannel. discarded.ai tracks the gap between what AI companies promise and what they actually do. Subscribe free. New pieces when there is something worth saying.
[DOWNLOAD: Your Privacy is a Compromise | discarded.ai | Settings verified March 2026]
Tracking the gap between AI promises and AI reality.