The Deception of the “Voluntary” and the Privatization of Mass Surveillance
Chat Control: GDPR-Europe selling itself as a privacy champion while building its own digital panopticon.
The European Paradox
The same Europe that gifted the world the GDPR — the regulation that forced Google and Meta to rethink surveillance capitalism — is now building the largest mass surveillance infrastructure ever conceived in a democracy. Not with the army. Not with emergency decrees. But with a regulation called “Child Sexual Abuse Regulation” that no one dares to openly criticise, because anyone who opposes it risks being accused of wanting to protect paedophiles.
One leaked email, October 2022. Ylva Johansson, European Commissioner for Home Affairs, writes to Ashton Kutcher, Hollywood actor and founder of Thorn, an organisation that sells content moderation software: “The regulation I propose is a strong European response.” Not a formal thank you. A declared partnership between regulator and the industry that benefits from the regulation.
This is the story of how a universally shared goal — protecting children — became the vehicle to normalise what until yesterday was unthinkable: the permanent, algorithmic, generalised scanning of every private communication in Europe. Not because it works. Not because it’s legal. But because once the principle is accepted, the rest is just a software update.
Chat Control: The Document That Wasn’t Supposed to Leak
The Legal Opinion Nobody Wanted to Hear
March 2023, Council of the European Union. A document classified as “LIMITE” slips through the cracks of confidentiality and lands in the hands of Netzpolitik, a German investigative journalism outlet. The Council Legal Service — not a court, but the Council’s own in-house lawyers, whose job is to tell governments what EU law will and will not bear — issues a devastating opinion on CSAR: the proposal “violates the very essence of the right to private life (Art. 7 EU Charter).”
“Essence” in EU legalese is not rhetoric. It means the non-negotiable core of a fundamental right. The Court of Justice is clear: you can limit privacy for national security, public order, proven emergencies. But you cannot annihilate the very concept of private communication. If every message is scanned before it is encrypted, “private communication” ceases to be a legal category. It becomes “monitored communication”.
How client-side scanning works
Client-side scanning — the technology at the heart of the proposal — works like this: before you hit “send” on WhatsApp or Signal, an algorithm on your smartphone analyses the content, compares it with databases of illegal material, and if it finds a match it reports you to the authorities. Technically, the message is then encrypted end-to-end. But encryption happens after the scan. It’s like putting a letter into a sealed envelope after someone has already read it and photocopied it.
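For readers who want the order of operations spelled out, here is a minimal, purely illustrative sketch of that flow. Everything in it (the function names, the plain hash lookup, the toy “encryption”) is an assumption made for illustration; real deployments would rely on perceptual hashing and proprietary classifiers, and nothing below is taken from any proposal text.

```python
import hashlib

# Illustrative stand-in for the hash database pushed to every device.
# Real systems would use perceptual hashes and ML classifiers, not SHA-256.
KNOWN_ILLEGAL_HASHES: set[str] = set()

def scan_on_device(plaintext: bytes) -> bool:
    """Runs on the sender's phone, before any encryption takes place."""
    return hashlib.sha256(plaintext).hexdigest() in KNOWN_ILLEGAL_HASHES

def e2e_encrypt(plaintext: bytes) -> bytes:
    """Placeholder for the messenger's end-to-end encryption step."""
    return plaintext[::-1]  # obviously not real cryptography

def send_message(plaintext: bytes) -> bytes:
    if scan_on_device(plaintext):   # 1. the content is inspected in the clear
        print("match found: a report leaves the device")
    return e2e_encrypt(plaintext)   # 2. only after the scan is the envelope sealed

ciphertext = send_message(b"a private message")
```

The point of the sketch is the ordering: whatever happens in step 2, the confidentiality promise has already been spent in step 1.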
The Council Legal Service — not activists, not hackers, but the most conservative lawyers in the Union — says this is not compatible with European fundamental rights. The opinion is ignored by the Commission.
The 24 Million Network
A joint European media investigation published in 2023 reconstructs the network. Oak Foundation, a Swiss-based philanthropic organisation, has invested 24 million dollars since 2019 in lobbying on CSAM regulation. Beneficiaries: ECPAT International, WeProtect Global Alliance, Brave Movement, Thorn. The latter, founded by Ashton Kutcher and Demi Moore, develops CSAM-scanning tools such as its “Safer” product (built on hash-matching of the kind pioneered by Microsoft’s PhotoDNA), tools that would be made mandatory — or heavily incentivised — by EU legislation.
It’s a perfect loop: lobby for a law, sell the technology that the law makes necessary. When the security industry writes the security rules, the end product is not protection. It’s profit masquerading as policy.
But the most serious scandal emerges in September 2023. Johansson’s office launches advertising campaigns on Twitter/X targeted at Germany, the Netherlands, Poland — exactly the countries opposing the proposal in Council. The ads use micro-targeting: profiling users by political opinions, religious affiliation, sensitivity to privacy issues. It’s a blatant violation of the EU’s own data protection rules, committed by the very Commission that is supposed to enforce the GDPR.
When Patrick Breyer, then a German Pirate Party MEP and the proposal’s most tenacious opponent, brings the case to Parliament, Johansson is summoned for a hearing. Her response to questions on Thorn, micro-targeting and conflicts of interest? “Think of the children.”
It has become her mantra. Every technical criticism, every legal objection, every empirical dataset showing the inefficacy of automated scanning: “Think of the children.” As if invoking child protection made any proposal immune to scrutiny, regardless of its effectiveness, legality, or consequences.
The Numbers No One Wants to See
Germany, 2024. Authorities receive 99,375 automatic reports of alleged child sexual abuse material from platforms that already scan voluntarily. Investigation outcome: about 50% of the reported content is perfectly legal. Parents sending pictures of their children to the doctor. Teenagers in consensual sexting. Abuse survivors confiding their story to a friend or therapist online.
Ireland, 2022. Police receive 4,192 automatic reports. Real illegal material: 852, 20.3%. Explicit false positives: 471, 11.2%. The rest is content the algorithm classifies as “ambiguous” — legal for human investigators, suspicious for a machine that has no context, doesn’t understand sarcasm, can’t distinguish medical documentation from pornography, doesn’t know whether two teenagers are flirting or an adult is grooming a minor.
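A back-of-the-envelope check of the Irish figures quoted above, nothing more than arithmetic on the numbers in this paragraph:

```python
# Arithmetic on the 2022 Irish figures cited above (counts from the article).
total_reports = 4_192
confirmed_illegal = 852
explicit_false_positives = 471
ambiguous = total_reports - confirmed_illegal - explicit_false_positives  # 2,869

print(f"confirmed illegal:     {confirmed_illegal / total_reports:.1%}")          # 20.3%
print(f"explicit false alarms: {explicit_false_positives / total_reports:.1%}")   # 11.2%
print(f"'ambiguous' remainder: {ambiguous / total_reports:.1%}")                  # 68.4%
# Roughly four out of five automated reports did not point to confirmed
# illegal material, and every one of them still had to be triaged by a human.
```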
Every false positive is an investigation that drains resources from real cases. It’s a family subjected to a search, phones seized, interrogations. It’s an innocent person entered into police databases, with all the consequences that entails for work, travel, reputation. And no one is responsible: the algorithm “flagged”, the platform “complied”, the state “investigated”. The chain of accountability dissolves into opaque code no one can control, no one can question, no one can sue.
Scientists call it the “black box problem.” Lawyers call it the “accountability gap.” The practical result is that you are more likely to be investigated for sending an innocent photo to your doctor than a real predator is to be caught using encryption techniques that algorithms cannot break.
Chat Control: The Deal That Isn’t a Deal
26 November: What Really Happened
26 November 2025. The Council of the European Union — the body in which the governments of the 27 Member States negotiate and vote on legislation — announces it has reached a “historic compromise” on CSAR after three years of deadlock. Mainstream headlines: “EU backtracks on Chat Control.” “Brussels abandons mass surveillance.” “Victory for digital privacy.”
Patrick Breyer posts a thread on X with the tone of an alarm, not a celebration: “The headlines are misleading. Chat Control is not dead, it is just being privatized.” Privatised, not stopped. A distinction 450 million EU citizens would do well to understand.
The deal text removes mandatory “detection orders” — the orders that would have forced platforms such as WhatsApp, Signal and Telegram to scan all communications. On paper, scanning becomes “voluntary.” Platforms can implement automatic detection systems, but are not obliged to.
It looks like a victory. It’s a semantic trick.
Article 4: The Backdoor in the Text
Article 4, in the new Danish compromise text: platforms classified as “high risk” must implement “all appropriate risk mitigation measures”. It sounds like reasonable technical wording. It’s a wide-open door.
Three unanswered questions in the text:
1. Who classifies a platform as “high risk”? The text doesn’t specify. A national authority? The European Commission? A new EU body? And what are the criteria? Number of users? Types of content? Presence of end-to-end encryption? The latter is crucial: according to leaked Council documents, services with E2EE, anonymous communication or real-time messaging are considered “high risk” by definition. If you use encryption to protect your communications, you’re automatically suspect.
2. What are “appropriate measures”? The text doesn’t define, limit or exclude. Can they include automated scanning? Nothing forbids it. Access to metadata? Nothing prevents it. Backdoors in encryption? The language is vague enough to allow it. It’s a blank cheque signed by legislators and handed to enforcement authorities.
3. Who watches the watchers? Oversight mechanisms are absent: no prior court authorisation, no independent supervisor, no binding transparency about how “risk mitigation measures” are applied.
Compare this to the GDPR. The GDPR also uses similar wording — “appropriate technical and organizational measures” — but pairs that phrase with 30 pages of recitals that LIMIT interpretation, with specific articles on data subject rights, with mandatory data protection impact assessments, with a system of independent regulators (national DPAs and the European Data Protection Supervisor). The GDPR is a bureaucratic maze, but that bureaucracy is protection. Chat Control is an empty corridor with no doors.
The Precedent: When “Voluntary” Means Mandatory
This is not the first time the European Union has used this trick.
AI Act, 2024. The regulation introduces voluntary Codes of Practice for providers of general-purpose AI models. Voluntary, on paper. But in practice? If you don’t sign up, you face stricter regulatory scrutiny, more frequent audits, political hostility from the Commission. Result: the Codes become de facto mandatory. No one imposed them by law. But no one can afford to ignore them.
Digital Services Act, 2022. The DSA creates a system of “Trusted Flaggers” — certified organisations that can report illegal content and whose reports platforms must prioritise. Platforms can join the system voluntarily. But those who don’t? They’re excluded from policy tables, marginalised in dialogue with institutions, treated as uncooperative. Voluntary becomes inevitable.
Disinformation Code, 2022. A Code of Practice on Disinformation that platforms can voluntarily sign to fight fake news. Facebook, Google, Twitter sign. Not because they’re forced, but because the political cost of NOT signing — being accused of enabling disinformation — is unbearable.
The pattern is identical: don’t mandate directly, but make the alternative too expensive. Not power that forbids, but power that makes a single choice “rational”. Not explicit sanctions, but an environment in which only one option is actually viable.
Breyer calls it “coercion by omission”. I call it the neoliberal government of conduct: I don’t tell you what to do, but I structure the environment you operate in so that you “freely” choose what I wanted you to choose.
The Exemption Nobody Mentions
There is one detail in the deal text that nobody mentions in press conferences, official statements, or “compromise achieved” celebrations.
Article X (final numbering still pending): military and governmental communications are EXEMPT from scanning. Official justification: “to protect classified information and ensure the security of national defence operations.”
Let’s pause. European leaders know perfectly well that client-side scanning compromises communication security. They know it so well that they exempt themselves from their own law. If scanning were truly safe, if it really didn’t create vulnerabilities exploitable by malicious actors, why shouldn’t the military use it? If it were really possible to “scan without breaking encryption” — as Johansson claims — why do the armed forces need an exemption?
The answer is obvious: because client-side scanning creates a backdoor. And backdoors can be exploited by whoever finds them: hackers, foreign spies, organised crime. Client-side scanning turns every European smartphone into a potential surveillance device — not only for European authorities, but for anyone who manages to compromise the system.
Those in power know this. And they protect themselves. Surveillance is for citizens, not for the rulers. The panopticon has a direction: from top to bottom, never the reverse.
| THEM (military/governments) | US (citizens) |
|---|---|
| Full exemption | “Voluntary” scanning |
| Classified info protected | Private life exposed |
| National security | Child safety |
| Communications secure by definition | Communications suspicious by default |
Germany: 42 Days of Invisible Pressure
7 October 2025. Germany announces it will vote against Chat Control. A crucial moment: with roughly 18% of the EU population, Germany is the natural anchor of a “blocking minority” (at least four Member States representing more than 35% of the EU population) — the mechanism that prevents the Council from adopting a law even when most governments back it. Stefanie Hubig, German Justice Minister, declares: “Unwarranted chat monitoring must be taboo in a constitutional state.”
14 October. The planned Council vote is cancelled. The required qualified majority is not there.
31 October. Denmark, holding the rotating Council presidency, presents a new “compromise” text. It removes the word “mandatory”. It leaves everything else: Article 4, age verification, risk categories, vague language.
13 November. Germany votes YES.
What happened in those 42 days? We don’t know. Negotiations between governments in Council are confidential. No public minutes, no transcripts, no recordings. We only know something changed. Bilateral pressure from other Member States? Trade-offs on unrelated files (energy, migration, structural funds)? Threats of diplomatic isolation?
The FDP — the liberal party that, in the previous German government, drove Berlin’s opposition to Chat Control, the party that fought data retention for years and promised “no mass surveillance in Germany” — is conspicuously silent. From the current coalition: no public statement, no explanation for the U-turn.
Lesson: if Germany, with one of the strongest constitutional courts in Europe, with its post-Stasi sensitivity to mass surveillance, with its economic and political weight — if Germany folds, which other country can resist?
The Coming Future
Chat Control and Age Verification: ID to Chat
Article 6 of the final deal: minors under 16 cannot use instant messaging services without verifying their identity. The list covers WhatsApp, Telegram, Instagram Direct, Snapchat, Discord and, in practice, any platform that allows direct user-to-user communication.
Verification method? The text proposes two options: uploading a government-issued ID (passport, ID card) or biometric facial scanning via AI.
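What such a gate would look like in software is straightforward. The sketch below is hypothetical (the enum, the function names and the threshold constant are illustrative, not taken from the deal text), but it captures the logic the article describes: no identification, no messaging.

```python
from dataclasses import dataclass
from enum import Enum, auto

MINIMUM_AGE = 16  # the threshold in the deal text, reduced to a single constant

class Proof(Enum):
    NONE = auto()
    GOVERNMENT_ID = auto()       # option 1: upload a passport or ID card
    BIOMETRIC_ESTIMATE = auto()  # option 2: AI facial age estimation

@dataclass
class User:
    proof: Proof = Proof.NONE
    verified_age: int | None = None  # known only after one of the checks above

def may_use_direct_messaging(user: User) -> bool:
    """Hypothetical gate: identification first, communication second."""
    if user.proof is Proof.NONE or user.verified_age is None:
        return False  # unverified users cannot message at all
    return user.verified_age >= MINIMUM_AGE

# An anonymous user is locked out by construction:
assert may_use_direct_messaging(User()) is False
```

Note the design: anonymity is never punished, it simply never unlocks the send button.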
Concrete scenario. You’re a 15-year-old Italian teenager. You want to send a message to a friend on WhatsApp. Before you can, you have to give your ID to Meta. Or you have to show your face to a camera that cross-checks it with government databases to verify your stated age. You want to join a Telegram group to organise a student strike? You must identify yourself first. You want to confide in someone online about a personal problem? You must declare who you are to the state — or to a corporation running the age-verification system on behalf of the state — before you can do it.
This is not prison. You’re not locked in. You’re free to communicate — but every communication requires prior identification. Not discipline (control in closed spaces like prisons or factories), but continuous control in open space. Not prohibition, but modulated access. It’s the shift from a society that tells you “you can’t” to a society that tells you “you can, but only if you tell us who you are.”
The United Kingdom has already implemented something similar in the Online Safety Act. The documented result: teenagers use VPNs, register accounts with fake identities, borrow their parents’ devices. Predators do the same. Real victims lose the only space where they could ask for help anonymously.
Czech Prime Minister Petr Fiala called it “pedagogical nonsense”: a measure that pretends to protect minors by excluding them digitally, while those who really want to harm them find a way around the system in five minutes.
But maybe minors aren’t the real target. Maybe everyone else is. Because once the age-verification infrastructure exists — biometric databases, facial matching systems, mandatory identification procedures — extending it from 16-year-olds to 18-year-olds, to 21-year-olds, to everyone is a simple policy update. No need to rebuild anything. Just change a number in a line of code.
And suddenly no one can communicate anonymously in Europe.
ProtectEU 2030: Chat Control Is Just the First Tile
Chat Control is not a standalone regulation. It’s the first tile of a broader strategy called ProtectEU, the EU Internal Security Strategy presented by the European Commission in April 2025 and largely ignored by the media. Declared goal: to give European law enforcement the ability to gain “lawful access to encrypted data” by 2030.
The roadmap foresees intermediate steps:
- 2027: Universal implementation of client-side scanning on all connected devices
- 2028: “Lawful access backdoors” in end-to-end encryption protocols
- 2030: “Biometric identity layer” for access to internet services
This isn’t a conspiracy theory. It’s official policy, documented, publicly accessible. It’s just that nobody reads it, because it’s buried in 200 pages of technocratic jargon and sold as a “post-COVID internal security strategy.”
Once the scanning infrastructure exists, modifying it to look for different content is technically trivial:
- TODAY: Child sexual abuse material
- TOMORROW: Terrorist content (who defines “terrorism”?)
- THE DAY AFTER: Hate speech (who defines “hate”?)
- 2028: Copyright violations (LaLiga, the Spanish football league, has already tried using smartphone GPS and microphones to detect illegal streams)
- 2030: “Misinformation” (who defines “false information”?)
No need to rebuild the infrastructure. You just update the list of what to search for. It’s algorithmic governance: no longer general norms applied to specific cases, but predictive profiles applied to populations. No longer “you committed offence X, you’re punished,” but “your statistical behaviour resembles that of people who commit offence X, you’re monitored.”
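A minimal sketch of why that update is trivial, assuming (purely for illustration) a generic on-device scanner whose targets live in a data structure rather than in the code:

```python
# The scanning logic is category-agnostic: widening its scope means editing
# this dictionary, not building a new surveillance system.
WATCH_LIST: dict[str, set[str]] = {
    "csam": {"<fingerprints supplied today>"},
    # "terrorist_content": {...},   # tomorrow: one new entry
    # "copyright": {...},           # the day after: another
    # "misinformation": {...},      # and so on
}

def scan(message_fingerprint: str) -> list[str]:
    """Returns every watch-list category the message matches."""
    return [category for category, fingerprints in WATCH_LIST.items()
            if message_fingerprint in fingerprints]
```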
We’ve seen this film before. 2006: the European Union adopts the Data Retention Directive “to fight terrorism.” Telcos and ISPs are obliged to retain metadata of ALL communications for 6–24 months. Who called whom, when, for how long, from where. Not the content, just the metadata. “It’s not surveillance,” they said, “it’s just technical data.”
2014: the Court of Justice of the European Union declares the Directive illegal. A massive violation of fundamental rights. But in the meantime? Eight years of generalised retention. Eight years in which European police and intelligence services had access to a database of the social relations of 500 million people.
Chat Control follows the same trajectory: years of scanning before a court stops it, if it ever stops it. And in the meantime, the infrastructure is built, normalised, made indispensable. And when the court says “stop,” the answer is: “but now everything depends on this system. We can’t dismantle it. Let’s make small adjustments to make it ‘legal’.”
It’s the technique of incremental normalisation. You don’t ask for everything at once. You ask for a piece, normalise it, then ask for the next piece. In ten years you’ve built a panopticon, one piece at a time, and at each step you’ve said “it’s just a small compromise, it’s reasonable, it’s for safety.”
The Voices You Never Hear
ECLAG — European Child Sexual Abuse Legislation Advocacy Group — represents over 50 child-rights organisations. It supports CSAR. Argument: without automated scanning, predators operate with impunity on encrypted platforms.
But there are child-protection organisations against the proposal. Volt Europa, a pan-European political party, documents: “The Commission’s plans have faced opposition from child protection organisations and abuse survivors.” EDRi (European Digital Rights) collects “69 opposing voices” including “child protection experts” as well as technologists, lawyers, academics.
Why do you never hear them? Because the media frame is binary: either you’re for Chat Control (thus for children), or you’re against (thus “on the side of paedophiles”). There’s no room for middle positions, methodological criticism, or alternatives.
The opposing organisations argue:
- Automated scanning clogs investigations. With false-positive rates of 50–80%, police drown in useless reports. Every hour spent checking that a child-in-bathtub photo is not pornography is an hour stolen from targeted investigations on real abuse networks.
- Algorithms criminalise victims. Teenagers sharing their own photos in consensual contexts are flagged. Abuse survivors documenting what happened to share with therapists or authorities are flagged. The system doesn’t distinguish between perpetrators and victims.
- Alternatives exist and are ignored. Immediate removal of publicly accessible CSAM. Increased resources for targeted investigations. “Safety by design” obligations for platforms — systems designed to make abuse harder without surveilling everyone. Massive investment in victim support — survivors need psychologists, social workers, safe shelters, not algorithms.
Patrick Breyer is working with a survivor of child sexual abuse in a legal challenge against Chat Control 1.0 (the temporary regulation that allows voluntary scanning). Her testimony: generalised scanning would NOT have protected her. Her abuser didn’t use public platforms. He used offline devices, physical exchange, psychological coercion. The algorithm is useless against that.
What would have helped her: an attentive school system, trained social workers, the possibility of confiding in someone without fear. The trauma is now increased by the awareness that every private communication she sends can be scanned, analysed, flagged.
But that narrative doesn’t serve Johansson’s rhetoric. So you never hear it.
3 April 2026: The Deadline for Chat Control
The temporary regulation — the one that allows platforms to scan voluntarily, born as an emergency measure in 2021 — expires on 3 April 2026. Either it is replaced by a permanent regulation (CSAR), or it’s extended again, or it lapses entirely.
The Council has its position: the 26 November text, with permanent “voluntary” scanning, age verification, vague Article 4, military exemptions.
The European Parliament has its position, adopted in November 2023: NO to indiscriminate scanning, YES only to targeted surveillance of specific individuals with individualised court warrants, absolute protection of end-to-end encryption, NO to generalised age verification.
These positions are irreconcilable. The trilogue — closed-door negotiations between Council, Parliament and Commission — must produce a compromise text both institutions can approve. On 1 January 2026 the rotating Council presidency passes from Denmark to Cyprus. Poland, which held it in the first half of 2025, declined to push mass scanning to a vote; whether Nicosia will show the same restraint is an open question. A presidency doesn’t just represent itself — it represents the entire Council.
Signal, Proton, Threema — encrypted messaging services — have publicly declared: if mandatory or de facto mandatory scanning is adopted, they will leave the European market. This is not a bluff. It is impossible for an end-to-end encrypted service to implement client-side scanning and remain truly end-to-end encrypted. It’s a logical contradiction. You can have surveillance or you can have privacy. Not both.
Possible outcome: Europeans lose access to secure communications precisely while a surveillance infrastructure is being built that makes those communications more necessary than ever.
The Price of Acceptance
The real problem is not Chat Control. It’s that Chat Control is no longer a scandal.
In 2013, Edward Snowden’s revelations on PRISM, XKeyscore and mass NSA surveillance shook Europe. Protests in the streets. A full European Parliament inquiry. Generalised surveillance of citizens was considered incompatible with democratic values, the rule of law, the Charter of Fundamental Rights.
Today, Europe is building its own version of PRISM — more sophisticated, more legal, more total — and calls it “child protection.” And most people don’t get outraged. They don’t take to the streets. They don’t call their MEPs. Because who can oppose child protection?
No one is defending paedophiles. That has to be stated clearly: child sexual abuse is one of the gravest, most devastating, most intolerable crimes imaginable. Every reasonable resource should be invested to prevent it, investigate it, punish perpetrators, support victims.
But defending the idea that there must exist spaces of communication not mediated by the state, not scanned by algorithms, not identified by biometric databases — that, suddenly, sounds like a luxury we can no longer afford. It has become radical to argue that privacy is not a privilege you earn by proving your innocence, but a right that must be protected for everyone, especially for those who have nothing to hide.
The philosopher Michel Foucault described the panopticon — Jeremy Bentham’s ideal prison, where inmates don’t know whether they are being watched at any given moment, but know they can be watched at any moment. The result: they internalise control. They behave as if always watched, even when they are not. The most effective power is not the one that explicitly forbids you from doing something, but the one that makes you self-censor because you know someone might be watching.
Chat Control is this. Not a law that forbids you to communicate, but a system that makes it impossible to communicate without the awareness that every word, every image, every shared thought can be scanned, analysed, stored, used against you. The difference between “you can’t speak” and “you can speak but we’ll be listening” is subtle. The consequences are total.
A journalist no longer receives leaks from whistleblowers because the whistleblower knows the message could be scanned and flagged before it even arrives. A domestic violence victim doesn’t seek help online for fear that an abusive partner, if they have access to the device, might see she tried to confide in someone. A queer teenager doesn’t explore their identity in chats with peers because they know parents might be notified. An activist doesn’t organise a protest on Telegram because they know they must identify themselves to do so.
No mass arrests are needed. No overt censorship is needed. Only the awareness of the possibility of being monitored. The panopticon works even when the cells are empty.
The question is not whether we will stop Chat Control. The proposal has been blocked seven times since 2022, and each time it came back with a different name, a different packaging, a slightly tweaked compromise. It will come back again. Lobbying continues, the 24 million dollars weren’t spent for nothing, the security industry has invested too much to give up.
The question is: when will we stop calling “protection” what is surveillance? When will we recognise that “voluntary under threat” is not consent but coercion? When will we accept that a Europe that scans all private communications in search of criminals has stopped being the open society it claims to defend?
3 April 2026 is the legislative deadline. But the real deadline is cultural, epistemological, political. It’s the moment we accept — or refuse — the idea that safety requires the end of privacy. That protecting some requires surveilling all. That innocence must be proven continuously, algorithmically, in real time.
That deadline has no date. It’s now. It’s every time we accept a “small reasonable compromise” that a year later becomes the new baseline from which the next compromise is demanded.
Breyer was right. Chat Control is not dead. It has been privatised, normalised, made inevitable. The question is not whether it will be implemented. It’s whether, when it is, we will still remember that we once had a choice.
Sources & further reading: Chat Control
- EDRi – Is this the most criticised draft EU law of all time?
- Patrick Breyer – Reality check: EU Council Chat Control vote is not a retreat…
- EU Commission – Roadmap for effective and lawful access to data for law enforcement
- EUCRIM – Commission presents ProtectEU: the new EU Internal Security Strategy
- TechRadar – The EU wants to decrypt your private data by 2030
- Patrick Breyer – Chat Control evaluation report: Commission fails to show effectiveness
- Greens/EFA – Trustworthy Age Assurance? (study on age verification & fundamental rights)
- Tuta – Chat Control is back & we’ve got only a few weeks to stop the EU CSAM scanning plans