Sandro Sana : 16 September 2025 14:39
The news is simple, the technology is not. Chat Control (CSAR) was created to uncover CSAM and grooming dynamics within messaging platforms. The “modernized” version forgoes the server-side backdoor and shifts the focus to the user’s device: client-side scanning before the content is end-to-end encrypted.
This is where the two levels of the story come into play: on the one hand, investigative capacity and the dismantling of illegal channels; on the other, erosion of confidentiality and a control infrastructure ready to be expanded. Wired Italia has listed the hot spots; here we go under the hood.
The typical operating model involves three stages, all on-device: a) comparison of the content with perceptual hashes of already known material; b) ML inference to flag new content (never-before-seen images/videos) with CSAM-compatible features; c) NLP to detect linguistic patterns attributable to grooming. The key is the use of perceptual hashes (e.g., PDQ/TMK) capable of “recognizing” a photo even if scaled, compressed, or slightly cropped: technically powerful, but never statistically infallible.
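To make stage (a) concrete, here is a minimal sketch of perceptual-hash matching. It uses pHash from the open-source Python `imagehash` library as a stand-in for PDQ/TMK (not shown here); the hash value and the distance threshold are placeholder assumptions, but the principle is the same: near-duplicates land within a small Hamming distance of a known hash.

```python
# Minimal sketch of stage (a): perceptual-hash matching against a known list.
# pHash stands in for PDQ; the mechanism (Hamming distance with a threshold,
# tolerant of small transformations) is the same.
from PIL import Image
import imagehash

# Hypothetical database of hashes of already-known material (placeholder value).
KNOWN_HASHES = [imagehash.hex_to_hash("d1c4b2a09f8e7d6c")]
MAX_DISTANCE = 8  # Hamming threshold: lower = stricter, fewer false positives

def matches_known_material(path: str) -> bool:
    """Return True if the image is a near-duplicate of any known hash."""
    h = imagehash.phash(Image.open(path))  # robust to scaling/compression/minor crops
    return any(h - known <= MAX_DISTANCE for known in KNOWN_HASHES)
```

The threshold is the whole game: tighten it and resized or re-compressed copies slip through; loosen it and unrelated images start to collide.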
When we move from deterministic matching on known material to the "new suspect" through probabilistic models, we enter a domain where false positives and bias become operating costs. The independent literature on client-side scanning describes it as an additional attack surface: models, lists, and scanners become sensitive code distributed across billions of endpoints, and therefore extractable, tamperable, and open to reverse engineering.
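A back-of-the-envelope calculation shows why false positives dominate at platform scale. All numbers below are hypothetical assumptions, not measured figures for any deployed system:

```python
# Base-rate illustration for probabilistic detection of "new" material.
tpr = 0.90          # true positive rate (sensitivity), assumed
fpr = 0.001         # false positive rate (0.1%), assumed optimistic
prevalence = 1e-6   # fraction of scanned items actually illicit, assumed

# Precision via Bayes' rule: P(illicit | flagged)
precision = (tpr * prevalence) / (tpr * prevalence + fpr * (1 - prevalence))
print(f"Precision: {precision:.2%}")  # ~0.09%: >99.9% of flags are false positives

# At platform scale, absolute numbers dominate:
items_per_day = 1e9
false_flags = items_per_day * (1 - prevalence) * fpr
print(f"False flags per day: {false_flags:,.0f}")  # ~1,000,000
```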
The Commission proposal (COM(2022) 209) establishes risk assessments for providers, the power to issue targeted detection orders, reporting obligations to an EU Centre, and cooperation with national authorities. In other words, the "technical" layer is encapsulated in a procedural chain with defined roles and responsibilities, while the "political-regulatory" layer decides when and how far to push the scan. Non-compliance carries significant sanctions.
The EU data protection authorities (EDPB/EDPS), however, warn of a risk of de facto generalized scanning: if orders become broad, opaque, or repeated, the step from a targeted measure to mass screening is short. Their criticism is explicit on proportionality, compatibility with the Charter, and the technical ineffectiveness of detecting "new CSAM" without cascading errors.
September and October 2025 are decisive junctures at the EU Council: some states (including Germany and Luxembourg) have formally expressed opposition to mandatory scanning, while others are pushing the "Danish compromise" line. Even with an agreement in the Council, the file would then go to trilogue with a Parliament far more skeptical of CSS. For security teams this has a practical consequence: architectures and roadmaps may need to accommodate opposite scenarios within a few weeks.
On a strictly operational level, on-device scanning can shorten the latency between the appearance of illicit content and a qualified report, especially for known content, thanks to perceptual hashes robust to common transformations. Standardizing the submission of indicators and technical metadata to the EU Centre can improve de-duplication, case prioritization, and cooperative cross-jurisdiction takedowns. The result: faster disruption of closed groups and channels that currently thrive on the "technical time" of discovery.
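As an illustration of the de-duplication point, a hypothetical clearing-point routine might cluster near-identical perceptual hashes so that one item of known material generates one case rather than thousands of reports. The hash format (64-bit integers) and threshold are assumptions:

```python
# Hypothetical sketch: de-duplicating incoming reports by clustering
# near-identical perceptual hashes (Hamming distance).
def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def deduplicate(reports: list[int], max_distance: int = 8) -> list[int]:
    """Keep one representative 64-bit hash per near-duplicate cluster."""
    representatives: list[int] = []
    for h in reports:
        if not any(hamming(h, r) <= max_distance for r in representatives):
            representatives.append(h)
    return representatives

# deduplicate([0xD1C4B2A09F8E7D6C, 0xD1C4B2A09F8E7D6D, 0x0123456789ABCDEF])
# -> two representatives: the first two hashes differ by one bit and collapse.
```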
The same pipeline introduces a systemic bypass of end-to-end encryption: content lives "in the clear" on the device, in the presence of a privileged scanner. If those artifacts (models, lists, threshold logic) are updated remotely, certified update channels and verifiable integrity are needed; otherwise we are adding a privileged path inside the user's perimeter. This is not just about privacy: it is security engineering. Academic analyses warn that CSS expands the attack surface (model tampering, evasion, data exfiltration) and shifts trust from the cryptographic protocol to the scanner's supply chain. Meanwhile, users learn to write differently (self-censorship), and false positives become real social and judicial costs.
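"Certified update channels and verifiable integrity" means, at minimum, that the device refuses any model or hash-list update whose signature does not verify against a pinned publisher key. A minimal sketch with Ed25519 via the Python `cryptography` library; names and deployment details are illustrative, not taken from any standard:

```python
# Verifiable integrity for remotely updated scanner artifacts (models, hash
# lists, threshold logic), assuming an Ed25519 key pair held by the publisher.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def verify_artifact(artifact: bytes, signature: bytes, pubkey_bytes: bytes) -> bool:
    """Accept the update only if the detached signature verifies."""
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    try:
        public_key.verify(signature, artifact)
        return True
    except InvalidSignature:
        return False

# In a real deployment the public key would be pinned in the app binary,
# updates logged to a transparency log, and rollbacks rejected by version
# monotonicity; none of that is shown here.
```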
A useful historical note: when Apple proposed a hybrid CSAM-detection system with on-device matching in 2021, the technical community and advocates raised similar objections; the company froze the rollout and later abandoned it. This precedent shows how fragile the balance between protection and surveillance becomes when control shifts to the client.
The same coin changes face based on four technical and procedural variables: scope, targeting, transparency, auditability.
As long as the scope stays limited to known material (matching against certified perceptual hashes), detection orders are strictly circumscribed and time-limited, transparency includes public metrics on TPR/FPR and model drift, and everything is auditable by third parties, with independent control of the databases and guarantees of recourse for those reported, we can speak of a protection tool. When the scope shifts to generalized prediction (AI that interprets text), the duration becomes permanent, the algorithms and lists are opaque and non-reviewable, and sanctions push providers to weaken E2EE, the needle crosses the threshold: the measure becomes surveillance infrastructure. The EU bodies themselves have written in black and white that this risk is concrete.
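The four variables lend themselves to a checkable formulation. The sketch below is purely illustrative (the structure and field names are hypothetical), but it captures the logic of the paragraph above: "protection tool" is a conjunction, and losing any single property crosses the threshold.

```python
# Illustrative predicate over the four variables: scope, targeting,
# transparency, auditability.
from dataclasses import dataclass

@dataclass
class DetectionOrder:
    known_material_only: bool   # scope: hash-matching on certified lists only
    time_limited: bool          # targeting: circumscribed and temporally bounded
    public_error_metrics: bool  # transparency: published TPR/FPR and model drift
    third_party_audited: bool   # auditability: independent DB control + recourse

def is_protection_tool(order: DetectionOrder) -> bool:
    """All four properties must hold; losing any one crosses the threshold."""
    return all((order.known_material_only, order.time_limited,
                order.public_error_metrics, order.third_party_audited))
```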
If the goal is to close illegal channels, client-side scanning offers tactical advantages that are hard to deny. But at the architectural level, it introduces a privileged inspection point inside devices, translating a criminal problem into a systemic risk for the confidentiality of communications and the hygiene of the E2EE ecosystem. The difference between tool and surveillance is not semantic: it is engineering plus governance. Selectivity, verifiable proportionality, public error metrics, independent audits, and rights of defense are the only way to stay on the right side of the coin. Everything else is a slippery slope, and Europe knows it, because its own watchdogs have reminded it.