EU ‘chat control’: Between child protection and mass digital surveillance

In May 2022, the European Commission presented a wide-ranging legislative proposal aimed at improving the protection of children on the internet. The then EU Commissioner for Home Affairs, Ylva Johansson, introduced the draft regulation on preventing and combating child sexual abuse. At the heart of the proposal is ‘chat control’ – a tool that could oblige providers of online communication services to automatically scan private messages and chats for indications of sexualised violence against children.

The Commission justifies the plan with the sharp rise in the number of depictions of abuse on the internet and the aim of identifying perpetrators at an early stage and better protecting children. However, the initiative has divided politicians and the public: while supporters see it as an important step towards better protection of minors, critics warn of a massive invasion of privacy and possible mass surveillance.

On 8 October 2025, the EU Member States once again failed to reach agreement on the controversial ‘chat control’ measure aimed at combating child sexual abuse material, as a compromise proposal put forward by the Danish Council Presidency did not secure sufficient support. Germany actively blocked the proposal, with Federal Justice Minister Stefanie Hubig stating that surveillance of private communications without cause would be unacceptable in a state governed by the rule of law. The matter, however, is not definitively closed: forthcoming Council presidencies may bring a revised proposal back for discussion, and a renewed vote at European level is scheduled for 6 and 7 December 2025. Should a revised compromise be tabled by then and Germany alter its position, the regulation could still be adopted. Before entering into force, however, it would have to pass through the so-called trilogue procedure between the European Commission, the Council of the EU, and the European Parliament.

Purpose and scope of the regulation

The purpose of the draft regulation is to combat the dissemination of depictions of sexual violence against children and to protect children from abuse on the internet. 

To this end, communication service providers would be required to automatically scan messages, emails, and cloud content for possible depictions of abuse or grooming attempts. Under the current proposal, authorities could issue so-called ‘detection orders’ obliging service providers to redesign their systems so that suspicious content is detected and reported – even without concrete suspicion. According to Article 7 of the draft regulation, the mere possibility that child sexual abuse material could be distributed via the communication service in question is sufficient grounds for such an order. Since abusive content can in theory be distributed via any communication service, virtually all users would be affected. Particularly controversial is the fact that services with end-to-end encryption – such as WhatsApp, Signal, or Threema – are also to be included. To check encrypted messages at all, they would have to be analysed on the device itself, before encryption. This so-called ‘client-side scanning’ would effectively undermine the security of encrypted communication.
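The mechanism can be made concrete with a minimal sketch. The following Python fragment is purely illustrative and not taken from the proposal: the device-local set of fingerprints (KNOWN_ABUSE_HASHES) and the reporting hook (report_to_authority) are hypothetical names, and real systems would rely on perceptual hashing and machine-learning classifiers rather than exact cryptographic hashes. The structural point, however, is the same: the content is inspected in plaintext, before encryption.

```python
import hashlib

# Illustrative placeholder: a device-local database of fingerprints of
# known abusive material. Real deployments would use perceptual hashes
# (robust to re-encoding), not exact SHA-256 digests.
KNOWN_ABUSE_HASHES: set[str] = set()

def report_to_authority(fingerprint: str) -> None:
    # Hypothetical reporting hook: a real system would forward the
    # match to the provider or a designated authority.
    print(f"match reported: {fingerprint}")

def send_message(plaintext: bytes, encrypt) -> bytes:
    # Client-side scanning happens here, on the plaintext, BEFORE the
    # message is encrypted - which is precisely why end-to-end
    # encryption no longer shields the content from inspection.
    fingerprint = hashlib.sha256(plaintext).hexdigest()
    if fingerprint in KNOWN_ABUSE_HASHES:
        report_to_authority(fingerprint)
    return encrypt(plaintext)
```

Whatever detection technology is used, it must see the message in the clear; the confidentiality that encryption is meant to provide is therefore broken by design, not by accident.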

In addition, the regulation obliges providers of apps, websites, and social networks that can be used to contact children to introduce age verification in accordance with Article 4(3) and Article 6(1)(c). This is intended to identify underage users and better protect them through risk-mitigation measures yet to be determined. In future, users would have to prove their age – for example, by means of identity documents or biometric procedures.

Technical feasibility and risks

Whether the proposed chat control is technically feasible at all appears highly doubtful. The project directly contradicts the fundamentals of secure digital communication. End-to-end encryption guarantees that only the sender and recipient can read a message – not even the provider itself has access to the content. However, if an automated detection system were to be activated on the user’s device, this protective mechanism would inevitably be undermined. Such a solution would amount to a profound intrusion into all end devices – effectively permanent surveillance software.
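For readers unfamiliar with the underlying guarantee, a short sketch using the PyNaCl library (chosen here purely for illustration; real messengers use their own protocols, such as the Signal protocol) shows what end-to-end encryption means in practice: the provider relays only ciphertext and holds neither private key, so it cannot read the content.

```python
from nacl.public import PrivateKey, Box

# Each party generates a key pair on their own device; the private
# keys never leave those devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"confidential")

# The provider sees and stores only the ciphertext. Without one of the
# private keys, it cannot recover the plaintext.

# Bob decrypts with his private key and Alice's public key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"confidential"
```

Client-side scanning does not break this mathematics; it sidesteps it by inspecting the message on the device before the encryption step.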

Added to this is the question of technical reliability. Even a very low error rate would have serious consequences: at a false-positive rate of just one per cent, the roughly 2.8 billion messages sent daily in Germany alone would generate around 28 million false alarms per day. Each of these messages would have to be reviewed – which would require enormous personnel resources and the short-term storage of the communication content concerned. Even with minimal time spent per case, such a review would be practically impossible to carry out.
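The arithmetic behind these figures is easy to verify; note that the one per cent rate is the hypothetical used above, not a measured value:

```python
# Figures from the text: ~2.8 billion messages per day in Germany and
# a hypothetical false-positive rate of one per cent.
messages_per_day = 2_800_000_000
false_positive_rate = 0.01

false_alarms = messages_per_day * false_positive_rate
print(f"{false_alarms:,.0f} false alarms per day")  # 28,000,000

# At just one minute of human review per flagged message, a single day
# of traffic would produce roughly 265 person-years of review work
# (assuming 8-hour days and 220 working days per year).
person_years = false_alarms / 60 / 8 / 220
print(f"{person_years:,.0f} person-years of review per day")  # ~265
```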

The planned age verification also raises data protection issues. In order to reliably identify and protect minors, communication service providers and app stores would have to permanently store age data and link it to the online behaviour of their users. This would require comprehensive recording of visited websites and services used – thus creating a new form of digital surveillance.

Last but not least, such an infrastructure poses considerable IT security risks. Every technical backdoor that is created inevitably weakens the overall security level of digital communication. What was originally intended for protection could itself become a gateway – for cybercriminals, state actors or others who specifically exploit security gaps. This would create a system that not only enables surveillance, but also undermines trust in digital services and encryption technologies in the long term.

From a legal standpoint as well, the proposed regulation rests on precarious ground. It would entail profound interferences with fundamental rights enshrined in the Charter of Fundamental Rights of the European Union (CFR) – in particular the right to respect for private and family life and communications (Article 7 CFR), the right to the protection of personal data (Article 8 CFR), and the freedom of expression and information (Article 11 CFR).

At the core of the draft lies the so-called “detection order”, which would empower national authorities to oblige providers of digital services – such as messaging platforms, online intermediaries, or cloud services – to actively scan user content where there is a suspicion that such services are being used to disseminate child sexual abuse material. The request for such a measure is to be made by the coordinating authority of the Member State in which the respective provider is established. The decision to issue a detection order would then rest with a judicial authority or an independent administrative authority.

While the draft stipulates that a detection order may only be issued on the basis of specific evidence of a substantial risk and following a comprehensive proportionality assessment, this safeguard appears limited in practice. The assessment is to take into account, inter alia, the provider’s own risk evaluation, its technical and organisational capacities, as well as the opinions of the competent data protection authorities and of the proposed EU Centre to Prevent and Counter Child Sexual Abuse. Should a national authority deviate from the opinion of this EU Centre, it is obliged to provide a reasoned explanation to both the European Commission and the Centre itself.

Despite these formal procedural safeguards, the interference with private communications would ultimately remain a matter of administrative discretion, likely to be exercised in favour of public security considerations. However, a generalised and indiscriminate surveillance of all users would hardly be reconcilable with the principle of proportionality. Even if the objective – the protection of children – is legitimate, any interference with fundamental rights must be suitable, necessary, and proportionate. A surveillance regime that affects millions of law-abiding citizens could scarcely meet these requirements.

Furthermore, the proposal raises the spectre of function creep. Once established, surveillance infrastructures can rarely be confined to their original purpose. What is introduced today to combat child sexual abuse could tomorrow be invoked in the context of counter-terrorism, intellectual property enforcement, or even political monitoring. As noted above, what was initially conceived as a protective measure may itself become a gateway to misuse. The logic of prevention could easily evolve into one of sanction, turning proactive control into punitive surveillance. The proposed regulation would lay down precisely the technical and legal foundations for such a shift – foundations that, once in place, would be difficult to dismantle.

The rule-of-law framework of the draft also gives rise to concern. Detection orders are conceived as preventive measures which, under Article 7 of the proposal, may be requested by a national coordinating authority and authorised by a judicial or administrative authority – without a mandatory judicial warrant. This effectively shifts the power to authorise interferences with fundamental rights from the judiciary to the administrative level, a development that runs counter to established rule-of-law standards.

In addition, the envisaged monitoring of users’ online activity for the purpose of age verification would transform internet service providers into surveillance entities of their own customers – an approach hardly compatible with the right to privacy. Private companies would, in effect, assume a quasi-policing function, being compelled to participate actively in the monitoring of communications.

Societal Implications and Alternatives

Such a system would entail profound societal consequences. If citizens were to assume that every message is being read or analysed, their patterns of communication would inevitably change. This “chilling effect” would particularly affect those who rely on confidentiality, including journalists, lawyers, medical professionals, and whistleblowers. The proposed age verification mechanisms would compel users to disclose personal or biometric data to effectively every communication platform. Individuals lacking official identification, or those concerned about data protection, could thus be excluded from the digital sphere altogether.

Furthermore, the monitoring of online activity could incentivise users to migrate to the dark web or to access the internet via jurisdictions not subject to such surveillance. The combination of age verification, online activity monitoring, and communication analysis would provide the basis for comprehensive profiling – encompassing interests, social contacts, and sexual preferences – effectively amounting to the reading of every citizen’s digital diary.

In response, data protection organisations and constitutional law scholars have called for a fundamental shift in approach. Rather than investing in indiscriminate surveillance, the focus should be on targeted investigative measures, expedited removal of known abusive content, enhanced international cooperation, and the provision of preventive and support services for victims.

Conclusion

The debate surrounding the EU’s chat control proposal illustrates the profound difficulty of balancing child protection with fundamental rights and freedoms. Effective protection of children must not come at the expense of the liberty and privacy of all citizens. The draft regulation exemplifies the tension between digital child protection and the foundational principles of a free society. While the goal of safeguarding children is indisputably legitimate and necessary, the chosen approach would set the Union on a slippery slope towards digital mass surveillance. The proposal is neither technically nor legally convincing – it threatens the security of encrypted communications, undermines fundamental rights, and creates precedents with implications far beyond the original objective.

A meaningful regulatory framework must ensure the protection of children without compromising the core tenets of privacy and the rule of law. Rather than relying on surveillance without cause and technically questionable control mechanisms, policy should focus on targeted investigations, digital education, and the strengthening of parental and caregiver responsibility. Effective prevention does not begin on the servers of major platforms, but within the family unit, with those who accompany, educate, and protect children in their daily lives.