Redazione RHC : 25 July 2025 11:46
Artificial intelligence-generated fakes have now crossed the threshold between reality and fantasy. Ever more credible and insidious, these video and audio fakes are becoming increasingly dangerous. Their potential for harm is clear: from politically motivated smear campaigns to celebrity impersonations, not to mention scams targeting businesses and individuals, these technologies can undermine the credibility of the information we perceive online.
Danish lawmakers have taken an important step by introducing legislation that directly addresses this growing threat. The recent legislation grants individuals legal ownership rights over their own faces and voices, outlawing the creation of harmful deepfakes. This measure marks an important step forward, setting a model for other European Union countries and highlighting the essential need for a unified legal strategy to combat the misuse of artificial intelligence.
Anna Collard, Senior Vice President of Content Strategy and Evangelist at KnowBe4, said: “Danish lawmakers are leading by example: legal protection against deepfakes is urgently needed. By giving people back ownership of their own faces and voices, Denmark is taking an important step in the fight against AI abuse. But legislation alone is not enough. People must learn to recognize the signs of deepfakes. In addition to governments, educational institutions and tech companies must also invest in digital resilience.”
The nature of the problem is highly individual and concerns personal identity. Deepfake technologies can now credibly replicate distinctive features such as a person's voice and face, which are as unique as fingerprints.
In the absence of adequate security measures, the risk is that anyone can be the target of digital impersonations for fraudulent purposes, for example through fake phone calls attributed to the CEO to defraud companies, or by spreading false political messages. Danish law recognizes this risk by treating your voice and facial appearance as personal property, legally protected from exploitation. This is an indispensable legal limit in a rapidly evolving landscape, where AI-generated content is becoming nearly indistinguishable from authentic recordings.
According to Collard, legislation, while crucial, is only part of the solution. Indeed, technologies to detect fakes are still developing, and not everyone has the resources or skills to recognize counterfeit content. Consequently, as deepfakes become more accessible and credible, the public will have to bear the burden of distinguishing between fact and fiction.
This is why education is as crucial as regulation. Awareness campaigns, digital literacy programs in schools, and workplace training sessions all play a key role in building resilience, she continued. “It’s about teaching people to recognize narrative deception and developing the ability to recognize manipulation when it occurs.”
Ultimately, the fight against deepfakes (like the fight against phishing, which has been largely lost) will require a multifaceted approach. Governments must define legal frameworks, tech companies must develop more effective detection tools and prioritize the responsible development of artificial intelligence, and citizens must be empowered to navigate an online world where seeing no longer necessarily means believing.
Denmark has made a bold first move. Now let’s see if Europe will follow suit, complementing legal protections with the tools, training, and awareness needed to defend against this digital deception.