Paolo Galdieri : 15 September 2025 07:58
In recent months, I’ve repeatedly found myself filing complaints about fake videos circulating online. We’re not just talking about stolen content distributed without consent, but also deepfakes: videos in which famous faces are superimposed over foreign bodies, often used to promote financial investments or inserted into pornographic contexts.
A phenomenon that, unfortunately, no longer surprises by its mere existence, but by the speed with which it grows, spreads, and improves.
Those who follow this field have learned of platforms like Mia moglie or Phica: platforms where apparent spontaneity often hides a veritable market in other people’s bodies and intimacy. In many cases, videos are uploaded without the consent of the people depicted: private recordings are stolen, or content shared in a moment of trust suddenly becomes public.
The next technological leap is represented by deepfakes. If on amateur sites the problem was (and is) the theft of real images, today the bar is raised even further: it’s no longer necessary to steal a file; a photograph is enough to create a video in which the person appears to do or say something they never actually did. It’s the transition from the violation of privacy to the creation of a truly alternative reality.
The impact of these manipulations varies depending on who the victim is.
Famous faces – actors, politicians, influencers – are a privileged target: their public exposure makes it easier to discover and debunk the fakes, but at the same time amplifies the damage, because they spread very quickly and on a large scale.
For ordinary people, however, the situation is even more insidious. Lacking the same visibility, they also lack the means to defend themselves: they are rarely able to monitor the internet or obtain timely removal of content. In these cases, the deception is often more credible, precisely because there is no publicly available “original” to compare the fake with. The consequence is devastating: ordinary people find themselves involved in manipulated sexual videos or false financial promotions, with destructive effects on their private and professional lives.
Faced with this ever-evolving reality, the law often appears to lag behind technology. The Italian legal tradition has already introduced important rules, but deepfakes problematically elude traditional legal categories. The source images may be public, and the manipulated content isn’t limited to pornography but extends to financial, political, or health-related misinformation. Yet the damage to the person involved is comparable to, and in some cases even worse than, the harms the law already covers.
Therefore, the introduction of dedicated legislation could be a necessary response. The goal is not to multiply criminal charges, but to identify a provision and specific aggravating circumstances related to the use of artificial intelligence to manipulate an individual’s image and voice. Bill No. 1146/2024, which dedicates an article to criminal provisions, takes this approach. The bill introduces a new crime, the “unlawful dissemination of content generated or altered with artificial intelligence systems,” punishable by one to five years’ imprisonment for those who disseminate, without consent, artificially manipulated images, videos, or voices capable of misleading. Furthermore, the bill provides for a series of common and specific aggravating circumstances: fraud, computer fraud, money laundering, self-laundering, market manipulation, and even impersonation can be punished more severely if committed using artificial intelligence tools. This is therefore an attempt to update the Criminal Code, without creating an independent corpus, but by strengthening existing tools when AI becomes a means of criminal activity.
At the European and international level, the debate is already open: just think of the AI Act adopted in Brussels, which establishes common rules for artificial intelligence systems, including transparency obligations for manipulated audiovisual content.
The fight against deepfakes and stolen videos cannot be won with a single tool, but with the synergy of multiple fronts of intervention.

The technological challenge. We need algorithms capable of automatically identifying manipulated content and flagging it before it goes viral. Some universities and research centers are developing digital watermarks and image-tracking systems to distinguish authentic content from fake. However, it’s a never-ending race: each new detection tool stimulates the emergence of more sophisticated falsification techniques. The challenge is therefore ongoing and requires significant public investment, not left to private interests alone.
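One simple building block behind such provenance systems is cryptographic authentication of media files: a publisher tags the original bytes, so any later manipulation invalidates the tag. The sketch below is purely illustrative (it uses an HMAC over raw bytes as a stand-in for the more elaborate watermarking and content-credential schemes mentioned above; the key and file contents are hypothetical), using only Python’s standard library:

```python
import hmac
import hashlib

def sign_content(content: bytes, key: bytes) -> str:
    # Produce a provenance tag bound to the exact media bytes.
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, key: bytes, tag: str) -> bool:
    # Any alteration of the bytes, however small, invalidates the tag.
    return hmac.compare_digest(sign_content(content, key), tag)

# Hypothetical publisher key and media payload, for illustration only.
key = b"publisher-secret-key"
original = b"\x89PNG...original-frame-data"

tag = sign_content(original, key)
assert verify_content(original, key, tag)            # authentic copy passes
assert not verify_content(original + b"x", key, tag)  # tampered copy fails
```

In practice, schemes like this only prove that a file matches what a publisher signed; they cannot, by themselves, detect a deepfake that was never signed in the first place, which is why they complement, rather than replace, the detection algorithms discussed above.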
The legal challenge. Beyond regulations, effective procedures are crucial. A victim who discovers a deepfake on an international platform cannot wait months to obtain its removal. Emergency channels are needed, similar to those introduced for online terrorist content, which allow authorities to request immediate and binding deletion. At the same time, it is necessary to strengthen international cooperation, because servers are often abroad and those responsible operate in countries with less stringent legislation.
The cultural challenge. This is probably the most decisive battle. A society that cannot distinguish truth from falsehood is destined to become fertile ground for manipulation of all kinds, from gossip to political propaganda. We need digital education in schools, adult literacy programs, and institutional campaigns that explain the risks and teach how to recognize manipulated content. Critical awareness is the best antidote to the virality of fakes.
The common thread that links amateur sites like Mia moglie and Phica to the most sophisticated deepfakes is always the same: the non-consensual use of a person’s image and identity. Today, this is no longer just a problem related to pornography or the morbidity of certain contexts, but a question that concerns democracy, the economy, and civil coexistence.
If anyone can create a credible video featuring the face of a politician declaring war, an entrepreneur inviting investment in a scam, or an ordinary person drawn into a pornographic scenario, then trust in reality itself is undermined. It’s no longer just a matter of protecting individual reputations, but of preserving social cohesion and the ability to distinguish what’s real from what’s artificially constructed.
In this sense, combating deepfakes and the dissemination of stolen videos represents a true challenge for civilization. Law, technology, and culture are not enough: we need an alliance that brings them together, involving institutions, platforms, and citizens. Not only the dignity of individuals is at stake, but the quality of our democratic life.