Redazione RHC: 4 October 2025 08:32
The Cyberspace Administration of China has announced a two-month nationwide campaign, launched on July 24, aimed at countering the spread of false information published by so-called self-media.
The initiative, titled “Clear and Clean: Correcting ‘Self-Media’ from Publishing False Information,” aims to regulate the functioning of these platforms, cracking down on malicious manipulation, distortion of facts, and misleading speculation.
One of the program’s central aspects involves the use of artificial intelligence to create synthetic content, impersonate other people, or fabricate fake news related to sensitive social issues. In recent years, the rapid development of AI technologies has revolutionized the circulation of information, but has also raised new challenges.
A recent example occurred on July 20, when news began circulating of the alleged capsizing of a cruise ship in Yichang, Hubei province, with numerous passengers reportedly in the water.
After further investigation, the story was determined to be fake news generated by AI, accompanied by digitally manipulated images to make it more credible.
Compared with traditional fake news, AI-generated content is harder to recognize: texts, photos, and videos appear so realistic that ordinary citizens cannot verify their authenticity with common sense alone.
In sensitive sectors such as public safety or emergency management, such content can trigger mass panic and disrupt daily life.
Another risk factor is the low cost and high efficiency with which AI can generate large amounts of misinformation. This phenomenon erodes trust in the internet, reduces the space for quality content, and hinders the healthy development of the digital industry.
Countering artificial disinformation is not easy: content evolves rapidly, blurring the lines between fact and fiction, while identifying sources remains complex. The Cyberspace Administration's campaign therefore includes a series of targeted interventions.
In addition to regulations, platforms will have to invest in technological development to improve their ability to recognize and block disinformation, reducing its spread at the source. At the same time, authorities are calling for strengthening public education programs to increase citizens’ awareness and skills in identifying fake news.
Experts emphasize that fighting AI-based disinformation requires consistency and collaboration. The current campaign, though a time-limited operation, represents a step toward more stable governance, capable of moving from sporadic interventions to preventive, institutionalized strategies.
Success will depend on the joint work of regulators, digital platforms, industry associations, and user communities. Only a shared approach can ensure a more reliable and secure cyberspace.