Red Hot Cyber
Cybersecurity is about sharing. Recognize the risk, combat it, share your experiences, and encourage others to do better than you.
China Regulates AI Chatbots with Human-like Interaction

1 January 2026 18:10

Chatbots are everywhere now. We’ve all encountered those virtual assistants that answer our questions, provide us with information, and sometimes seem downright intelligent.

But what happens when these chatbots become so advanced that they mimic human personality and empathize with us?

In that case, things could really change. If a chatbot can talk to you like a real person, adapt to your mood, and make you feel understood, it could become more than a distraction: a habit that is hard to break.

And here’s the rub: while these chatbots can be genuinely useful and entertaining, they can also cause problems. Chinese authorities, for example, appear concerned about their safety and their impact on users, so much so that they have published a draft regulation to oversee these artificial intelligence services.

The regulation applies to all products and services publicly available in China that exhibit “human” traits, thought patterns, and communication styles and that interact with users through text, images, audio, video, and other formats. The Chinese regulator emphasized that these systems call for requirements distinct from those applied to traditional systems, precisely because they can create a sense of personal connection and foster attachment. In short, chatbots are no longer just tools, but something more.

The draft proposes requiring developers and service owners to warn users of the risks of excessive use and to intervene if signs of dependency arise. It proposes that security responsibility should accompany a product throughout its entire lifecycle, from model development and training to operation. The document also includes requirements for internal algorithm testing procedures, data protection, and personal information protection.

Particular emphasis is placed on potential psychological consequences. Providers may be required to assess the user’s mood, emotions, and degree of dependence on the service. If a user exhibits extreme emotional reactions or behaviors consistent with dependence, the service must take “necessary measures” to intervene.

The draft also establishes restrictions on the content and behavior of such systems. Specifically, AI services must not generate content that threatens national security, spreads rumors, or encourages violence or obscenity. The document is currently open for public comment, and depending on the feedback received, the rules may be refined before their actual implementation.

  • AI Safety
  • artificial intelligence
  • chatbot security
  • China AI regulation
  • digital wellbeing
  • human-like chatbots
  • machine learning
  • Natural Language Processing
  • Tech News
  • user protection
The editorial staff of Red Hot Cyber is composed of IT and cybersecurity professionals, supported by a network of qualified sources who also operate confidentially. The team works daily to analyze, verify, and publish news, insights, and reports on cybersecurity, technology, and digital threats, with a particular focus on the accuracy of information and the protection of sources. The information published is derived from direct research, field experience, and exclusive contributions from national and international operational contexts.