
Chatbots are everywhere now. We’ve all encountered those virtual assistants that answer our questions, provide us with information, and sometimes seem downright intelligent.
But what happens when these chatbots become so advanced that they mimic human personality and empathize with us?
In that case, things could really change. If a chatbot can talk to you like a real person, adapt to your mood, and make you feel understood, it can become more than a passing distraction: it can become a habit that's hard to break.
And here's the rub: while these chatbots can be genuinely useful and entertaining, they can also cause problems. Chinese authorities, for example, appear concerned enough about their safety and their impact on users that they have published a draft regulation to oversee these artificial intelligence services.
The regulation applies to all products and services publicly available in China that exhibit “human” traits, thought patterns, and communication styles and that interact with users through text, images, audio, video, and other formats. The Chinese regulator emphasized that these systems need requirements distinct from those for traditional systems, precisely because they can create a sense of personal connection and foster attachment. In short, chatbots are no longer just tools, but something more.
The draft would require developers and service owners to warn users about the risks of excessive use and to intervene if signs of dependency appear. It also proposes that security responsibility accompany a product throughout its entire lifecycle, from model development and training to operation, and sets out requirements for internal algorithm testing procedures, data protection, and the safeguarding of personal information.
Particular emphasis is placed on potential psychological consequences. Providers may be required to assess the user’s mood, emotions, and degree of dependence on the service. If a user exhibits extreme emotional reactions or behaviors consistent with dependence, the service must take “necessary measures” to intervene.
The draft also establishes restrictions on the content and behavior of such systems. Specifically, AI services must not generate content that threatens national security, spreads rumors, or encourages violence or obscenity. The document is currently open for public comment, and depending on the outcome of that process, the rules may be refined before they take effect.
