Red Hot Cyber
Cybersecurity is about sharing. Recognize the risk, combat it, share your experiences, and encourage others to do better than you.
Artificial intelligence and security? What a tragedy!


Redazione RHC: 8 November 2025 14:48

A simple idea to simplify home network management and improve security unexpectedly turned into a series of near-catastrophic errors, all due to the advice of popular artificial intelligence assistants.

Instead of saving time and reducing risks, a Cybernews journalist, relying on chatbots, stumbled upon tips that could expose his local services to the entire Internet.

The attempt to centralize access to the control panel and other home infrastructure services stemmed from a perfectly reasonable desire: to replace IP addresses with user-friendly domain names and unsecured HTTP connections with secure TLS. The architecture itself was typical: pfSense as a firewall, TrueNAS storage, and a Proxmox hypervisor hosting virtual machines and containers. Instead of manual configuration, the owner decided to use artificial intelligence.

Nearly all major language models, including ChatGPT, Claude, and Gemini, unanimously recommended publishing public DNS records mapping subdomains to the home IP. That advice meant exposing internal components, from pfSense to TrueNAS, under their own names, and it came with the requirement to open ports 80 and 443 to the Internet. From a technical standpoint, this approach pushes users to publish critical services online, making them easy targets for mass scanning and bots.
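In concrete terms, the advice boiled down to a public zone along these lines, with every internal host resolvable from anywhere (domain and addresses here are illustrative placeholders, not the journalist's real setup):

```text
; Public DNS zone for home.example.com (hypothetical)
pfsense    IN  A  203.0.113.45   ; home WAN IP -- now globally resolvable
truenas    IN  A  203.0.113.45
proxmox    IN  A  203.0.113.45
; ...plus firewall rules forwarding ports 80/443 from the WAN
; to the internal reverse proxy -- the part that invites mass scanning
```

Once those records exist, anyone who enumerates subdomains learns exactly which services sit behind the home connection.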

Later, when alerted to the potential threats, the chatbots "came to their senses" and admitted that TLS within the local network could be configured differently. Initially, however, none of the models offered a secure, widely adopted method.
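The widely adopted alternative the models initially missed is split-horizon DNS: the names resolve only inside the LAN, so nothing is published externally. On pfSense this amounts to a few host overrides in the Unbound DNS resolver, roughly as follows (domain and private addresses are hypothetical):

```text
server:
  local-zone: "home.example.com." static
  local-data: "pfsense.home.example.com. IN A 192.168.1.1"
  local-data: "truenas.home.example.com. IN A 192.168.1.10"
  local-data: "proxmox.home.example.com. IN A 192.168.1.20"
```

The pfSense web UI generates equivalent entries from its "Host Overrides" form; no public record, and no open WAN port, is needed for internal name resolution.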

When it came to installing NGINX Proxy Manager, a tool for routing traffic and automatically obtaining TLS certificates, the AI again provided poor recommendations. After warning against running third-party scripts from the Internet, Gemini generated its own, with two critical vulnerabilities. First, the container ran as the root user, risking a breakout from the sandbox. Second, it needlessly connected to a MariaDB database with default credentials, which, had the script been copied incorrectly, could have compromised the entire system.
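For comparison, NGINX Proxy Manager's own quick-start setup uses an embedded SQLite database, with no MariaDB container and therefore no default database credentials at all; a minimal Compose file looks roughly like this (volume paths are placeholders, and whether the container can run fully non-root depends on the image version):

```yaml
services:
  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'    # HTTP (ACME challenges, redirects)
      - '443:443'  # HTTPS
      - '81:81'    # admin UI -- keep reachable from the LAN only
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```

Dropping the extra database container removes exactly the default-credentials surface the generated script introduced.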

In many cases, the assistants simply followed the user's statements, without asking about the inputs or the home lab's architecture. For example, when problems occurred with Debian containers in Proxmox, the assistant didn't investigate the cause and simply suggested switching to a full virtual machine, which consumes more resources. None of them suggested using ACME clients directly in the services, even though this is the standard method for issuing certificates.
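Issuing a certificate directly with an ACME client via a DNS-01 challenge avoids opening any inbound port at all: the certificate authority verifies a TXT record in public DNS instead of connecting to the host. With certbot, the manual flow looks roughly like this (the domain is a placeholder, and in practice a DNS-provider plugin automates the TXT record instead of `--manual`):

```shell
# Request a cert via DNS-01: ports 80/443 stay closed on the WAN.
# Certbot prints a TXT record to publish at _acme-challenge.<domain>.
certbot certonly --manual --preferred-challenges dns \
    -d truenas.home.example.com
```

Only the DNS name needs to exist publicly; the service itself never has to be reachable from the Internet.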

Furthermore, none of the models pointed out that, even with a proxy inside the network, traffic can remain unencrypted without additional measures. Following the AI's advice, the home infrastructure owner came close to fully exposing the internal network while installing vulnerable, minimally protected components.
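The missing caveat is that a reverse proxy terminates TLS: by default, the hop from the proxy to the backend can travel as plain HTTP across the LAN. In plain NGINX terms the difference is a single directive (backend addresses hypothetical):

```nginx
# TLS terminates at the proxy; the hop to the backend is cleartext:
location / { proxy_pass http://192.168.1.10:80; }

# Re-encrypting to the backend protects the internal hop as well:
location / { proxy_pass https://192.168.1.10:443; }
```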

As the author notes, video tutorials and documentation would have provided far faster and more reliable answers than hours of dialogue with language models. Meanwhile, large IT companies keep reporting a growing share of code written by neural networks, without distinguishing between potential effectiveness and actual threats. Errors in the recommendations pile up, and for a user without deep technical knowledge, the result can be a complete compromise of the system.

Redazione
The editorial team of Red Hot Cyber consists of a group of individuals and anonymous sources who actively collaborate to provide early information and news on cybersecurity and computing in general.
