Redazione RHC : 20 October 2025 07:14
At first glance, the email seemed flawless.
A well-structured PagoPA payment reminder, with formal language, references to the Highway Code, and even a blue “Access the PagoPA Portal” button identical to the real one.
A masterpiece of social engineering, sent to us by Paolo Ben, so carefully crafted that it would appear authentic even to the most attentive eyes.
But then, like in a comic sketch, something broke.
Towards the end of the message, after the usual warnings about deadlines and penalties, a surreal section appears. Among the instructions “for better Gmail reception,” the email begins to mention SPF, DKIM, DMARC, mass mailing tests, and even mail-tester.com, a site used to check whether a newsletter will end up in spam.
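For context, SPF, DKIM, and DMARC are DNS-based email-authentication mechanisms that receiving servers like Gmail check before deciding whether a message lands in the inbox or in spam. A minimal sketch of what such records look like for a domain (all values here, including `example.com` and the selector `s1`, are illustrative placeholders, not taken from the scam):

```
; SPF: lists which servers are allowed to send mail for the domain
example.com.                IN TXT "v=spf1 include:_spf.example.com -all"

; DKIM: publishes the public key receivers use to verify message signatures
s1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."

; DMARC: tells receivers what to do when SPF/DKIM checks fail
_dmarc.example.com.         IN TXT "v=DMARC1; p=reject; rua=mailto:reports@example.com"
```

Tools like mail-tester.com essentially verify that records of this kind exist and align with the sending server, which is why an AI assistant asked to “improve deliverability” would produce exactly this sort of checklist.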
In short, the author of the message – evidently a somewhat absent-minded criminal – had forgotten to delete part of the prompt that his AI assistant had generated for him.
A mistake that reveals everything: the email was not only a scam, but one built with the help of a language model, probably an LLM (Large Language Model) such as ChatGPT, as most cybercriminals do today.
And just like many hasty users, the scammer copied and pasted the text “as is,” leaving behind technical suggestions for improving the deliverability of their campaign.
Basically, the machine told him “now add these settings to avoid spam”… and he obediently left the instructions in.
The episode shows a comical side of modern cybercrime: even criminals make mistakes, using AI to write more convincing emails, but they don’t always fully understand it.
And so, while trying to appear more intelligent, they end up sabotaging themselves with the digital equivalent of a banana peel. This is not an isolated case.
A few weeks ago, a well-known Italian print magazine published an article that ended with a sentence no one should ever have read:
“Do you want me to turn it into a newspaper article (with a headline, subheading, and journalistic layout) or into a more narrative version for an investigative magazine?”
It was the trail left by a ChatGPT prompt.
Same dynamic: a careless mistake, a too-quick copy-and-paste, and the machine reveals its hand.
Artificial intelligence is powerful, but not infallible.
And cybercriminals, however advanced, remain human: lazy, distracted, and sometimes even comical.
If it weren’t for the risk these scams pose, we’d almost be grateful: every now and then, they provide us with a little moment of digital comedy amidst the sea of phishing.
Do you want me to turn this article into a viral story for Google News?
You fell for it, huh?