Redazione RHC : 10 September 2025 09:18
We are taking giant strides toward a true ouroboros, the snake that eats its own tail. As we discussed a few weeks ago, human traffic on the internet is declining dramatically relative to bot traffic, which now exceeds 50% of the total.
Sam Altman, CEO of OpenAI and a Reddit shareholder, has confessed that the “AI Twitter” and “AI Reddit” feeds seem increasingly unnatural to him, to the point of becoming a genuine alarm bell.
The signal came from the subreddit r/Claudecode, where in recent days many users have reported switching to Codex, OpenAI’s programming service launched in May to compete with Anthropic’s Claude Code. The waves of nearly identical posts made Altman suspect he was dealing with bot-generated content, even though he knew Codex’s growth was real and that there was a genuine community dynamic behind it.
Analyzing his unease, Altman identified several factors: language increasingly similar to that of AI models; “always-online” communities that move en masse with uniform tones; the emotional swings typical of hype cycles, from total pessimism to euphoria; and the amplification provided by recommendation algorithms and platform monetization. Added to this are covert campaigns orchestrated by companies and, of course, the presence of actual bots.
The paradox is evident: OpenAI models were trained on Reddit content and designed to appear human, even imitating punctuation marks like em dashes. Altman, who was also a member of Reddit’s board of directors and is still a shareholder, thus finds himself dealing with an ecosystem where authenticity and automation are blurred.
Doubts grew after the launch of GPT-5, when OpenAI communities on Reddit and X criticized the model, accusing it of consuming too many credits and failing to complete certain tasks. Altman responded with an AMA on r/GPT, acknowledging the issues and promising fixes, but community trust has not returned to its previous levels. This raises a crucial question: where does genuine user reaction end, and where does automatically generated text begin?
The bot phenomenon is now systemic. According to Imperva, by 2024 more than half of internet traffic did not come from humans, and a significant portion was generated by LLM-based systems. Grok, X’s AI, likewise estimates hundreds of millions of bot accounts, though without providing details. Against this backdrop, some observers read Altman’s words as a hint at a possible “OpenAI social network,” early tests of which are said to have surfaced in 2025, although it remains unclear whether the project will materialize or whether it would truly guarantee a bot-free space.
Meanwhile, scientific research shows that even communities composed entirely of bots tend to reproduce human dynamics: a University of Amsterdam study found that such networks end up splitting into clans and echo chambers. The line between human and artificial writing is therefore increasingly blurred. Altman is simply pointing at the symptom of a deeper problem: on platforms where people and models imitate each other, everyone ends up speaking with a single voice. Distinguishing authenticity becomes ever harder, driving up the price of trust even when real growth lies behind it.