
Sam Altman, co-founder and CEO of OpenAI and one of the most influential figures in the AI industry, has raised concerns about the state of social media. According to him, bots are increasingly making online platforms feel “fake” and unreliable. His remarks, shared on X (formerly Twitter), highlight the growing difficulty of distinguishing authentic human content from posts created or amplified by AI.
Altman’s reflections came after he read discussions on r/ClaudeCode, a subreddit dedicated to Anthropic’s Claude Code. The community has recently been flooded with posts from users claiming to have switched to OpenAI’s competing tool, Codex. The repetitive nature of these posts led one Reddit user to joke: “Can one switch to Codex without posting a topic on Reddit?”
Although Altman acknowledged that Codex adoption has been strong, he admitted that he still felt as though he were surrounded by bots. “It’s all fake/bots, I think, but I know Codex growth is real,” he wrote, expressing how hard it has become to tell human content apart from automated or manipulated activity.
Why Social Media Feels Artificial
Altman identified several factors that make today’s online platforms feel less genuine:
- Human adoption of AI language: People now mimic the style of large language models (LLMs) after interacting with AI assistants.
- Community echo chambers: Active online groups often imitate each other, creating highly correlated behaviors.
- Extreme hype cycles: Tech fandoms rapidly swing between enthusiasm and disappointment, fueling groupthink.
- Platform incentives: Social media algorithms reward posts that maximize engagement, encouraging repetitive and formulaic content.
- Astroturfing campaigns: Companies or contractors may plant posts, making it difficult to trust organic discussions.
Altman summed it up bluntly: “AI Twitter/AI Reddit feels really fake in a way it didn’t a year or two ago.”
The Irony of AI-Created Fakeness
His warning carries a certain irony: large language models such as OpenAI’s GPT series were designed to mimic human communication. Many of these systems were trained on Reddit and other online platforms, shaping the very conversational style that users now find indistinguishable from AI-generated text.
The problem is not limited to AI communities. Any online group can devolve into an echo chamber or become saturated with low-quality content. Altman also noted that creator monetization programs push users toward behaviors that resemble bots, further eroding the sense of authenticity online.
Bots, Astroturfing, and Public Trust
Altman even suggested that some of the pro-OpenAI comments in subreddits might be the result of astroturfing — organized campaigns designed to appear like grassroots support. While no direct evidence has been presented, the suspicion reflects growing skepticism about online authenticity.
That skepticism is reinforced by data. Cybersecurity firm Imperva reported that over half of global internet traffic in 2024 came from non-human actors, including bots powered by AI. X’s AI assistant Grok recently estimated that the platform alone hosts hundreds of millions of bots.
The issue has also hit OpenAI’s own communities. When GPT-5 launched, the reaction on Reddit and X was not widespread praise but waves of criticism about the model’s reliability, personality, and usage limits. Altman acknowledged these issues in a Reddit AMA, promising improvements, but the backlash deepened the perception that discussions were “fake” or manipulated.
Could OpenAI Launch a Bot-Free Network?
Altman’s remarks have sparked speculation about OpenAI’s rumored plans to build its own social media platform. Earlier this year, The Verge reported that the company was exploring ways to compete with X and Facebook, though details remain scarce.
If OpenAI does launch a network, could it realistically be free of bots? Past experiments suggest otherwise. Researchers at the University of Amsterdam once built a social network populated entirely by bots, only to find that the bots quickly formed cliques and echo chambers, much as human users do.
The Bigger Question
Altman’s concerns highlight the paradox of today’s internet. Large language models, including those created by OpenAI, have blurred the line between human and machine-generated communication. Bots and algorithm-driven engagement make platforms feel increasingly artificial, while human users themselves adopt communication styles that resemble AI output.
The real issue may not be removing bots entirely, but rather redesigning platforms to promote authenticity, build trust, and reduce the pressure for engagement at any cost. Whether OpenAI will take a direct role in shaping the future of online communities is still unclear. Yet Altman’s comments show that even AI’s pioneers are uneasy about the digital ecosystems they helped create.