Sam Altman Says Bots Are Making Social Media Feel Fake

Key Points
- Sam Altman posted on X expressing concern that bots are making social media feel fake.
- He pointed to a flood of Reddit posts about OpenAI Codex as potentially bot‑driven.
- Altman cited factors such as LLM‑style language, coordinated posting, platform engagement incentives, and astroturfing.
- Industry data shows over half of internet traffic in 2024 was non‑human, largely from AI outputs.
- The blend of human and AI content raises questions about authenticity, creator earnings, and coordinated messaging.
OpenAI CEO Sam Altman posted on X that the surge of bots is making it hard to tell whether social media posts are written by real people. He pointed to the flood of Reddit posts praising OpenAI's Codex, observing that the volume had grown so high it was difficult to know whether the comments came from genuine users or automated accounts. Altman suggested that several forces were combining to make online dialogue feel "fake": real users adopting the language patterns of large language models, coordinated posting behavior, platform incentives that reward engagement, and possible astroturfing.
Altman’s remarks came after he noticed that the r/Claudecode subreddit was filled with posts from self‑identified users announcing that they had migrated to Codex. He wondered how many of those posts were authentic, noting that the proliferation of bots could be inflating the apparent enthusiasm for the service.
Industry Data on Bot Traffic
To support his concerns, Altman pointed to industry findings that more than half of all internet traffic in 2024 was identified as non‑human, much of it driven by large language model outputs. The figure underscores the scale at which automated content now circulates across digital channels.
Potential Implications
The chief executive warned that the blending of human and AI‑generated content could distort public perception, affect creator monetization, and amplify coordinated messaging efforts. He also hinted that astroturfing, in which paid individuals or bots promote a narrative on behalf of a third party, might be influencing the conversation around OpenAI’s products.
Altman’s comments reflect broader anxieties within the tech community about the impact of increasingly sophisticated AI on the credibility of online platforms. As large language models grow more adept at mimicking human writing, distinguishing genuine user sentiment from automated amplification is an escalating challenge for platforms and users alike.