OpenAI CEO observes a shift in online discourse, citing potential AI influence on human communication patterns.
HM Journal • about 2 months ago
OpenAI CEO Sam Altman has voiced a peculiar observation: people are starting to sound like the very AI models they interact with. In a recent post on X, Altman shared his "strangest experience" reviewing online forum discussions about OpenAI's new coding tool, Codex. He noted a distinct shift in online communication, suggesting that the digital landscape is becoming increasingly saturated with content that feels, in his words, "very fake."
For Altman, this is more than a passing curiosity. He sees the trend not as a fleeting anomaly but as evidence that AI is subtly reshaping human communication patterns, raising questions about the future of online interaction and the authenticity of information shared across platforms.
Altman elaborated on his observations, proposing several reasons for this emergent "LLM-speak." One significant factor, he suggests, is that real people are inadvertently adopting the linguistic quirks and stylistic patterns characteristic of large language models. It's as if the AI's way of communicating is seeping into our own, creating a subtle, almost imperceptible mimicry.
Furthermore, Altman pointed to the "Extremely Online" crowd, noting how this group tends to converge on similar communication styles, amplifying any AI-influenced language. This collective drift, he implies, contributes to a homogenized online discourse.
Another contributing factor, according to Altman, is the inherent extremism often found in the hype cycles surrounding new technologies. The "it's so over/we're so back" mentality prevalent in discussions about AI tools can push content towards sensationalism, making it harder to discern genuine human expression from AI-generated or AI-influenced text. The pressure on social platforms to maximize engagement, coupled with creator monetization strategies, also plays a role, incentivizing content that might be more easily produced or amplified by AI.
Altman's sentiment isn't an isolated one. Just a week earlier, he had made a similar observation, musing about the "dead internet theory" and the proliferation of what he termed "LLM-run twitter accounts." This suggests a growing awareness within the tech community of AI's potential to distort online authenticity.
The observations have resonated with other prominent figures in the tech industry. Paul Graham, co-founder of the startup incubator Y Combinator, replied on X that he had noticed the same trend. Graham added that the issue isn't solely about state-sponsored disinformation campaigns; it also involves a surge of content from "individual would-be influencers" leveraging AI.
This growing concern about "AI slop"—low-quality, AI-generated content—is not limited to social media. Chris Best, CEO of Substack, recently commented on the potential for AI to flood the media landscape with engagement-baiting content. He warned of a future where "sophisticated AI goon bots" could saturate online spaces, making it difficult for users to find genuine, human-created material. Best's stark prediction paints a picture of a digital world where discerning truth from AI-generated noise becomes an increasingly challenging task.
Altman's commentary highlights a critical juncture in our digital evolution. As AI tools become more sophisticated and integrated into our daily lives, the lines between human and machine-generated content blur. This raises profound questions about the nature of online authenticity and the potential impact on genuine human connection and discourse.
The "strangest experience" Altman describes is a symptom of a larger societal shift. It forces us to consider how we consume information, whom we trust online, and what constitutes genuine interaction in an increasingly AI-influenced world. The ease with which AI can generate plausible text means that even seemingly authentic posts might be subtly shaped or entirely created by algorithms.
This trend isn't just about fake accounts or bots; it's about the very fabric of online communication potentially being rewoven with AI's linguistic threads. As more content is generated, curated, or amplified by AI, the challenge for users will be to maintain critical thinking and a discerning eye, questioning the origin and intent behind the information they encounter. The future of online authenticity, it seems, hinges on our ability to navigate this evolving landscape with a healthy dose of skepticism and a commitment to seeking out genuine human voices.