Foundation warns of existential risk as generative AI diverts users from critical knowledge source.
Nguyen Hoai Minh • 18 days ago
The Wikimedia Foundation is sounding the alarm, stating unequivocally that the proliferation of AI bots and automated summaries is actively undermining Wikipedia's traffic. This isn't just about website numbers; it's about the very health of open knowledge, a point Senior Director of Product Marshall Miller recently highlighted. The foundation has observed an 8 percent year-over-year drop in page views, a significant decline attributed directly to generative AI and to how people now seek information.
This isn't some theoretical concern; the numbers are stark. After implementing better bot detection earlier in 2025, Wikimedia could finally get a clearer picture of human traffic versus the incessant hum of AI crawlers. What they found was that quantifiable 8% dip in page views. Miller points to large language model (LLM) chatbots and AI-generated snippets in search results as the primary culprits: they pull information, often directly from Wikipedia, and serve it up to users without requiring a click-through to the source. It's convenient for the user, sure, but it bypasses the wellspring of that knowledge. And, frankly, it's hitting Wikipedia hard.
Think about it. We've all seen those AI overviews in search, right? They give you an answer right at the top. Handy. But if that answer comes from Wikipedia, and you don't click to verify or learn more, Wikipedia loses a page view. This isn't just a handful of queries; AI summaries now handle a quarter of informational searches globally. No wonder traffic is hurting.
It's a tricky situation because the internet has always had bots, but these new AI crawlers are a different beast. Their sophistication makes distinguishing them from human visitors a continuous challenge, and they consume server resources, too. Reports suggest these AI bots now account for 15-20% of Wikipedia's server requests, a sharp rise from previous years. So not only are they siphoning off traffic, they're also adding to infrastructure costs. It's a double whammy for a non-profit that relies on its volunteer community and on donations.
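To make that detection problem concrete, here is a minimal, purely illustrative sketch (not Wikimedia's actual pipeline) of the naive approach: label traffic by user-agent string and request rate. The crawler names and the rate threshold below are assumptions chosen for the example.

```python
# Illustrative sketch only -- NOT Wikimedia's actual bot-detection logic.
# Labels a client as a declared AI crawler, a suspected bot, or likely human,
# based on user-agent strings and request rate. Crawler names and the rate
# threshold are assumptions for illustration.

# User-agent substrings of crawlers that identify themselves honestly.
DECLARED_AI_CRAWLERS = ("GPTBot", "ClaudeBot", "CCBot", "PerplexityBot", "Google-Extended")

# A client averaging more than this many requests per second looks automated.
SUSPICIOUS_REQS_PER_SEC = 5.0


def classify_request(user_agent: str) -> str:
    """Label a single request from its user-agent string."""
    if any(bot in user_agent for bot in DECLARED_AI_CRAWLERS):
        return "declared_ai_crawler"
    return "unknown"  # could be human, or a crawler spoofing a browser UA


def classify_client(requests: list[dict]) -> str:
    """Label a client from its request history.

    Each request dict is assumed to carry 'user_agent' and 'timestamp' (seconds).
    """
    if any(classify_request(r["user_agent"]) == "declared_ai_crawler" for r in requests):
        return "declared_ai_crawler"
    if len(requests) >= 2:
        duration = max(r["timestamp"] for r in requests) - min(r["timestamp"] for r in requests)
        rate = len(requests) / duration if duration > 0 else float("inf")
        if rate > SUSPICIOUS_REQS_PER_SEC:
            return "suspected_bot"  # too fast for a human reader
    return "likely_human"
```

The limitation is obvious: a crawler that throttles itself to human-like speeds and sends a stock browser user-agent sails straight through heuristics like these, which is precisely why Wikimedia needed the improved detection rolled out in 2025 to separate human readers from automated traffic.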
The irony isn't lost here, is it? Wikipedia's vast, human-curated knowledge base is what often trains these very LLMs. But when those LLMs then cannibalize the source’s traffic, it creates an unsustainable loop.
What's the solution, then? Miller isn't asking for AI to disappear. Instead, he's advocating for better digital citizenship from these platforms. "For people to trust information shared on the internet, platforms should make it clear where the information is sourced from and elevate opportunities to visit and participate in those sources," he writes. It's about intentionality: give users the chance to engage directly with the original content, to see the full context, to explore further. This is not some abstract idea; even Google's AI Overviews now include expandable source links, a small step, perhaps. Perplexity AI is likewise trying to differentiate itself with a 'source boost' feature.
It's a sensible proposition. If AI is going to lean so heavily on Wikipedia, the least it could do is consistently point users back. Otherwise, we risk a future where a handful of AI companies become the sole arbiters of knowledge, divorcing information from its nuanced, transparent, and verifiable origins. And for a truly open web, that’s a scary thought.