The Wikimedia Foundation, the non-profit organization behind Wikipedia and its sister projects, has reported a significant operational challenge. Since the beginning of January 2024, the bandwidth required for multimedia downloads from Wikimedia Commons, its vast repository of freely licensed images, sounds, and videos, has increased by 50%. This surge places considerable strain on the foundation's resources, which rely heavily on donations to maintain global access to free knowledge.

Notably, the rise in data consumption is not driven by a growing public appetite for the educational and cultural content hosted on Commons. As detailed in a recent blog post by the foundation and reported by TechCrunch, the primary driver is automated artificial intelligence crawlers. These bots systematically scrape massive amounts of data, including the multimedia files on Commons, most likely to train large language models (LLMs) and other AI systems, and the volume of downloads they initiate far exceeds typical human usage patterns.

The implications are multifaceted. On one hand, the trend highlights the value of Wikimedia Commons as a diverse, multilingual, and freely accessible dataset: content contributed by volunteers worldwide, covering a vast range of subjects and often accompanied by useful metadata, is an attractive source of AI training material. On the other hand, uncontrolled scraping consumes substantial bandwidth, translating directly into higher operational costs for the Wikimedia Foundation. This raises critical questions about the responsibilities of AI companies that rely on open-access resources and about the sustainability of the platforms that provide them.
The surge underscores a growing tension between the open knowledge movement, which champions free access and reuse of information, and the AI industry's voracious need for data. While Wikimedia's content licenses generally permit reuse, the scale and nature of automated scraping by AI crawlers present unprecedented challenges. The foundation has not yet detailed specific countermeasures, but potential avenues could include refining crawler access policies, implementing stricter rate limits, or seeking partnerships with AI companies to ensure fair and sustainable use of its resources. The situation calls for a dialogue about how large-scale data consumers can contribute to the upkeep of the digital commons they benefit from.

Ultimately, the 50% bandwidth spike at Wikimedia Commons is a stark indicator of the hidden infrastructural costs of the rapid expansion of artificial intelligence. It emphasizes the need for greater transparency from AI developers about their data sources and scraping practices, and it prompts a broader discussion on establishing equitable frameworks for accessing and utilizing shared digital resources like those hosted by the Wikimedia Foundation. Ensuring the long-term health and accessibility of these vital knowledge repositories requires addressing the impact of automated systems proactively.
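The foundation has not described how it would enforce stricter rate limits, but one common mechanism behind such limits is the token bucket: each client earns request "tokens" at a steady rate, can burst up to a cap, and is throttled once the bucket is empty. The sketch below is a minimal, hypothetical Python illustration of that idea; the `check` helper, the per-client keying, and the rate parameters are assumptions for the example, not anything Wikimedia has announced.

```python
import time


class TokenBucket:
    """Minimal token-bucket rate limiter: a client may make `rate`
    requests per second on average, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit; a server would answer HTTP 429


# One bucket per client identifier (IP address, user agent, API key, ...).
buckets: dict[str, TokenBucket] = {}


def check(client_id: str, rate: float = 5.0, burst: float = 10.0) -> bool:
    """Return True if this client's request should be served."""
    bucket = buckets.setdefault(client_id, TokenBucket(rate, burst))
    return bucket.allow()
```

A crawler that fires hundreds of requests per second would exhaust its bucket almost immediately, while a human browsing image pages would rarely notice the limit, which is precisely the asymmetry a countermeasure like this exploits.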