AI crawlers cause Wikimedia Commons bandwidth demands to surge 50%
The Wikimedia Foundation says bandwidth consumption for multimedia downloads has surged by 50% since January 2024.
The reason, the outfit wrote in a blog post Tuesday, isn’t growing demand from knowledge-thirsty humans, but automated, data-hungry scrapers looking to train AI models. The upshot is that the Wikimedia Foundation’s site reliability team has to spend significant time and resources blocking crawlers to avert disruption for regular users. Last month, software engineer and open source advocate Drew DeVault bemoaned the fact that AI crawlers ignore “robots.txt” files that are designed to ward off automated traffic.
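For context, robots.txt is a plain-text file served at a site’s root that asks crawlers to stay away from some or all paths; compliance is entirely voluntary, which is why ignoring it is possible in the first place. Here is a minimal sketch of such a file, using a couple of well-known AI crawler user-agent strings as illustrative examples (they are not named in the article, and the disallowed path is hypothetical):

    # robots.txt -- served at https://example.org/robots.txt
    # Ask specific AI training crawlers to stay out entirely.
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    # Everyone else may crawl, but stay out of a (hypothetical) media dump path.
    User-agent: *
    Disallow: /media-dumps/

Nothing enforces these directives: well-behaved crawlers honor them, while the scrapers DeVault describes simply read the pages anyway.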