Comment by zaptrem
If people are truly concerned about crawlers hammering their 128 MB Raspberry Pi website, then a better solution would be to provide an alternative way for scrapers to access the data (e.g., voluntarily contribute a copy of their public site to something like Common Crawl).
If Anubis blocked crawler requests but helpfully redirected them to a giant tarball of every site using the service (with deltas or something to reduce bandwidth), I bet nobody would bother spending the time to automate cracking it, since doing so would be basically negative value. You could even make it a torrent so most of the bandwidth costs are paid by random large labs/universities.
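As a rough sketch of the redirect idea (the user-agent list and the archive URL here are made up, and a real deployment would use proper bot detection rather than a User-Agent substring check), the handler below sends anything that looks like a scraper to a whole-site dump instead of the HTML pages:

    # Minimal sketch of "redirect bots to a bulk archive": crawler-looking
    # requests get a 302 to a nightly tarball/torrent, everyone else gets
    # the normal page. UA tokens and archive URL are illustrative only.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CRAWLER_TOKENS = ("gptbot", "ccbot", "claudebot", "bytespider")  # example UAs
    ARCHIVE_URL = "https://example.org/dumps/site-latest.tar.zst"    # hypothetical dump

    class BulkRedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            ua = self.headers.get("User-Agent", "").lower()
            if any(tok in ua for tok in CRAWLER_TOKENS):
                # Point the crawler at the whole-site dump (or a .torrent)
                # instead of letting it walk every page.
                self.send_response(302)
                self.send_header("Location", ARCHIVE_URL)
                self.end_headers()
                return
            # Normal visitors get the regular page (placeholder body here).
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.end_headers()
            self.wfile.write(b"regular page content\n")

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), BulkRedirectHandler).serve_forever()

The dump could be regenerated nightly and seeded as a torrent, so serving it costs the site owner almost nothing.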
I think the real reason most people are so obsessed with blocking crawlers is that they want “their cut”… an imagined huge check from OpenAI for their fan fiction/technical reports/whatever.
No, this doesn’t work. Many of the affected sites already offer exactly that kind of bulk access, and it’s ignored. We’re talking about git forges, built around arguably the most standardised tool in the industry, where instead of just fetching the repository once, every single history revision of every single file gets recursively hammered to death through the web UI. The people spending the VC cash to make the internet unusable right now don’t know how to program. They especially don’t give a shit about being respectful. They just hammer all the sites, all the time, forever.
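The whole history is one standard fetch away; the scrapers instead hit a separate dynamic page for every (file, commit) pair. A rough sketch of the contrast (the repo URL is a placeholder):

    # The entire history of a repo is available in one standard fetch,
    # delta-compressed by git itself. Repo URL is hypothetical.
    import subprocess

    REPO = "https://git.example.org/some-project.git"  # placeholder forge URL

    # The respectful way: one clone, every revision of every file.
    subprocess.run(["git", "clone", "--mirror", REPO, "some-project.git"], check=True)

    # What the scrapers do instead (roughly): request a separate HTML page
    # for each (file, commit) pair, e.g.
    #   https://git.example.org/some-project/blob/<commit>/<path>
    # turning one cheap clone into millions of expensive dynamic requests.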