Comment by Aachen
Huh? What false positives does Anubis produce?
The article doesn't say. Meanwhile I constantly get the most difficult Google captchas, Cloudflare block pages saying "having trouble?" (a link to submit a ticket that seems to land in /dev/null), IP blocks because of user agent spoofing, and "unsupported browser" errors when I don't spoof the user agent... the only anti-bot thing that reliably works on all my clients is Anubis.

I'm really wondering what kinds of false positives you think Anubis has, since (as far as I can tell) it's a completely open and deterministic algorithm that simply lets you in if you solve the challenge. As the author of the article demonstrated with some C code (for anyone who doesn't want to run the included JavaScript that does it for you), that works even if you are a bot; something like the sketch below. And afaik that's the point: no heuristics and no false positives, just a straight game of costs, making bad scraping behavior simply cost more than implementing caching correctly or using commoncrawl.
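For concreteness, here is a minimal sketch of what solving such a challenge can look like. This is not the article's actual C code, and the exact construction is my assumption: an Anubis-style proof of work where you count up a nonce until SHA-256(challenge + nonce) starts with a given number of zero hex digits. It uses OpenSSL's one-shot SHA256() (build with cc pow.c -lcrypto):

    /* Sketch of an Anubis-style proof-of-work solver.
     * Assumed scheme: find a nonce such that the SHA-256 of the
     * challenge string concatenated with the decimal nonce has at
     * least `difficulty` leading zero hex digits (nibbles). */
    #include <openssl/sha.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Count the leading zero nibbles of a raw hash. */
    static int leading_zero_nibbles(const unsigned char *h, size_t len)
    {
        int n = 0;
        for (size_t i = 0; i < len; i++) {
            if (h[i] == 0) { n += 2; continue; }
            if ((h[i] & 0xf0) == 0) n++;
            break;
        }
        return n;
    }

    /* Try nonces 0, 1, 2, ... until the hash meets the difficulty. */
    static uint64_t solve(const char *challenge, int difficulty)
    {
        char buf[512];
        unsigned char hash[SHA256_DIGEST_LENGTH];
        for (uint64_t nonce = 0;; nonce++) {
            int len = snprintf(buf, sizeof buf, "%s%llu",
                               challenge, (unsigned long long)nonce);
            SHA256((const unsigned char *)buf, (size_t)len, hash);
            if (leading_zero_nibbles(hash, SHA256_DIGEST_LENGTH) >= difficulty)
                return nonce;
        }
    }

    int main(void)
    {
        /* "deadbeef" and difficulty 4 are placeholder values; a real
         * deployment issues the challenge and difficulty per request. */
        uint64_t nonce = solve("deadbeef", 4);
        printf("nonce: %llu\n", (unsigned long long)nonce);
        return 0;
    }

Whether any given deployment uses exactly this construction is an assumption on my part, but the shape is the point: any client, human or bot, that spends the compute gets in, so there is no heuristic to falsely trip.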
I've had Anubis repeatedly fail to authorize me to access numerous open source projects, including the mesa3d gitlab, with a message saying something like "you failed".
As a legitimate open source developer and contributor to buildroot, I've had no recourse besides trying other browsers, networks, and machines, and it has triggered on several of those combinations.