Comment by tptacek 2 days ago

This came up before (and this post links to the Tavis Ormandy post that kicked up the last firestorm about Anubis). Without myself shading the intent or the execution of Anubis, I want to say again, just from a CS perspective, that the PoW thing Anubis uses doesn't make sense.

Work functions make sense in password hashes because they exploit an asymmetry: attackers will guess millions of invalid passwords for every validated guess, so the attacker bears most (really almost all) of the cost.

Work functions make sense in antispam systems for the same reason: spam "attacks" rely on the cost of an attempt being so low that it's efficient to target millions of victims in the expectation of just one hit.

Work functions make sense in Bitcoin because they function as a synchronization mechanism. There's nothing actually valorous about solving a SHA2 puzzle, but the puzzles give the whole protocol a clock.

Work functions don't make sense as a token tax; there's actually the opposite of the antispam asymmetry there. Every bot request to a web page yields tokens to the AI company. Legitimate users, who far outnumber the bots, are actually paying more of a cost.
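For concreteness, the work function at issue is a hashcash-style client puzzle: the client grinds nonces until a hash meets a difficulty target, and the server verifies with a single hash. A minimal sketch (the challenge format and difficulty here are illustrative, not Anubis's actual wire protocol):

```python
import hashlib
import itertools

def solve_challenge(challenge: str, difficulty: int) -> int:
    """Grind nonces until SHA-256(challenge + nonce) starts with
    `difficulty` zero hex digits -- ~16**difficulty hashes on average."""
    prefix = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    """Verification is a single hash -- cheap for the issuing server."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# The client pays ~16**3 = 4096 hashes on average; the server pays one.
nonce = solve_challenge("example-challenge", 3)
assert verify("example-challenge", nonce, 3)
```

The point above is visible in the sketch: the solve/verify asymmetry favors the server, but the solving cost lands on every client equally, human or bot, so there is no asymmetry that selectively punishes the scraper.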

None of this is to say that a serious anti-scraping firewall can't be built! I'm fond of pointing to how YouTube addressed this very similar problem, with a content protection system built in Javascript that was deliberately expensive to reverse engineer and that could surreptitiously probe the precise browser configuration a request to create a new YouTube account was using.

The next thing Anubis builds should be that, and when they do that, they should chuck the proof of work thing.

mariusor 2 days ago

With all due respect, almost all I see in this thread is people looking down their noses at a proven solution and giving advice instead of doing the work. I can see how you are a _very important person_ with bills to pay and money to make, but at least have the humility to understand that the solution we've got is better than the solution that would be better if only someone else would think of it and build it.

  • tptacek 2 days ago

    You can't moralize a flawed design into being a good one.

    • mariusor 2 days ago

      How about into a "good enough one"?

      • tptacek 2 days ago

        Look, I don't care if you run Anubis. I'm not against "Anubis". I'm interested in the computer science of the current Anubis implementation. It's not great. It doesn't make sense. Those are descriptive observations, and you can't moralize them into being false; you need to present an actual argument.

  • yumechii 2 days ago

    [dead]

    • mariusor 2 days ago

      It's weird that you get offended by something which was not directed at you.

      "The work" is providing those better alternatives to anubis, that everyone in this thread except for Xe seem to know all about.

      The humility is about accepting the fact that the solution works for some people (the small site operators that get hammered by DDoSes and unethical LLM over-crawling), despite not being perfect. And if that inconveniences you as a user of those sites, which I imagine is what you mean by "user backlash", the solution for you is to stop going there, not to talk down to them for doing something about an issue that impacts them.

      • yumechii 2 days ago

        How am I offended? Did I accuse you of anything? I didn't even accuse Anubis of anything. You asked for the work; I posted the work and evidence to ground the discussion in "work", as you demanded.

gucci-on-fleek 2 days ago

> Work functions don't make sense as a token tax; there's actually the opposite of the antispam asymmetry there. Every bot request to a web page yields tokens to the AI company. Legitimate users, who far outnumber the bots, are actually paying more of a cost.

Agreed, residential proxies are far more expensive than compute, yet the bots seem to have no problem obtaining millions of residential IPs. So I'm not really sure why Anubis works—my best guess is that the bots have some sort of per-page time limit, and they haven't bothered to increase it for pages that use Anubis.

> with a content protection system built in Javascript that was deliberately expensive to reverse engineer and which could surreptitiously probe the precise browser configuration a request to create a new Youtube account was using.

> The next thing Anubis builds should be that, and when they do that, they should chuck the proof of work thing.

They did [0], but it doesn't work [1]. Of course, the Anubis implementation is much simpler than YouTube's, but (1) Anubis doesn't have dozens of employees who can test hundreds of browser/OS/version combinations to make sure that it doesn't inadvertently block human users, and (2) it's much trickier to design an open-source program that resists reverse-engineering than a closed-source program, and I wouldn't want to use Anubis if it went closed-source.

[0]: https://anubis.techaro.lol/docs/admin/configuration/challeng...

[1]: https://github.com/TecharoHQ/anubis/issues/1121

  • tptacek 2 days ago

    Google's content-protection system didn't simply make sure you could run client-side Javascript. It implemented an obfuscating virtual machine that, if I'm remembering right (I may be getting some of the details blurred with Blu-ray's BD+ scheme), built up a hash input of runtime artifacts. As I understand it, it was one person's work, not the work of a big team. The "source code" we're talking about here is clientside Javascript.

    Either way: what Anubis does now --- just from a CS perspective, that's all --- doesn't make sense.
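The "hash input of runtime artifacts" idea can be illustrated schematically. This is not Google's actual scheme (its whole point is that the checks are obfuscated and hard to enumerate), and the attribute names below are made up; the sketch only shows the shape of the technique: fold many environment observations into one digest, then compare it server-side against profiles of real browsers.

```python
import hashlib
import json

def fingerprint(artifacts: dict) -> str:
    """Fold many runtime observations into one digest. A scraper that
    forges the user agent but not the dozens of correlated engine
    quirks produces a digest that matches no known browser profile."""
    canonical = json.dumps(artifacts, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Illustrative probe results -- a real system collects far more,
# and hides *which* attributes it actually checks.
probe = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "timer_resolution_us": 100,
    "canvas_render_hash": "9f2c...",
    "float_quirk": repr(0.1 + 0.2),
}
print(fingerprint(probe))
```

Because the digest covers everything at once, lying about any single attribute changes the whole value, which is what makes piecemeal spoofing expensive.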

Gander5739 2 days ago

But YouTube can still be scraped with yt-dlp, so apparently it wasn't enough.

  • tptacek 2 days ago

    Preventing that wasn't the objective of the content-protection system. You'll have to go read up on it.