Comment by Nextgrid 2 days ago
What problem is this trying to solve exactly?
If a computer (or “agent” in modern terms) wants to order you a pizza it can technically already do so.
The reason computers currently can’t order us pizza or book us flights isn’t a technical limitation; it’s that the pizza place doesn’t want to just sell you a pizza and the airline doesn’t want to just sell you a flight. Instead, they have an entire payroll of people whose salaries derive from wasting human time, more commonly known as “engagement”. In fact, those people get paid regardless of whether you actually buy anything, so their incentive is often to waste more of your time even at the cost of an actual purchase.
The “malicious” uses of AI that this very article refers to are mostly just that: computers/AI agents acting on behalf of humans to sidestep the “wasting human time” issue. Agents may issue more requests than a human user only because information is intentionally not presented to them in a concise, structured manner. If Dominos or Pizza Hut wanted to just sell pizzas, they could trivially publish an OpenAPI spec tomorrow for agents to consume, or even collaborate on an HPOP protocol (Hypertext Pizza Ordering Protocol) to which HPOP clients could connect (no LLMs even needed). But they don’t, because wasting human time is the whole point.
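To make the point concrete, a structured, agent-consumable order really is this simple. Here is a minimal Python sketch of what a hypothetical HPOP order payload might look like; the message shape, field names, and version string are all invented for illustration, not any real protocol:

```python
import json
from dataclasses import dataclass, asdict, field


@dataclass
class PizzaOrder:
    """Hypothetical HPOP order message. All field names are invented."""
    size: str
    delivery_address: str
    payment_token: str  # a tokenized payment reference, never raw card details
    toppings: list = field(default_factory=list)


def encode_order(order: PizzaOrder) -> str:
    """Serialize an order into the JSON body an agent would POST."""
    return json.dumps({"hpop_version": "0.1", "order": asdict(order)})


# An agent (or a plain script, no LLM required) can build and send this directly.
body = encode_order(
    PizzaOrder("large", "123 Example St", "tok_abc123", ["mushroom"])
)
```

No dark patterns, no upsell screens: a client that parses this schema can buy a pizza in one request.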
So why would any of these companies suddenly opt into this system? Companies that are after actual money and don’t profit from wasting human time are already ready and don’t have to do anything (if an AI agent is already throwing Bitcoin or valid credit card details at you to buy your pizzas, you are fine). Those that do profit from it have zero incentive to opt in, since they’d be trading “engagement” for old-school, boring money (who needs that nowadays, right?).
I understood this as a tool to fight botnet scraping: something that would add accountability to clients for how many requests they make.
I know that phrasing it as "large company Cloudflare wants to increase internet accountability" will make many people uncomfortable, and I think caution is good here. However, I also think the internet has a real accountability problem that deserves attention. That problem is bad enough that some solution is going to end up getting implemented. If so, the most pro-freedom approach may be to help design the solution rather than avoiding the conversation.
Bad ideas:
You're getting lots of bot requests, so you start demanding clients login to view your blog. It's anti-user, anti-privacy, very annoying, readership drops, everyone is sad.
Instead, what if your browser included your government id in every request automatically? Anti-user, anti-privacy, no browser would implement it.
This idea:
But ARC is a middle ground. Subsets of the internet band together (in this case, via Cloudflare) and strike a compromise with users: individual users register with Cloudflare, and Cloudflare gives each of them, say, a million tokens per month to request websites (or some scheme like this). I assume it would be sufficiently pro-social that the IETF and browsers would all agree to it, and that it would be transparent and completely privacy-respecting for normal users.
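The token accounting described above amounts to a per-client quota ledger. This is a toy sketch only; the class name, the registration flow, and the million-token quota are taken from the comment's own example, not from Cloudflare's actual design:

```python
class RequestTokenLedger:
    """Toy model of the per-user monthly token scheme described above.
    An issuer (e.g. Cloudflare, in the comment's example) grants each
    registered client a fixed quota; every request spends one token."""

    MONTHLY_QUOTA = 1_000_000  # "a million tokens per month"

    def __init__(self):
        self.balances = {}

    def register(self, client_id: str) -> None:
        """Grant a fresh monthly quota to a registered client."""
        self.balances[client_id] = self.MONTHLY_QUOTA

    def spend(self, client_id: str) -> bool:
        """Return True if the request is allowed; False if the quota is
        exhausted or the client never registered (no identity, no access)."""
        balance = self.balances.get(client_id, 0)
        if balance <= 0:
            return False
        self.balances[client_id] = balance - 1
        return True
```

The accountability comes from the registration step: a botnet can spread requests across many IPs, but each request still has to spend a token tied to some registered identity.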
We already sort of have some accountability today: it's "proof of bandwidth" and "proof of many unique IP addresses", but that's not well tuned. In fact, IP addresses destroy privacy for most people while doing very little to stop botnets.