Comment by TIPSIO
Have you ever used any Anthropic AI product? You literally cannot do anything without big permission prompts, warnings, or an annoying always-on popup warning you about safety.
No, of course not. Well... apart from their API. That is a useful thing.
But you're missing the point. Yes, it is doing all this stuff with user consent. It's just that the user fundamentally cannot provide informed consent, as they seem to be out of their minds.
So yeah, technically, all those compliance checkboxes are ticked. That's just entirely irrelevant to the point I am making.
> It's just that the user fundamentally cannot provide informed consent
The user is an adult. They are capable of consenting to whatever they want, no matter how irrational it may look to you.
You just said the user is incapable of providing informed consent.
In any context, I really dislike software that prevents me from doing something dangerous in order to "protect" me. That's how we get iOS.
The user is an adult, they can consent to this if they want to. If Anthropic is using dark patterns to trick them that's a different story--that wouldn't be informed consent--but I don't think that's happening here?
Claude Code has a YOLO mode, and from what I've seen, a lot of heavy users use it.
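(For anyone who hasn't seen it, it's just a CLI flag whose name at least makes the tradeoff explicit; if I recall it correctly:

    claude --dangerously-skip-permissions

which skips every approval prompt for the session.)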
Fundamentally, any security mechanism that relies on users to read and intelligently respond to approval prompts is doomed to fail over time, even if the prompts are well designed. Approval fatigue kicks in, and people either start clicking through without reading or prefer systems that let them disable the warnings entirely (just as YOLO mode is a thing in Claude Code).