Comment by TIPSIO 21 hours ago

Have you ever used any Anthropic AI product? You literally cannot do anything without big permission prompts, warnings, or annoying always-on popups warning you about safety.

raesene9 20 hours ago

Claude Code has a YOLO mode, and from what I've seen, a lot of heavy users use it.

Fundamentally, any security mechanism that relies on users reading and intelligently responding to approval prompts is doomed to fail over time, even if the prompts are well designed. Approval fatigue will kick in, and people will either start clicking through without reading or prefer systems that let them disable the warnings entirely (which is exactly why YOLO mode exists in Claude Code).

  • TIPSIO 20 hours ago

    Yes, it basically does! My point was that I really doubt Anthropic will fail to make it clear to users that this is manipulating their computer.

    • fragmede 15 hours ago

      Users are asking it to manipulate their computer for them, so I don't think that part is being lost.

hypfer 21 hours ago

No, of course not. Well... apart from their API. That is a useful thing.

But you're missing the point. Yes, it is doing all this stuff with user consent. It's just that the user fundamentally cannot provide informed consent, as they seem to be out of their minds.

So yeah, technically all those compliance checkboxes are ticked. That's just entirely irrelevant to the point I'm making.

  • Wowfunhappy 20 hours ago

    > It's just that the user fundamentally cannot provide informed consent

    The user is an adult. They are capable of consenting to whatever they want, no matter how irrational it may look to you.

    • hypfer 20 hours ago

      Uh, yes?

      What does that refute?

      • Wowfunhappy 20 hours ago

        You just said the user is incapable of providing informed consent.

        In any context, I really dislike software that prevents me from doing something dangerous in order to "protect" me. That's how we get iOS.

        The user is an adult; they can consent to this if they want to. If Anthropic is using dark patterns to trick them, that's a different story--that wouldn't be informed consent--but I don't think that's happening here?