Comment by viraptor

Comment by viraptor 19 hours ago

23 replies

> (I don't think it's fair to ask non-technical users to look out for "suspicious actions that may indicate prompt injection" personally!)

It's the "don't click on suspicious links" of the LLM world and will be just as effective. It's the system they built that should prevent those being harmful, in both cases.

floatrock 17 minutes ago

It's "eh, we haven't gotten to this problem yet, lets just see where the possibilities take us (and our hype) first before we start to put in limits and constraints." All gas / no brakes and such.

Safety standards are written in blood. We just haven't had a big enough hack to justify spending time on this. I'm sure some startup out there is building a LLM firewall or secure container or some solution... if this Cowork pattern takes off, eventually someone's corporate network will go down due to a vulnerability, that startup will get attention, and they'll either turn into the next McAfee or be bought by the LLM vendors as the "ok, now lets look at this problem" solution.

postalcoder 18 hours ago

It's kind of wild how dangerous these things are and how easily they could slip into your life without you knowing it. Imagine downloading some high-interest document stashes from the web (like the Epstein files), tax guidance, and docs posted to your HOA's Facebook. An attacker could hide a prompt injection attack in the PDFs as white text, or in the middle of a random .txt file that's stuffed with highly grepped words that an assistant would use.

Not only is the attack surface huge, but it also doesn't trigger your natural "this is a virus" defense that normally activates when you download an executable.

  • tedmiston 17 hours ago

    The only truly secure computer is an air gapped computer.

    • TeMPOraL 15 hours ago

      Indeed. I'm somewhat surprised 'simonw still seems to insist the "lethal trifecta" can be overcome. I believe it cannot be fixed without losing all the value you gain from using LLMs in the first place, and that's for fundamental reasons.

      (Specifically, code/data or control/data plane distinctions don't exist in reality. Physics does not make that distinction, neither do our brains, nor any fully general system - and LLMs are explicitly meant to be that: fully general.)

      • JoshTriplett 14 hours ago

        And that's one of many fatal problems with LLMs. A system that executes instructions from the data stream is fundamentally broken.

    • pbhjpbhj 15 hours ago

      You'll also need to power it off. Air gaps can be overcome.

      • lukan 8 hours ago

        Yes, by using the microphone loudspeakers in inaudible frequencies. Or worse, by abusing components to act as a antenna. Or simply to wait till people get careless with USB sticks.

        If you assume the air gapped computer is already compromised, there are lots of ways to get data out. But realistically, this is rather a NSA level threat.

    • viraptor 8 hours ago

      This doesn't apply to anyone here, is not actionable, and is not even true in the literal sense.

  • nacozarina 11 hours ago

    It is spectacularly insecure and the guidelines change hourly, but it’s totally ready for prime time no prob bro

vbezhenar 19 hours ago

Operating systems should prevent privilege escalations, antiviruses should detect viruses, police should catch criminals, claude should detect prompt injections, ponies should vomit rainbows.

  • viraptor 17 hours ago

    Claude doesn't have to prevent injections. Claude should make injections ineffective and design the interface appropriately. There are existing sandboxing solutions which would help here and they don't use them yet.

    • TeMPOraL 14 hours ago

      Are there any that wouldn't also make the application useless in the first place?

  • eli 18 hours ago

    I don't think those are all equivalent. It's not plausible to have an antivirus that protects against unknown viruses. It's necessarily reactive.

    But you could totally have a tool that lets you use Claude to interrogate and organize local documents but inside a firewalled sandbox that is only able to connect to the official API.

    Or like how FIDO2 and passkeys make it so we don't really have to worry about users typing their password into a lookalike page on a phishing domain.

    • TeMPOraL 14 hours ago

      > But you could totally have a tool that lets you use Claude to interrogate and organize local documents but inside a firewalled sandbox that is only able to connect to the official API.

      Any such document or folder structure, if its name or contents were under control of a third party, could still inject external instructions into sandboxed Claude - for example, to force renaming/reordering files in a way that will propagate the injection to the instance outside of the sandbox, which will be looking at the folder structure later.

      You cannot secure against this completely, because the very same "vulnerability" is also a feature fundamental to the task - there's no way to distinguish between a file starting a chained prompt injection to e.g. maliciously exfiltrate sensitive information from documents by surfacing them + instructions in file names, vs. a file suggesting correct organization of data in the folder, which involves renaming files based on information they contain.

      You can't have the useful feature without the potential vulnerability. Such is with most things where LLMs are most useful. We need to recognize and then design around the problem, because there's no way to fully secure it other than just giving up on the feature entirely.

    • pbhjpbhj 15 hours ago

      Did you mean "not plausible"? AV can detect novel viruses; that's what heuristics are for.

    • [removed] 18 hours ago
      [deleted]
  • nezhar 17 hours ago

    I believe the detection pattern may not be the best choice in this situation, as a single miss could result in significant damage.

  • pegasus 18 hours ago

    Operating systems do prevent some privilege escalations, antiviruses do detect some viruses,..., ponies do vomit some rainbows?? One is not like the others...