Comment by postalcoder

Comment by postalcoder 18 hours ago

12 replies

It's kind of wild how dangerous these things are and how easily they could slip into your life without you knowing it. Imagine downloading some high-interest document stashes from the web (like the Epstein files), tax guidance, and docs posted to your HOA's Facebook. An attacker could hide a prompt injection attack in the PDFs as white text, or in the middle of a random .txt file that's stuffed with highly grepped words that an assistant would use.

Not only is the attack surface huge, but it also doesn't trigger your natural "this is a virus" defense that normally activates when you download an executable.

tedmiston 17 hours ago

The only truly secure computer is an air gapped computer.

  • TeMPOraL 15 hours ago

    Indeed. I'm somewhat surprised 'simonw still seems to insist the "lethal trifecta" can be overcome. I believe it cannot be fixed without losing all the value you gain from using LLMs in the first place, and that's for fundamental reasons.

    (Specifically, code/data or control/data plane distinctions don't exist in reality. Physics does not make that distinction, neither do our brains, nor any fully general system - and LLMs are explicitly meant to be that: fully general.)

    • JoshTriplett 14 hours ago

      And that's one of many fatal problems with LLMs. A system that executes instructions from the data stream is fundamentally broken.

      • TeMPOraL 14 hours ago

        That's not a bug, that's a feature. It's what makes the system general-purpose.

        Data/control channel separation is an artificial construct induced mechanically (and holds only on paper, as long as you're operating within design envelope - because, again, reality doesn't recognize the distinction between "code" and "data"). If such separation is truly required, then general-purpose components like LLMs or people are indeed a bad choice, and should not be part of the system.

        That's why I insist that anthropomorphising LLMs is actually a good idea, because it gives you better high-order intuition into them. Their failure modes are very similar to those of people (and for fundamentally the same reasons). If you think of a language model as tiny, gullible Person on a Chip, it becomes clear what components of an information system it can effectively substitute for. Mostly, that's the parts of systems done by humans. We have thousands of years of experience building systems from humans, or more recently, mixing humans and machines; it's time to start applying it, instead of pretending LLMs are just regular, narrow-domain computer programs.

  • pbhjpbhj 15 hours ago

    You'll also need to power it off. Air gaps can be overcome.

    • lukan 8 hours ago

      Yes, by using the microphone loudspeakers in inaudible frequencies. Or worse, by abusing components to act as a antenna. Or simply to wait till people get careless with USB sticks.

      If you assume the air gapped computer is already compromised, there are lots of ways to get data out. But realistically, this is rather a NSA level threat.

  • viraptor 8 hours ago

    This doesn't apply to anyone here, is not actionable, and is not even true in the literal sense.

nacozarina 11 hours ago

It is spectacularly insecure and the guidelines change hourly, but it’s totally ready for prime time no prob bro