Comment by tartakovsky 3 days ago

2-ish questions:

Is this level of fear typical or reasonable? If so, why don't Anthropic / other AI code-gen providers offer this type of service? Hard to believe Anthropic is not secure in some sense — like what if Claude Code is already inside some container-like thing?

Is it actually true that Claude cannot bust out of the container?

pxc 3 days ago

> Is this level of fear typical or reasonable?

Just a month ago, an AI coding agent deleted all the files on someone's computer and there was a little discussion of it here on HN. Support's response was basically "yeah, this happens sometimes".

forum post: https://forum.cursor.com/t/cursor-yolo-deleted-everything-in...

HN thread (flagged, probably because it was a link to some crappy website that restates things from social media with no substantive content of its own): https://news.ycombinator.com/item?id=44262383

Idk how Claude Code works in particular, though.

  • wongarsu 3 days ago

    It's worth noting that Cursor's default settings do prevent this by asking you to confirm every command that is run. And when you get tired of that five minutes in and switch to auto-approving, there is still protection against deleting files outside the work directory. The story above is about someone who disabled all the safeguards because they were inconvenient; then bad things happened.

    It is a good example of "bad things can happen", but when talking about whether we need additional safeguards the lessons are less clear. And while I'm not as familiar with the safeguards of Claude Code, I'm assured it also has some by default.

kxrm 3 days ago

I haven't found that to be the case. I have used Claude Code both within a container and on the host machine, and it has been fine. Any command that could cause changes to your system you MUST approve when using it in agent mode.

Revisional_Sin 3 days ago

You either approve each command manually, or you let it run commands autonomously. If you let it run anything it likes, you risk it doing something stupid (mainly deleting files).

You also have MCP tools running on your machine, which might have security issues.

extr 3 days ago

I have personally never seen claude (or actually any AI agent) do anything that could not be fixed with git. I run 24/7 in full permissions bypass mode and hardly think about it.

  • swayson 3 days ago

    Correlation does not equal causation, as the old adage goes. Just because you haven't seen the pattern doesn't mean it can't happen.

    It is like insurance, 99.95% of the time you don't need it. But when you do, you wish you had it.

photonthug 3 days ago

> Is this level of fear typical or reasonable?

Anyone with more than one toolbox knows that fear isn't required. Containers are about more than security, including stuff like organization and portability.

> If so, why doesn’t Anthropic / AI code gen providers offer this type of service?

Well, perhaps I'm too much the cynic, but I'm sure you can imagine why a lack of portability and reproducibility is pretty good for vendors. A lack of transparency also puts the "100x!" zealots, the vendors, and many other people in a natural conspiracy together: it benefits them to drum up FOMO, while everyone else burns time/money trying to figure out how much of the hype is real. People who are new to the industry get leverage by claiming all existing knowledge does not matter, experienced workers looking to pivot into a new specialization in a tough job market benefit from making unverifiable claims, vendors make a quick buck while businesses buy-to-try and forget to cancel the contract, etc. etc.

> Is it actually true that Claude cannot bust out of the container?

Escaping containers is something a lot of people in operations and security spent a lot of time thinking about long before agents and AI. Container escape is possible and deadly serious, but not really in this domain: your banks and utility providers are probably all running Kubernetes, and compared to that, who cares about maybe leaking source or destroying data on local dev machines, or on platforms trying to facilitate low-code apps? AI does change things slightly, because people will run Ollama/MCP/IDEs on the host, and that's arguably some new attack surface to worry about. Sharing sockets and files for inter-agent comms is going to be routine even if everyone says it's bad practice. But of course you could containerize those things too, add a queue, containerize the unit tests, etc.

dannymi 3 days ago

>Is this level of fear typical or reasonable?

Of course, and also with regular customer projects, even without AI -- but having an idiot able to execute commands on your PC obviously makes the risk higher.

> If so, why doesn’t Anthropic / AI code gen providers offer this type of service?

Why? Separate the concerns. Isolation is a concern depending on my own risk appetite. I do not want stuff to decide on my behalf what's inside the container and what's outside. That said, they do have devcontainer support (like the article says).

>Hard to believe Anthropic is not secure in some sense — like what if Claude Code is already inside some container-like thing?

It's a Node program. It does ask you about every command it's gonna execute before it does it, though.
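If approving every command gets tedious, Claude Code also supports project-level permission rules in `.claude/settings.json`. A minimal sketch (the specific rule patterns here are illustrative, not a recommended policy):

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run test:*)",
      "Read(src/**)"
    ],
    "deny": [
      "Bash(rm -rf:*)"
    ]
  }
}
```

That way routine commands run without a prompt, and anything outside the allowlist still asks.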

>Is it actually true that Claude cannot bust out of the container?

There are (sporadic) container-escape exploits, but escaping is much harder than having no container at all.
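A minimal version of the container setup, assuming Docker and an image with the agent CLI installed (the image name is a placeholder; the flag matches Claude Code's full-auto mode):

```shell
# Throwaway container: the agent sees only the current repo, and the
# container is deleted when the session ends. Add network restrictions
# if you want, but remember the agent still needs to reach its API.
docker run --rm -it \
  -v "$PWD":/workspace \
  -w /workspace \
  my-agent-image \
  claude --dangerously-skip-permissions
```

The worst case is then bounded: it can trash the mounted repo (recoverable with git) but nothing else on the host.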

You can also use a qemu vm. Good luck escaping that.

Or an extra user account--I'm thinking of doing that next.
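The extra-user variant is cheap to set up (the account name is illustrative); the agent then runs with that user's permissions and cannot write to your own home directory:

```shell
# One-time setup: a dedicated low-privilege account.
sudo useradd --create-home agent

# Give that account access to the repo (e.g. via a shared group with
# group-write permissions), then run the agent as that user from the project:
sudo -u agent -H claude
```

Weaker than a container or VM — the agent still shares your kernel and can see world-readable files — but it stops the "deleted my home directory" failure mode.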