sigmoid10 5 days ago

This is why you want containerisation or, even better, full virtualisation. Running programs built on node, python or any other ecosystem that makes installing tons of dependencies easy (and thus frustratingly common) on your main system where you keep any unrelated data is a surefire way to get compromised by the supply chain eventually. I don't even have the interpreters for python and js on my base system anymore - just so I don't accidentally run something in the host terminal that shouldn't run there.

Glemkloksdjf 5 days ago

No, that's not what I want; that's what I need when I use something like npm.

Which can't be the right way.

  • ndriscoll 5 days ago

    Why not? Make a bash alias for `npm` that runs it with `bwrap` to isolate it to the current directory, and you don't have to think about it again. Distributions could have a package that does this by default. With nix, you don't even need npm in your default profile, and can create a sandboxed nix-shell on the fly so that's the only way for the command to even be available.

    Most of your programs are trusted, don't need isolation by default, and are more useful when they have access to your home data. npm is different. It doesn't need your documents, and it runs untrusted code. So add the 1 line you need to your profile to sandbox it.
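
    The sketch, roughly (assuming a merged-/usr distro with bwrap installed; the exact binds vary per system, so treat this as illustrative):

      # ~/.bashrc: run npm inside a bubblewrap sandbox that sees only
      # the toolchain (read-only) and the current project directory
      npm() {
        bwrap \
          --ro-bind /usr /usr \
          --symlink usr/bin /bin \
          --symlink usr/lib /lib \
          --proc /proc \
          --dev /dev \
          --tmpfs /tmp \
          --bind "$PWD" "$PWD" \
          --chdir "$PWD" \
          --setenv HOME "$PWD" \
          --unshare-all \
          --share-net \
          --die-with-parent \
          npm "$@"
      }

    (--unshare-all drops every namespace bwrap can drop, --share-net turns networking back on so installs still work, and HOME is pointed at the project so npm's cache lands there instead of in your real home.)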

  • godzillabrennus 5 days ago

    The right way (technically) and the commercially viable way are often diametrically opposed. "Ship first, ask questions later" or "move fast and break things" wins.

naikrovek 5 days ago

Here I go again: Plan9 had per-process namespaces in 1995. The namespace for any process could be manipulated to see (or not see) any parts of the machine that you wanted or needed.

I really wish people had paid more attention to that operating system.

  • nyrikki 5 days ago

    The tooling for that exists today in Linux, and it is fairly easy to use with podman etc.

    K8s's choices cloud that a little, but as an example, for VS Code completions I have a pod that systemd launches on request.

    I have nginx receive the socket from systemd, and it communicates with llama.cpp through a socket on a shared volume. As nginx inherits the socket from systemd, it doesn't have internet access either.

    If I need a new model I just download it to a shared volume.

    Llama.cpp has no internet access at all, and is usable on an old 7700K + 1080 Ti.
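
    The core of it looks roughly like this (the image name, volume names, and socket path are illustrative, and the systemd socket-activation units are left out):

      # a pod whose containers get an empty network namespace (loopback only);
      # nginx talks to llama.cpp via a unix socket on the shared volume
      podman pod create --name llm --network none
      podman run -d --pod llm \
        -v models:/models:ro \
        -v llm-sock:/run/llm \
        my-llama-image   # hypothetical image serving on /run/llm/llama.sock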

    People thinking that the k8s concept of a pod, with shared UTS, net, and IPC namespaces, is all a pod can be confuses the issue.

    The same unshare call that runc uses works much like clone() dropping the parent's IPC namespace, etc.

    I should probably spin up a blog on how to do this as I think it is the way forward even for long lived services.

    The information is out there but scattered.

    If it is something people would find useful please leave a comment.

    • naikrovek 5 days ago

      You are missing my point, maybe.

      Plan9 had this by default in 1995, no third party tools required. You launch a program and it gets its own namespace; by default it is a child namespace of whatever namespace launched the program.

      I should not have to read anything to have this. Operating systems should provide it by default. That is my point. We have settled for shitty operating systems because it’s easier (at first glance) to add stuff on top than it is to have an OS provide these things. It turns out this isn’t easier, and we’re just piling shit on top of shit because it seems like the easiest path forward.

      Look at how many lines of code are in Plan9, then look at how many are in Docker or Kubernetes. It is probably easier to write an operating system with the features you desire than it is to write an application-level operating system like Kubernetes that provides those features on top of the OS. And that is likely because application-scope operating systems like Kubernetes have to comply with the existing reality of the operating system they run on, while an actual operating system running on hardware gets to define the reality it provides to the applications atop it.

      • nyrikki 5 days ago

        You seem to misunderstand what namespaces accomplished on plan9, and how it extended Unix concepts by assembling them in another way.

        As someone who actually ran plan9 over 30 years ago, I assure you that if you go back and look at it, the namespaces were intended to abstract away the hardware limitations of the time, to build distributed execution contexts out of a large assembly of limited resources.

        And if you have an issue with Unix sockets you would have hated it, as it didn't even have sockets; everything was about files.

        Today we have a different problem, where machines are so large that we have to abstract them into smaller chunks.

        Plan9 was exactly the opposite: when your local CPU was limited you would run the cpu command and use another host, and guess what, it handed your file descriptors to that other machine.

        The goals of plan9 are dramatically different than isolation.

        But the OSes you seem to hate so much implemented many of the plan9 ideas, like /proc, union file systems, message passing etc.

        Also note I am not talking about k8s in the above, I am talking about containers and namespaces.

        K8s is an orchestrator; the kernel functionality may be abstracted by it, but K8s is just a user of those plan9-inspired ideas.

        Netns, pidns, etc… could be used directly, and you can call unshare(2)[0] yourself, use an OCI runtime like crun, or use podman.

        Heck, you could use the ip(8) command and run your app in an isolated network namespace with a single command if you wanted to.
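
        For instance (run as root; purely illustrative):

          # give a shell its own network, pid, and mount namespaces
          unshare --net --pid --fork --mount-proc bash

          # or: create a named network namespace with ip(8) and run a command in it;
          # with nothing but a down loopback inside, the curl simply fails
          ip netns add isolated
          ip netns exec isolated curl https://example.com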

        You don’t need an API or K8s at all.

        [0] https://man7.org/linux/man-pages/man2/unshare.2.html

      • ElectricalUnion 5 days ago

        The fact that tools like docker, podman and bubblewrap exist and work shows that the OS supports it; it's just that using the OS APIs directly sucks. Otherwise the only "safe" implementations of such features would need a full software VM.

        If using software securely were really a priority, everyone would be rustifying everything, and running everything on separate physical machines with restrictive AppArmor, SELinux, TOMOYO and Landlock profiles, with mTLS everywhere.

        It turns out that in Security, "availability" is a very important requirement, and "can't run your insecure-by-design system" is a failing grade.

        • naikrovek 4 days ago

          > The fact that tools like docker, podman and bubblewrap exist and work points out that the OS supports it

          Only via virtualization in the case of macOS. Somehow, even Windows has native container support these days.

          A much more secure system can be made, I assure you. Availability is important, but an NPM package being able to scan every attached disk in its post-installation script and capture any cleartext credentials it finds is crossing the line. This isn’t going to stop with NPM, either.

          One can have availability and sensible isolation by default. Why we haven’t chosen to do this is beyond me. How many people need to get ransomwared because the OS lets some crappy piece of junk encrypt files it should not even be able to see without prompting the user?

    • rafterydj 5 days ago

      This sounds very interesting to me. I'd read through that blog post, as I'm working on expanding my K8s skills. As you say, the knowledge is very scattered!

    • naikrovek 4 days ago

      > If it is something people would find useful please leave a comment.

      I would love to know.

gizmo686 4 days ago

That can only go so far. Assuming there is no container/VM escape, most software is built to be used. You can protect yourself from malicious dependencies in the build step, but at some point you are going to do a production build that needs to run on a production system, with access to production data. If you do not trust your supply chain, you need to fix that.

If you'll excuse me, I have a list of 1000 artifacts I need to audit before importing them into our dependency store.

bitfilped 3 days ago

Containers don't help much when you deploy malware into your systems. Containers are not, and never will be, security tools on Linux; they lack many of the primitives needed to pull off that type of functionality.

wasmainiac 3 days ago

Which distro do you run? Python is part of the OS in many cases.

It’s a fair angle you’re taking here, but I would only expect to see it on hardened servers.

estimator7292 5 days ago

Why think about the consequences of your actions when you can use docker?

rkagerer 4 days ago

Ok, but if you distrust the library so much it needs to go in a VM, what the hell are you doing shipping it to your customers?

baq 5 days ago

...but the GitHub runners are already virtualized; you'd need to virtualize the secrets they have access to instead.