Comment by qubex 5 days ago

About a month ago I had a rather annoying task to perform, and I found an NPM package that handled it. I threw “brew install npm” or whatever onto the terminal and watched a veritable deluge of dependencies download and install. Then I typed ‘npm ’ and my hand hovered over the keyboard after the space as I suddenly thought long and hard about where I was on the risk/benefit curve. Then I backspaced and typed “brew uninstall npm” instead, and eventually strung together an old-school Unix utilities pipeline with some awk thrown in. Probably the best decision of my life, in retrospect.

sigmoid10 5 days ago

This is why you want containerisation or, even better, full virtualisation. Running programs built on node, python or any other ecosystem that makes installing tons of dependencies easy (and thus frustratingly common) on your main system where you keep any unrelated data is a surefire way to get compromised by the supply chain eventually. I don't even have the interpreters for python and js on my base system anymore - just so I don't accidentally run something in the host terminal that shouldn't run there.
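A minimal sketch of the idea with Docker (the image tag and mount layout are illustrative):

    # Run npm from a throwaway container; only the current project
    # directory is visible to it, not the rest of the host.
    docker run --rm -it \
      -v "$PWD":/app \
      -w /app \
      node:22-alpine \
      npm install

The obvious limit: a malicious postinstall script can still read and tamper with everything inside the mounted project directory, so the container only protects the rest of the machine.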

  • Glemkloksdjf 5 days ago

    No, that's not what I want; it's what I need when I use something like npm.

    Which can't be the right way.

    • ndriscoll 5 days ago

      Why not? Make a bash alias for `npm` that runs it with `bwrap` to isolate it to the current directory, and you don't have to think about it again. Distributions could have a package that does this by default. With nix, you don't even need npm in your default profile, and can create a sandboxed nix-shell on the fly so that's the only way for the command to even be available.

      Most of your programs are trusted, don't need isolation by default, and are more useful when they have access to your home data. npm is different. It doesn't need your documents, and it runs untrusted code. So add the 1 line you need to your profile to sandbox it.
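      As a sketch (assuming bubblewrap is installed; bind paths vary by distro), the wrapper could look like this shell function:

          # Sandbox npm to the current directory. $HOME is not bound, so
          # install scripts can't read your documents; --share-net keeps
          # the network so npm can still download packages.
          npm() {
            bwrap \
              --ro-bind /usr /usr \
              --symlink usr/bin /bin \
              --symlink usr/lib /lib \
              --symlink usr/lib64 /lib64 \
              --ro-bind /etc /etc \
              --proc /proc \
              --dev /dev \
              --tmpfs /tmp \
              --bind "$PWD" "$PWD" \
              --chdir "$PWD" \
              --unshare-all \
              --share-net \
              --die-with-parent \
              --setenv npm_config_cache "$PWD/.npm-cache" \
              /usr/bin/npm "$@"
          }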

    • godzillabrennus 5 days ago

      The right way (technically) and the commercially viable way are often diametrically opposed. “Ship first, ask questions later” (or “move fast and break things”) wins.

  • naikrovek 5 days ago

    Here I go again: Plan9 had per-process namespaces in 1995. The namespace for any process could be manipulated to see (or not see) any parts of the machine that you wanted or needed.

    I really wish people had paid more attention to that operating system.
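    For the curious, the Plan9 version was a couple of commands in rc, its shell (from memory, with a made-up path, so treat the details as approximate):

        # Give this shell a private copy of its namespace, then rebind
        # /bin so child processes see only a curated set of binaries.
        rfork n
        bind /usr/glenda/sandbox/bin /bin

    No daemon, no container runtime; the namespace is simply a property of the process.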

    • nyrikki 5 days ago

      The tooling for that exists today in Linux, and it is fairly easy to use with podman etc.

      K8s's choices cloud that a little, but as an example, for VS Code completions I have a pod that systemd launches on request.

      I have nginx receive the socket from systemd, and it communicates with llama.cpp through a socket on a shared volume. Since nginx inherits the socket from systemd, it doesn't have internet access either.

      If I need a new model I just download it to a shared volume.

      Llama.cpp has no internet access at all, and is usable on an old 7700k + 1080ti.
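      The systemd side is roughly this pattern (unit names, paths, and the server binary are made up here, and a server that doesn't speak systemd's socket protocol needs a shim such as systemd-socket-proxyd):

          # api.socket -- systemd owns the listener and starts the
          # service on the first connection.
          [Socket]
          ListenStream=/run/api/api.sock

          [Install]
          WantedBy=sockets.target

          # api.service -- inherits the socket fd. PrivateNetwork=yes
          # removes all network access; the inherited fd still works
          # because systemd created it in the host's network namespace.
          [Service]
          ExecStart=/usr/local/bin/api-server
          PrivateNetwork=yes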

      People thinking that the k8s concept of a pod, with shared UTS, net, and IPC namespaces, is all a pod can be confuses the issue.

      The same unshare() that runc uses works much like clone() dropping the parent's IPC and other namespaces…
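      You can get a feel for that primitive with util-linux's unshare, no container runtime involved:

          # Unprivileged: fresh user, net, IPC, UTS, PID, and mount
          # namespaces. Inside, only loopback exists, the hostname is
          # private, and /proc shows just this process tree.
          unshare --user --map-root-user --net --ipc --uts --pid --fork \
              --mount-proc sh -c 'hostname sandbox; hostname; ip link'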

      I should probably spin up a blog on how to do this as I think it is the way forward even for long lived services.

      The information is out there but scattered.

      If it is something people would find useful please leave a comment.

      • naikrovek 5 days ago

        You are missing my point, maybe.

        Plan9 had this by default in 1995, no third-party tools required. You launch a program and it gets its own namespace; by default it is a child of the namespace that launched it.

        I should not have to read anything to have this. Operating systems should provide it by default. That is my point. We have settled for shitty operating systems because it’s easier (at first glance) to add stuff on top than it is to have an OS provide these things. It turns out this isn’t easier, and we’re just piling shit on top of shit because it seems like the easiest path forward.

        Look at how many lines of code are in Plan9, then look at how many are in Docker or Kubernetes. It is probably easier to write an operating system with the features you desire than to write an application-level operating system like Kubernetes that provides those features on top of the OS. That is likely because an application-level system like Kubernetes must comply with the existing reality of the operating system it runs on, while an actual operating system running on hardware gets to define the reality it presents to the applications atop it.

      • rafterydj 5 days ago

        This sounds very interesting to me. I'd read through that blog post, as I'm working on expanding my K8s skills; as you say, the knowledge is very scattered!

      • naikrovek 4 days ago

        > If it is something people would find useful please leave a comment.

        I would love to know.

  • gizmo686 4 days ago

    That can only go so far. Assuming there is no container/VM escape, most software is built to be used. You can protect yourself from malicious dependencies in the build step, but at some point you are going to do a production build that needs to run on a production system with access to production data. If you do not trust your supply chain, you need to fix that.

    Now if you'll excuse me, I have a list of 1000 artifacts I need to audit before importing into our dependency store.

  • bitfilped 3 days ago

    Containers don't help much when you deploy malware into your systems. Containers are not, and never will be, security tools on Linux; they lack many of the primitives needed to pull off that kind of functionality.

  • wasmainiac 3 days ago

    Which distro do you run? Python is part of the OS in many cases.

    It's a fair angle you're taking here, but I would only expect to see it on hardened servers.

  • estimator7292 5 days ago

    Why think about the consequences of your actions when you can use docker?

  • rkagerer 4 days ago

    Ok, but if you distrust the library so much it needs to go in a VM, what the hell are you doing shipping it to your customers?

  • baq 5 days ago

    ...but the GitHub runners already are virtualized; you'd need to virtualize the secrets they have access to instead.

2OEH8eoCRo0 5 days ago

It's funny because techies love to tell people that common sense is the best antivirus, don't click suspicious links, etc., only to download and execute a laundry list of unvetted dependencies with a keystroke.

philipwhiuk 5 days ago

The lesson, surely, is: don't use web tech, aimed at solving browser-incompatibility issues, for local scripting.

When you're running NPM tooling, you're running libraries primarily built for those problems, hence the torrent of otherwise unnecessary complexity (polyfills among it) that happens to be running on a JS engine with no browser attached to it.

  • bakkoting 5 days ago

    Very few packages published on npm include polyfills, especially packages you'd use when doing local scripting.

    • jazzypants 4 days ago

      I'm sorry, but this is just incorrect. Have you ever heard of ljharb[0]? The NPM ecosystem is rife with polyfills[1]. I don't know how you can draw a distinction around which libraries would be used for "local scripting", as I don't think many library authors make that distinction.

      [0] - TC39 member who is self-described as "obsessed with backwards compatibility": https://github.com/ljharb

      [1] - Here's one of many articles describing the situation: https://marvinh.dev/blog/speeding-up-javascript-ecosystem-pa...

      • bakkoting 4 days ago

        Yes. I'm on TC39 as well, and I've talked to Jordan about this topic.

        It's true that there are a few people who publish packages on npm including polyfills, Jordan among them. But these are a very small fraction of all packages on npm, and none of the compromised packages were polyfills. Also, he cares about backwards compatibility _with old versions of node_; the fact that JavaScript was originally a web language, as the grandparent comment says, is completely irrelevant to the inclusion of those specific polyfills.

        Polyfills are just completely irrelevant to this discussion.

        • jazzypants 4 days ago

          Fair enough. Thank you for the clarification, and I apologize for not recognizing your status as a TC39 member.

kubafu 5 days ago

Same story from a month ago: the moment I saw the sheer number of dependencies Artillery wanted to pull in, I gave up.

jwr 5 days ago

I used to run npm only inside docker containers, and I've been regularly laughed at on these forums. I eventually gave up…

  • qubex 5 days ago

    “Whenever you find yourself on the side of the majority, it is time to pause and reflect.” — Mark Twain (supposedly)