dap a day ago

At my last job, we only updated dependencies when there was a compelling reason. It was awful.

What would happen from time to time was that an important reason did come up, but the team was now many releases behind. Whoever was unlucky enough to sign up for the project that needed the updated dependency now had to do all those updates of the dependency, including figuring out how they affected a bunch of software that they weren't otherwise going to work on. (e.g., for one code path, I need a bugfix that was shipped three years ago, but pulling that into my component affects many other code paths.) They now had to go figure out what would break, figure out how to test it, etc. Besides being awful for them, it creates bad incentives (don't sign up for those projects; put in hacks to avoid having to do the update), and it's also just plain bad for the business because it means almost any project, however simple it seems, might wind up running into this pit.

I now think of it this way: either you're on the dependency's release train or you jump off. If you're on the train, you may as well stay pretty up to date. It doesn't need to be every release the minute it comes out, but nor should it be "I'll skip months of work and several major releases until something important comes out". So if you decline to update to a particular release, you've got to ask: am I jumping off forever, or am I just deferring work? If you think you're just deferring the decision until you know if there's a release worth updating to, you're really rolling the dice.

(edit: The above experience was in Node.js. Every change in a dynamically typed language introduces a lot of risk. I'm now on a team that uses Rust, where knowing that the program compiles and passes all tests gives us a lot of confidence in the update. So although there's a lot of noise with regular dependency updates, it's not actually that much work.)

iainmerrick 13 hours ago

Yes, I’ve had exactly the same experience. Once you get off the dependency train, it’s almost impossible to get back on.

I don’t think this is specific to any one language or environment, it just gets more difficult the larger your project is and the longer you go without updating dependencies.

I’ve experienced this with NPM projects, with Android projects, and with C++ (neglecting to merge upstream changes from a private fork).

It does seem likely that dynamic languages make this problem worse, but I don’t think very strict statically typed languages completely avoid it.

lock1 a day ago

I think it also depends on the community. The last time I touched Node.js and JavaScript-related things, every time I tried to update something, it practically guaranteed something would explode for no reason.

Meanwhile, my recent legacy Java project migration from JDK 8 -> 21, plus a ton of dependency upgrades, has been a pretty smooth experience so far.

  • Terr_ a day ago

    Yeah, along with any community's attitudes to risk and quality, there is also a varying, er, chronological component.

    I'd prefer to upgrade around the time most of the nasty surprises have already been discovered by somebody else, preferably with workarounds developed.

    At the same time, you don't want to be so far back that upgrading uncovers novel migration problems, or issues that nobody else cares about anymore.

  • zdc1 13 hours ago

    Yeah, the JavaScript/Node.js ecosystem is pain. Lots of tooling (ORMs, queue/workflow frameworks, templating) is new-ish or quickly changing. I've also had minor updates cause breakages; semver is best-effort at best.

    I don't like Java but sometimes I envy their ecosystem.

    • kakacik 5 hours ago

      There is a reason most stable companies use Java: stability. Outside of startups and SV, there are few reasons to avoid such a robust system.

      Plus you can find an endless stream of experienced devs for it, who are also more stable job-wise than those who come & go every 6-12 months. Stability. Top management barely cares about anything else from IT.

JoshTriplett a day ago

> I'm now on a team that uses Rust, where knowing that the program compiles and passes all tests gives us a lot of confidence in the update.

That's been my experience as well. In addition, the ecosystem largely holds to semver, which means a non-major upgrade tends to be painless, and conversely, if there's a major upgrade, you know not to put it off for too long because it'll involve some degree of migration.

  • darccio 16 hours ago

    It's a similar experience in Go, especially because imports are done by URL, and major versions higher than v1.x are forced to add a `/vN` suffix to the import path.

    That said, any large ecosystem will have some popular packages that don't hold to semver properly. The biggest downside is when your `>=v1` package depends, usually indirectly, on a `v0` dependency, which is allowed to make breaking changes.
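
    A minimal sketch of what that looks like in an import block (the module path github.com/example/lib is made up): from v2 onward, the major version becomes part of the path itself.

      import (
          "github.com/example/lib"    // v0 or v1: no version suffix in the path
          "github.com/example/lib/v2" // v2 and later: major version is part of the path
      )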

xp84 21 hours ago

OP wisely said that critical vulnerabilities are where the actual exposure needs to be assessed in order to make an exception to a rule like “install the latest release of things that’s been published for X length of time.”

For instance if you use a package that provides a calendar widget and your app uses only the “western” calendar and there is a critical vulnerability that only manifests in the Islamic calendar, you have zero reason to worry about an exploit.

I see this as a reasonable stance.

  • samus 19 hours ago

    That's indeed reasonable, but the opposite can happen just as well: there is a vulnerability in the western calendar, but I'm on an old major.minor version that receives no security patches anymore. So now I have to upgrade that dependency, potentially triggering an avalanche of incompatibilities with other packages, leading to further upgrades and associated breakages. Oopsie.

cncjchsue7 a day ago

This is why I don't use dependencies that break backwards compatibility.

If you break my code I'm not wasting time fixing what you broke, I'm fixing the root cause of the bug: finding your replacement.

coredog64 a day ago

My current employer publishes "staleness" metrics at the project level. It's imperfect because it weights all the dependencies the same, but it's better than nothing.

  • m000 12 hours ago

    I wonder, are there tools to help you automate this? I.e. to assign a staleness value to each package instead of a simple "outdated" boolean, and also a weight to each package.

    E.g. something like:

      pkg_staleness = (1 * missed_patch + 5 * missed_minor + 10 * missed_major) * (month_diff(installed_release_date, latest_release_date)/2)^2
    
      proj_staleness = sum(pkg.staleness * pkg.weight for pkg in all_packages)
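
    As a rough sketch of how the formula above might be scripted (Go here, with made-up field names; the missed-release counts and dates would have to come from your package manager, e.g. `npm outdated`, or a registry API):

      package main

      import (
          "fmt"
          "math"
      )

      // Hypothetical inputs per dependency: releases missed by type, the gap in
      // months between the installed and latest release, and a per-package weight.
      type dep struct {
          name                                  string
          missedPatch, missedMinor, missedMajor int
          monthsBehind                          float64
          weight                                float64
      }

      // staleness follows the sketch formula above:
      // (1*patch + 5*minor + 10*major) * (months/2)^2
      func staleness(d dep) float64 {
          missed := float64(1*d.missedPatch + 5*d.missedMinor + 10*d.missedMajor)
          return missed * math.Pow(d.monthsBehind/2, 2)
      }

      func main() {
          deps := []dep{
              {name: "small-util", missedPatch: 3, monthsBehind: 4, weight: 0.1},
              {name: "web-framework", missedMinor: 1, missedMajor: 2, monthsBehind: 18, weight: 1.0},
          }
          total := 0.0
          for _, d := range deps {
              total += staleness(d) * d.weight
          }
          fmt.Printf("project staleness: %.1f\n", total)
      }
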
ozim a day ago

Update at least quarterly so you don’t have them go stale and become super hard to update.