Comment by grayhatter 2 days ago

> You don’t think people are trying very hard to understand LLMs? We recognize the value of interpretability. It is just not an easy task.

I think you're arguing against a position that's tangential to both mine and that of the person this directly replies to. It can be hard to use and understand something, but if you have a magic box and you can't tell whether it's working, it doesn't belong anywhere near the systems that other humans use. The people who use the code you're about to commit to whatever repo you're generating code for all deserve better than to be part of your unethical science experiment.

> It’s not the first time in human history that our ability to create things has exceeded our capacity to understand.

I don't agree that this is a correct interpretation of the current state of generative, transformer-based AI. But even if you wanted to try to convince me, my point would still be that this belongs in a research lab, not anywhere near prod. And that wouldn't be a controversial idea in the industry.

adastra22 2 days ago

We used the steam engine for 100 years before we had a firm understanding of why it worked. We still don’t understand how ice skating works. We don’t have a physical understanding of semi-fluid flow in grain silos, but we’ve been using them since prehistory.

I could go on and on. The world around you is full of not-well-understood technology, as well as non-deterministic processes. We know how to engineer around that.

  • grayhatter 2 days ago

    > We used the steam engine for 100 years before we had a firm understanding of why it worked. We still don’t understand how ice skating works. We don’t have a physical understanding of semi-fluid flow in grain silos, but we’ve been using them since prehistory.

    I don't think you and I are using the same definition for "firm understanding" or "how it works".

    > I could go on and on. The world around you is full of not-well-understood technology, as well as non-deterministic processes. We know how to engineer around that.

    Again, you're sidestepping my argument so you can restate things that are technically correct, but not really a point in and of themselves. I see people who want to call themselves software engineers throw code they clearly don't understand against the wall because the AI said so. There's a significant delta between knowing you can heat water to turn it into a gas whose increased pressure you can use to mechanically turn a wheel, versus: put wet liquid in jar, light fire, get magic spinny thing. If jar doesn't call you a funny name first, that's bad!

    • adastra22 2 days ago

      > I don't think you and I are using the same definition for "firm understanding" or "how it works".

      I'm standing on firm ground here. Debate me on the details if you like.

      You are constructing a strawman.

nineteen999 2 days ago

> It doesn't belong anywhere near the systems that other humans use

Really, for those of us who actually work in critical systems (emergency services in my case), of course we're not going to start patching the core applications with vibe code.

But yeah, that frankenstein reporting script that half a dozen amateur hackers made a mess of over 20 years instead of refactoring and redesigning? That's prime fodder for this stuff. NOBODY wants to clean that stuff up by hand.

  • grayhatter 2 days ago

    > Really, for those of us who actually work in critical systems (emergency services in my case), of course we're not going to start patching the core applications with vibe code.

    I used to believe that no one would seriously consider this either... but I don't believe that's a safe assumption anymore. You might be the exception, but there are many more people who don't consider the implications of handing over that intellectual control.

    > But yeah, that frankenstein reporting script that half a dozen amateur hackers made a mess of over 20 years instead of refactoring and redesigning? That's prime fodder for this stuff. NOBODY wants to clean that stuff up by hand.

    It's horrible, no one currently understands it, so let the AI do it, so that still no one will understand it, but at least this one bug will be harder to trigger.

    I don't agree that harder-to-trigger bugs are better than easy-to-trigger bugs. And from my view, "it's currently broken now, and hard to fix!" isn't exactly an argument I find compelling for leaving it that way.

    • nineteen999 a day ago

      > I used to believe that no one would seriously consider this either... but I don't believe that's a safe assumption anymore. You might be the exception, but there are many more people who don't consider the implications of handing over that intellectual control.

      Then they'll pay for it, with their jobs etc., when something goes wrong with their systems. You need a different mindset in this particular segment of the industry: 99.999% uptime is everything (we've actually had 100% uptime for the past 6 years on our platform; chasing that last 0.001% is hard, and something will _eventually_ hit us).
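
      For scale (my back-of-the-envelope numbers, sketched in Python, not anything official): every nine you add cuts the yearly downtime budget by 10x:

        # Yearly downtime budget for a given uptime target.
        minutes_per_year = 365.25 * 24 * 60
        for uptime in (0.999, 0.9999, 0.99999):
            budget = (1 - uptime) * minutes_per_year
            print(f"{uptime:.3%} uptime -> {budget:.1f} min/yr of downtime")

      Five nines leaves you roughly 5 minutes a year to notice, diagnose, and fix whatever broke.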

      > It's horrible, no one currently understands it, so let the AI do it, so that still no one will understand it, but at least this one bug will be harder to trigger.

      I think you're commenting without context. It's a particularly nasty Perl script that's been duct-taped to shell scripts and bolted hard onto a proprietary third-party application, and it needs to go. Having Claude/GPT rewrite it in a modern language, and spending some time on it to design proper interfaces and APIs around the points where the script needs to interface with other things, would be the greatest thing that could happen to code nobody wants to touch.

      You still have the old code to test against, so have the agent run exhaustive testing on its implementation to prove that it's as robust as the original, or more so. It's not rocket surgery.
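
      A minimal sketch of what that differential testing could look like, assuming both versions read a captured input on stdin and write their report to stdout (the paths and names here are illustrative, not the real system):

        import subprocess
        from pathlib import Path

        # Hypothetical commands for the legacy script and its rewrite.
        OLD = ["perl", "legacy_report.pl"]
        NEW = ["python3", "report.py"]

        def run(cmd, payload):
            # Run one implementation on a captured input; return (exit code, stdout).
            proc = subprocess.run(cmd, input=payload, capture_output=True, timeout=60)
            return proc.returncode, proc.stdout

        mismatches = 0
        for case in sorted(Path("captured_inputs").glob("*.txt")):
            payload = case.read_bytes()
            if run(OLD, payload) != run(NEW, payload):
                mismatches += 1
                print(f"MISMATCH: {case.name}")
        print(f"{mismatches} mismatching case(s)")

      Every mismatch is either a bug in the rewrite or a legacy quirk you now get to document; either way, you find out before it ships.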