Comment by phillipcarter 19 hours ago

> The main barrier is cost

I very much disagree. For the larger, more sophisticated stuff that runs our world, it is not cost that prohibits wide and deep automation. It's deeply sophisticated and constrained requirements; highly complex existing behaviors that may or may not be changeable; systems of people who don't always hold the information needed; internal docs, usually wildly out of date, that describe the system or even how to develop for it; and so on.

Agents are nowhere near capable of replacing this, and even if they were, they'd change it differently, in ways that are often undesirable or illegal. I get that there's this fascination with "imagine if it were good enough to...", but it's not, and the systems AI must exist in are both vast and highly difficult to navigate.

ademup 18 hours ago

The status quo system you describe isn't objectively optimal. It sounds archaic to me. "We" would never intentionally design it this way if we had a fresh start. I believe it is this way due to a myriad of reasons, mostly stemming from the frailty and avarice of people.

I'd argue the opposite of your stance: we've never had a chance at a fresh start without destruction, but agents (or their near-future offspring) can hold our entire systems "in memory", and therefore might be our only chance at a redo without literally killing ourselves to get there.

  • majormajor 16 hours ago

    It's not claimed to be an "objectively optimal" solution, it's claimed to represent how the world works.

    I don't know where you're going with the discussion of destruction and killing, but even fairly simple consumer products have any number of edge cases that initial specifications rarely capture. I'm not sure what "objectively optimal" is supposed to mean here, either.

    If a spec described every edge case it would basically be executable already.

    The pain of developing software at scale is that you're creating the blueprint on the fly from high-level vague directions.

    Something trivial that nevertheless often results in meetings and debate in the development world:

    Spec requirement 1: "Give new users a 10% discount, but only if they haven't purchased in the last year."

    Spec requirement 2, a year later: "Now offer a second product the user can purchase."

    Does the 10% discount apply to the second product too? Do you get the 10% discount on the second product if you purchased the first product in the last year, or does a purchase on any product consume the discount eligibility? What if the prices are very different and customers would be pissed off if a $1 discount on the cheaper product (which didn't meet their needs in the end) prevented them from getting a $10 discount 9 months later (which they think will)? What if the second product is a superset of the first product? What if there are different relevant laws in different jurisdictions where you're selling your product?
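
    To make the ambiguity concrete, here's a minimal sketch of just the first two readings (the data model and names are hypothetical, not from any real system). Both implement "the spec" faithfully, yet they disagree on the same customer:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical model: each customer's purchase dates, keyed by product.
@dataclass
class Customer:
    purchases: dict[str, list[date]] = field(default_factory=dict)

def bought_within_year(c: Customer, product: str, today: date) -> bool:
    cutoff = today - timedelta(days=365)
    return any(d >= cutoff for d in c.purchases.get(product, []))

# Reading A: eligibility is per product -- a recent purchase of
# product_a does not block the discount on product_b.
def discount_per_product(c: Customer, product: str, today: date) -> float:
    return 0.10 if not bought_within_year(c, product, today) else 0.0

# Reading B: eligibility is global -- a recent purchase of any
# product consumes the discount for all products.
def discount_global(c: Customer, product: str, today: date) -> float:
    recent_any = any(bought_within_year(c, p, today) for p in c.purchases)
    return 0.10 if not recent_any else 0.0

today = date(2025, 6, 1)
c = Customer(purchases={"product_a": [date(2025, 1, 15)]})

# Same spec, same customer, different answers for product_b:
print(discount_per_product(c, "product_b", today))  # 0.1
print(discount_global(c, "product_b", today))       # 0.0
```

    Which one ships is a product decision nobody has made yet, and no amount of re-reading the two spec sentences will settle it.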

    Agents aren't going to figure out the intent of the company's principals automatically here, because the decision maker doesn't even realize it's a question until the implementers get into the weeds.

    A sufficiently advanced agent would present all the options to the person running the task, and then the humans could decide. But then you've slowed things back down to the pace of the human decision makers.

    The complexities only increase as the product grows. And once you get into distributed or concurrent systems, even most of our code today is ambiguous enough about intent that bugs are common.

  • phillipcarter 16 hours ago

    Agents quite literally cannot do this today.

    Additionally, I disagree with your point:

    > The status quo system you describe isn't objectively optimal.

    On the basis that I would challenge you or anyone to judge what is objectively optimal. Google Search is a wildly complex system, an iceberg of rules on top of rules, specifically because it is digital infrastructure surrounding an organic system filled with a diverse group of people with ever-changing preferences and behaviors. What, exactly, would be optimal here?

adidoit 17 hours ago

"deeply sophisticated and constrained requirements"

Yes, this resonates completely. I think many are forgetting that the whole purpose of formal languages and code is that natural language is so ambiguous it can't capture complex behavior.

LLMs are great at interpolating between implicit and unsaid requirements, but whether their interpolation matches your mental model is a dice throw.

orderone_ai 7 hours ago

Overall, I agree - it would take a far more sophisticated, deterministic or 'logical' AI, one better capable of tracking constraints, knowing what to check and double-check, etc. Right now, AI is far too scattered to pull that off (or, for the stuff that isn't scattered, it's largely just incapable), but a lot of smart people are thinking about it.

Imagine if...nevermind.

dboreham 6 hours ago

> they'd change it differently, in ways that are often undesirable or illegal.

So...like SAP then?