Comment by jstummbillig 2 days ago

73 replies

Because nobody actually wants a "web app". People want food, love, sex or: solutions.

You or your coworker are not a web app. You can do some of the things that web apps can, and many things that a web app can't, but neither is because of the modality.

Coded determinism is hard for many problems, and I find it entirely plausible that it could turn out to be the wrong approach in software that is designed to solve some class of complex problems more generally. Average humans are pretty great at solving a certain class of complex problems that we have tried to tackle unsuccessfully with many millions of lines of deterministic code, or simply have not had a handle on at all (like building a great software CEO).

latexr a day ago

> Because nobody actually wants a "web app". People want food, love, sex or: solutions.

Talk about a nonsensical non-sequitur, but I’ll bite. People want those to be deterministic too, to a large extent.

When people cook a meal with the same ingredients and the same times and processes (like parameters to a function), they expect it to taste about the same; they never expect to cook a pizza and take a salad out of the oven.
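The function analogy can be made concrete. A minimal sketch in Python (the `cook` functions and the 5% failure rate are hypothetical, purely for illustration):

```python
import random

def cook(ingredients, minutes):
    """Deterministic: the same inputs always produce the same dish."""
    return f"{'-'.join(sorted(ingredients))} baked for {minutes} min"

def cook_llm_style(ingredients, minutes):
    """Non-deterministic: same inputs, occasionally a different dish."""
    if random.random() < 0.05:  # rare, unpredictable failure mode
        return "salad"
    return cook(ingredients, minutes)

# The deterministic version is repeatable by construction:
assert cook(["dough", "tomato"], 12) == cook(["dough", "tomato"], 12)
```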

When they have sex, people expect to ejaculate and feel good, not have their intercourse morph into a drag race with a clown halfway through.

And when they want a “solution”, they want it to be reliable and trustworthy, not have it shit the bed unpredictably.

  • mavamaarten 21 hours ago

    Exactly this. The perfect example for me is Google Assistant. It's such a terrible service because it's so non-deterministic. One day it happily answers your basic question with a smile, and when you need it most it doesn't even try and only comes up with "Sorry, I don't understand".

    When products have limitations, those are usually acceptable to me if I know what they are or if I can find out what the breaking point is.

    If the breaking point was me speaking a bit unclearly, I'd speak more clearly. If the breaking point was complex questions, I'd ask simpler ones. If the breaking point is truly random, I simply stop using the service because it's unpredictable and frustrating.

  • tomcam 12 hours ago

    > When they have sex, people expect to ejaculate and feel good, not have their intercourse morph into a drag race with a clown halfway through.

    speak for yourself

  • pempem 21 hours ago

    Ways to start my morning... reading "When they have sex, people expect to ejaculate and feel good, not have their intercourse morph into a drag race with a clown halfway through."

    Stellar description.

  • davnicwil 20 hours ago

    This thing of 'look, nobody really cares about the details, they just care about the solution' is a meme that I think will be here forever in software. It was here before LLMs; they're now just the current socially accepted legitimacy vehicle for the meme.

    In the end, useful stuff is built by people caring about the details. This will always be true. I think in LLMs, and AI broadly, people see an escape valve from that, where the thinking about the details can be taken off their hands, and that's appealing. But it won't work, in exactly the same way that having a human take the details off your hands doesn't usually work that well unless you yourself understand the details to a large extent (not necessarily down to the atoms, but at the point of abstraction where it matters, which in software is mostly: how do the logic flows of the thing actually work, deterministically, and why).

    I think a lot of people just don't intuit this. An illustrative analogy might be something else creative, like music. Imagine the conversation where you're writing a song and discussing some fine point of detail like the lyrics, should I have this or that line in there, and ask someone's opinion, and their answer is 'well listen, I don't really know about lyrics and all of that, but I know all that really matters in the end is the vibe of the song'. That contributes about the same level of usefulness as talking about how software users are ultimately looking for 'solutions' without talking about the details of said software.

    • mojoe 11 hours ago

      Exactly, in the long run it's the people who care the most who win, it's tautological

113 2 days ago

> Because nobody actually wants a "web app". People want food, love, sex or: solutions.

Okay but when I start my car I want to drive it, not fuck it.

  • jstummbillig 2 days ago

    Most of us actually drive a car to get somewhere. The car, and the driving, are just a modality. Which is the point.

    • kennywinker 13 hours ago

      If this were a good answer to mobility, people would prefer the bus over their car. It's non-deterministic: when will it come? How quickly will I get there? Will I get to sit? And it's operated by an intelligent agent (the driver).

      Every reason people prefer a car or bike over the bus is a reason non-deterministic agents are a bad interface.

      And that analogy works as a glimpse into the future - we're looking at a fast-approaching world where LLMs are the interface to everything for most of us, except the wealthy, who have access to more deterministic services or actual human agents. How long before the rich-person car rental service is the only one with staff at the desk, and the cheaper options are all LLM-based agents? Poor people ride the bus; rich people get to drive.

      • aryehof 7 hours ago

        Bus vs. car hit home for me as a great example of non-deterministic vs. deterministic.

        It has always seemed to me that workflows or processes need to be deterministic and not decided by an LLM.

    • 63stack a day ago

      Most of us actually want to get somewhere to do an activity. The getting there is just a modality.

      • jermaustin1 a day ago

        Most of us actually want to get some where to do an activity to enjoy ourselves. The getting there, and activity, are just modalities.

    • GTP a day ago

      But I want that somewhere to be deterministic, i.e. I want to arrive at the place I chose. With this kind of non-determinism, I instead have a good chance of getting to the place I chose, but every now and then I will end up somewhere different.

    • 113 2 days ago

      Yeah but in this case your car is non-deterministic so

      • mikodin a day ago

        Well the need is to arrive where you are going.

        If we were in an imagined world and you were headed to work, you'd either walk out your door and there's a self-driving car, or a train waiting for you, or a helicopter, or a literal wormhole.

        Let's say all take the same amount of time, are equally safe, same cost, have the same amenities inside, and "feel the same" - would you care if it were different every day?

        I don't think I would.

        Maybe the wormhole causes slight nausea ;)

      • chii a day ago

        > your car is non-deterministic

        It's not, as far as your experience goes: you press the pedal, it accelerates; you turn the steering wheel, it goes the way you turn it. What the car does is deterministic.

        More importantly, it does this every time, and the amount of turning (or accelerating) is the same today as it was yesterday.

        If an LLM interpreted those inputs, could you say with confidence that you would accelerate in the way you predicted? If so, then I would be fine with LLM-interpreted inputs for driving. Otherwise, how do you know, for sure, that pressing the brakes will stop the car before you hit the person in front of you?

        Of course, you could argue that the input is no longer you operating the brakes etc. - you just name a destination and you get there, and that is supposed to be deterministic as long as you describe your destination correctly. But is that where LLMs are today? Or is that the imagined future of LLMs?

      • nurettin a day ago

        I mean, as long as it works and it is still technically "my car", I would welcome the change.

  • lambdaone a day ago

    Sadly, this is not true of an (admittedly very small) number of individuals.

  • ozim 2 days ago

    I feel like this is the point where we start to make jokes about Honda owners.

    • bfkwlfkjf a day ago

      Go on, what about honda owners? I don't know the meme.

      • hathawsh a day ago

        The "Wham Baam" YouTube channels have a running joke about Hondas bumping into other cars with concerning frequency.

  • hinkley 2 days ago

    Christine didn’t end well for anyone.

  • OJFord 2 days ago

    ...so that you can get to the supermarket for food, to meet someone you love, meet someone you may or may not love, or to solve the problem of how to get to work; etc.

    Your ancestors didn't want horses and carts, bicycles, shoes - they wanted the solutions of the day to the same scenarios above.

    • sublinear 2 days ago

      As much as I love your point, this is where I must ask whether you even want a corporeal form to contain the level of ego you're describing. Would you prefer to be an eternal ghost?

      To dismiss the entire universe and its hostilities towards our existence and the workarounds we invent in response as mere means to an end rather than our essence is truly wild.

      • anonzzzies a day ago

        Most people need to go somewhere (in a hurry) to make money or get food etc., which most people wouldn't do if they didn't have to, so yeah, it is mostly a means to an end.

  • stirfish 2 days ago

    But do you want to drive, or do you want to be wherever you need to be to fuck?

    • codebje a day ago

      For me personally, the latter, but there's definitely people out there that just love driving.

      Either way, these silly reductionist games aren't addressing the point: if I just want to get from A to B then I definitely want the absolute minimum of unpredictability in how I do it.

      • theendisney a day ago

        That would ruin the brain's plasticity.

        I wonder now: if everything were always different, and then suddenly every day were the same, how many times as terrifying would that be compared to the opposite?

      • mewpmewp2 a day ago

        Only because you think the driving is what you want. The point is that what you want is determined by your brain chemicals. Many steps could be skipped if we could just give you the chemicals your brain craved.

  • lazide 2 days ago

    Even if it purred real nice when it started up? (I’m sorry)

    • ozim a day ago

      Looks like we have a Civic owner xD

cheema33 2 days ago

> Average humans are pretty great at solving a certain class of complex problems that we tried to tackle unsuccessfully with many millions lines of deterministic code..

Are you suggesting that an average user would want to precisely describe in detail what they want, every single time, instead of clicking on a link that gives them what they want?

  • ethmarks 2 days ago

    No, but the average user is capable of describing what they want to something trained in interpreting what users want. The average person is incapable of articulating the exact steps necessary to change a car's oil, but they have no issue with saying "change my car's oil" to a mechanic. The implicit assumption with LLM-based backends is that the LLM would be capable of correctly interpreting vague user requests. Otherwise it wouldn't be very useful.

    • sarchertech 2 days ago

      The average mechanic won't do something completely different to your car just because you added some extra filler words to your request, though.

      The average user may not care exactly what the mechanic does to fix their car, but they do expect things to be repeatable. If car-repair LLMs function anything like coding LLMs, one request could result in an oil change, while a similar request could end up with an engine replacement.

      • ethmarks 2 days ago

        I think we're making similar points, but I kind of phrased it weirdly. I agree that current LLMs are sensitive to phrasing and are highly unpredictable and therefore aren't useful in AI-based backends. The point I'm making is that these issues are potentially solvable with better AI and don't philosophically invalidate the idea of a non-programmatic backend.

        One could imagine a hypothetical AI model that can do a pretty good job of understanding vague requests, properly refusing irrelevant requests (if you ask a mechanic to bake you a cake he'll likely tell you to go away), and behaving more or less consistently. It is acceptable for an AI-based backend to have a non-zero failure rate. If a mechanic was distracted or misheard you or was just feeling really spiteful, it's not inconceivable that he would replace your engine instead of changing your oil. The critical point is that this happens very, very rarely and 99.99% of the time he will change your oil correctly. Current LLMs have far too high of a failure rate to be useful, but having a failure rate at all is not a non-starter for being useful.
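        To put numbers on the failure-rate point: per-request reliability compounds across requests, so whether a rate is acceptable depends on volume. A quick back-of-the-envelope sketch (the rates are illustrative, not claims about any real model):

```python
def chance_all_succeed(success_rate: float, n_requests: int) -> float:
    """Probability that every one of n independent requests succeeds."""
    return success_rate ** n_requests

# A 99.99% mechanic-grade success rate holds up over 100 jobs...
print(chance_all_succeed(0.9999, 100))  # ≈ 0.99
# ...while a 95% rate collapses over the same volume.
print(chance_all_succeed(0.95, 100))    # ≈ 0.006
```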

        • sarchertech a day ago

          All of that is theoretically possible. I’m doubtful that LLMs will be the thing that gets us to that though.

          Even if it is possible, I’m not sure if we will ever have the compute power to run all or even a significant portion of the world’s computations through LLMs.

      • array_key_first a day ago

        Mechanics, and humans, are non-deterministic. Every mechanic works differently, because they have different bodies and minds.

        LLMs are, of course, bad. Or not good enough, at least. But suppose they are. Suppose they're perfect.

        Would I rather use an app or just directly interface with an LLM? The LLM might be quicker and easier. I know, for example, ordering takeout is much faster if I just call and speak to a person.

  • anonzzzies a day ago

    There would be bookmarks to prompts, and the results of the moment would be cached: both of these are already happening and will get better. We will probably freeze and unfreeze parts of neural nets to get to that point, and even mix them up to quickly combine the different concepts you described before and continue from there.
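    A "bookmarked prompt" with cached results is essentially memoization: key the model's answer by the exact prompt, and repeats become stable. A hedged sketch (the `run_model` stub is hypothetical; it stands in for a real, non-deterministic model call):

```python
from functools import lru_cache
from itertools import count

_calls = count()

def run_model(prompt: str) -> str:
    # Stand-in for a model call; the counter makes every fresh call
    # observably different, like a non-deterministic model would be.
    return f"answer #{next(_calls)} for {prompt!r}"

@lru_cache(maxsize=1024)
def bookmarked(prompt: str) -> str:
    """First call hits the model; repeats of the exact same prompt
    are served from cache, so the observed result is stable."""
    return run_model(prompt)

# Cached repeats are identical, while fresh calls keep drifting:
assert bookmarked("my dashboard") == bookmarked("my dashboard")
assert bookmarked("my dashboard") != run_model("my dashboard")
```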

  • samdoesnothing 2 days ago

    I think they're suggesting that some problems are trivially solvable by humans but extremely hard to do with code - in fact, the outcome can seem non-deterministic, despite being deterministic, because there are so many confounding variables at play. This is where an LLM or other form of AI could be a valid solution.

Aerroon 2 days ago

When I reach for a hammer I want it to behave like a hammer every time. I don't ever want the head to fly off the handle or for it to do other things. Sometimes I might wish the hammer were slightly different, but most of the time I would want it to be exactly like the hammer I have.

Websites are tools. Tools being non-deterministic can be a really big problem.

majormajor 2 days ago

Companies want determinism. And for most things, people want predictability. We've spent a century turning people into robots for customer support, assembly lines, etc. Very few parts of everyday life still boil down to "make a deal with the person you're talking to."

So even if it would be better to have more flexibility, most businesses won't want it.

  • pigpop 2 days ago

    Why sell to a company when you can replace it?

    I can speculate about what LLM-first software and businesses might look like and I find some of those speculations more attractive than what's currently on offer from existing companies.

    The first one, which is already happening to some degree on large platforms like X, is LLM-powered social media. Instead of having a human-designed algorithm handle suggestions, you hand it over to an LLM to decide, but it could go further. It could handle customizing the look of the client app for each user, and it could provide goal-based suggestions or search, so you could tell it what type of posts or accounts you're looking for, or why you're looking for them, e.g. "I want to learn ML and find a job in that field", and it gives you a list of users that are in that field, post frequent, high-quality educational material, have demonstrated a willingness to mentor, and are currently not too busy to do so, as well as a list of posts that serve as a good starting point, etc.

    The difference in functionality would be similar to the change from static websites to dynamic web apps. It adds even more interactivity to the page and broadens the scope of uses you can find for it.

    • majormajor 2 days ago

      Sell to? I'm talking about buying from. How are you replacing your grocery store, power company, favorite restaurants, etc, with an LLM? Things like vertical integration and economies of scale are not going anywhere.

pepoluan 12 hours ago

The issue with not having something deterministic is that when there's a regression, you cannot surgically fix it, because you can't know how "Plan A" got morphed into "Modules B, C, D, E, F, G," and so on.

And don't even try to claim there won't ever be any regression: current LLM-based A.I. will 'happily' lie to you that it passed all tests -- because, based on its past interactions, it has.

Ghos3t a day ago

So basically you're saying the future of the web is that everyone gets their own Jarvis, and like Tony you just tell Jarvis what you want and it does it for you; there's no need for preexisting software, or even to write new software - it just does what's needed to fulfill the given request and gives you the results you want. This sounds nice, but wouldn't it get repetitive and computationally expensive? Like, imagine that instead of Google Maps, everyone just asks the AI directly for the things people typically use Google Maps for, such as directions and location reviews. A centralized application like Maps can be more efficient because it's optimized for commonly needed work, and it can be further improved with all the data gathered from users who interact with it. If, on the other hand, the AI were allowed to do its own thing, it could keep reinventing the wheel, solving the same tasks again and again without the benefit of building on prior work, while missing the improvements that come from the network effect of a large number of users interacting with the same app.

  • acomjean a day ago

    You might end up with AI trying to get information from AI, which saves us the frustration...

    Who knows where we'd end up?

    On the other hand, the logs might be a great read.

rafaelmn 20 hours ago

We're used to dealing with human failure modes; AI fails in such unfamiliar ways that it's hard to deal with.

anonzzzies a day ago

But it is still very early days. And you could have the AI generate code for deterministic things and fast execution, while the AI keeps monitoring that code and jumps in whenever the user needs something the code doesn't cover. It's not necessarily one or the other.
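That hybrid could look something like this dispatch sketch (everything here - `HANDLERS`, `llm_fallback` - is hypothetical, just to show the shape): deterministic code serves the cases it covers, and the model is only a fallback.

```python
# Deterministic handlers for the requests the generated code covers.
HANDLERS = {
    "total": lambda items: sum(items),
    "count": lambda items: len(items),
}

def llm_fallback(request: str, items: list) -> str:
    # Stand-in for an LLM call; a real system would send the request
    # to a model here, and ideally generate a new handler from it.
    return f"LLM handling unrecognized request: {request!r}"

def route(request: str, items: list):
    """Use deterministic code when it fits; fall back to the model."""
    handler = HANDLERS.get(request)
    if handler is not None:
        return handler(items)            # fast, repeatable path
    return llm_fallback(request, items)  # flexible, non-deterministic path

print(route("total", [1, 2, 3]))       # 6
print(route("vibe check", [1, 2, 3]))  # falls through to the model stub
```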

hshdhdhehd a day ago

Determinism is the edge these systems have. Granted, in theory enough AI power could be just as good. Like, 1,000,000 humans could give you the answer to a Postgres query. But the Postgres is gonna be more efficient.