kennywinker 13 hours ago

If this were a good answer to mobility, people would prefer the bus over their car. It’s non-deterministic: when will it come? How quickly will I get there? Will I get to sit? And it’s operated by an intelligent agent (the driver).

Every reason people prefer a car or bike over the bus is a reason non-deterministic agents are a bad interface.

And that analogy works as a glimpse into the future - we’re looking at a fast-approaching world where LLMs are the interface to everything for most of us, except for the wealthy, who have access to more deterministic services or actual human agents. How long before the rich-person car rental service is the only one with staff at the desk, and the cheaper options are all LLM-based agents? Poor people ride the bus; rich people get to drive.

  • aryehof 7 hours ago

    Bus vs. car hit home for me as a great example of non-deterministic vs. deterministic.

    It has always seemed to me that workflows or processes need to be deterministic, not decided by an LLM.

63stack a day ago

Most of us actually want to get somewhere to do an activity. The getting there is just a modality.

  • jermaustin1 a day ago

    Most of us actually want to get somewhere to do an activity to enjoy ourselves. The getting there, and the activity, are just modalities.

    • tags2k a day ago

      Most of us actually want to get somewhere to do an activity to then have known we did it for the rest of our lives as if to extract some intangible pleasure from its memory. Why don't we just hallucinate that we did it?

      • shswkna a day ago

        This leads us to the deepest question of all: what is the point of our existence? Or, as someone suggests lower down, in our current form all our needs could ultimately be satisfied if AI just provided us with the right chemicals (which drug addicts already understand).

        This can be answered though, albeit imperfectly. On a more reductionist level, we are the cosmos experiencing itself. Now there are many ways to approach this. But just providing us with the right chemicals to feel pleasure/satisfaction is a step backwards. All the evolution of a human being, just to end up functionally like an amoeba or a bacterium.

        So we need to retrace our steps in this thought process.

        I could write a long essay on this.

        But to exist in the first place, and to keep existing against all the constraints of the universe, is already pretty fucking amazing.

        Whether we do all the things we do just in order to stay alive and keep existing, or whether the point is to be the cosmos “experiencing itself” - those are pretty much two sides of the same coin.

      • 63stack 3 hours ago

        This was actually my point as well. You can follow this thought process all the way up to "make those specific neuron pathways in my brain fire"; everything else is just the getting-there part.

GTP a day ago

But I want that somewhere to be deterministic, i.e. I want to arrive at the place I choose. With this kind of non-determinism, I have a good chance of getting to the place I choose, but every now and then I will end up somewhere else.

113 2 days ago

Yeah but in this case your car is non-deterministic so

  • mikodin a day ago

    Well the need is to arrive where you are going.

    If we were in an imagined world and you were headed to work:

    You either walk out your door and there is a self-driving car, or you walk out of your door and there is a train waiting for you, or you walk out of your door and there is a helicopter, or you walk out of your door and there is a literal wormhole.

    Let's say they all take the same amount of time, are equally safe, cost the same, have the same amenities inside, and "feel the same" - would you care if it were different every day?

    I don't think I would.

    Maybe the wormhole causes slight nausea ;)

    • didericis a day ago

      > Well the need is to arrive where you are going.

      In order to get to your destination, you need to explain where you want to go. Whatever you call that “imperative language”, to actually get the thing you want you have to explain it. That’s an unavoidable aspect of interacting with anything that responds to commands, computer or not.

      If the AI misunderstands those instructions and takes you to a slightly different place than you want to go, that’s a huge problem. But it’s bound to happen if you’re writing machine instructions in a natural language like English and in an environment where the same instructions aren’t consistently or deterministically interpreted. It’s even more likely if the destination or task is particularly difficult/complex to explain at the desired level of detail.

      There’s a certain irreducible level of complexity involved in directing and translating a user’s intent into machine output simply and reliably that people keep trying to “solve”, but the issue keeps reasserting itself generation after generation. Over half a century ago, COBOL was “plain English”, and people assumed it would make interacting with computers like giving instructions to another employee.

      The primary difficulty is not the language used to articulate intent; the primary difficulty is articulating intent.

      • simianwords a day ago

        This is a weak argument... I use normal taxis and ask the driver to take me to a place in natural language - a process which is certainly non-deterministic.

    • hyperadvanced a day ago

      I think it’s pretty obvious, but most people would prefer a regular schedule, not a random and potentially psychologically jarring transportation event to start the day.

  • chii a day ago

    > your car is non-deterministic

    It's not, as far as your experience goes: you press the pedal, it accelerates. You turn the steering wheel, it goes the way you turn it. What the car does is deterministic.

    More importantly, it does this every time, and the amount of turning (or accelerating) is the same today as it was yesterday.

    If an LLM interpreted those inputs, can you say with confidence that you will accelerate in the way you predicted? If so, then I would be fine with LLM-interpreted input to drive. Otherwise, how do you know for sure that pressing the brakes will stop the car before you hit somebody in front of you?

    Of course, you could argue that the input is no longer you moving the brake pedal etc. - you just name a destination and you get there, and that is supposed to be deterministic, as long as you describe your destination correctly. But is that where LLMs are at today? Or is that the imagined future of LLMs?
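
    A toy sketch of that distinction, with a random factor standing in for the LLM's interpretive variability (the function names and numbers here are made up for illustration, not anything from the thread):

        import random

        # Deterministic control: the same input gives the same output,
        # today and yesterday.
        def brake_deterministic(pedal_pressure):
            return pedal_pressure * 0.8  # fixed deceleration per unit of pressure

        # Stand-in for an LLM interpreting the same input: the mapping can
        # drift between runs (sampling, context, model updates), so today's
        # output is not guaranteed to match yesterday's.
        def brake_llm_interpreted(pedal_pressure):
            interpretation = random.uniform(0.5, 1.0)  # hypothetical variability
            return pedal_pressure * interpretation

        print(brake_deterministic(10.0))    # always 8.0
        print(brake_llm_interpreted(10.0))  # anywhere from 5.0 to 10.0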

    • iliaxj 12 hours ago

      Sometimes it doesn't, though. Sometimes the engine seizes because a piece of tubing broke and you left your coolant down the road two turns ago. Or you steer off a cliff because there was coolant on the road for some reason. Or the meat sack behind the wheel just didn't get enough sleep, your response time is degraded, and you can't quite get the thing to feel how it usually does. Ultimately the failure rate is low enough to trust your life to it, but that's just a matter of degree.

      • pepoluan 12 hours ago

        The situations you described reflect a System that has changed. And if the System has changed, then a change in output is to be expected.

        It's the same as having a function called "factorial" but changing the multiplication operation to addition instead.
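
        A minimal sketch of that analogy (hypothetical Python, not from the original comment): both functions are fully deterministic; the second just describes a different System, so its output differs in a predictable way.

            # Deterministic: the same input always yields the same output.
            def factorial(n):
                result = 1
                for i in range(2, n + 1):
                    result *= i
                return result

            # The "changed System": multiplication swapped for addition.
            # Still deterministic - the output differs only because the
            # system itself is different now.
            def factorial_changed(n):
                result = 1
                for i in range(2, n + 1):
                    result += i
                return result

            print(factorial(5))          # 120, every time
            print(factorial_changed(5))  # 15, every time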

      • chii 11 hours ago

        All of those situations are the "driver's own fault", because they could've done a check before driving to ensure none of that happened. Not true with an LLM (at least, not as of today).

    • crote a day ago

      Tesla's "self-driving" cars have been working very hard to change this. That piece of road the car has been handling flawlessly for months? You're going straight into the barrier today, just because it feels like it.

  • nurettin a day ago

    I mean, as long as it works and it is still technically "my car", I would welcome the change.