lompad 2 days ago

You implicitly assume that LLMs are actually important enough to make a difference on the geopolitical level.

So far, I haven't seen any indication that this is the case. And I'd say hyped-up speculation by people financially incentivized to hype AI should be taken with an entire mine full of salt.

ArtTimeInvestor 2 days ago

First, it's not just about LLMs. It's not an LLM that replaced human drivers in Waymo cars.

Second, how could AI not be the deciding geopolitical factor of the future? You expect progress to stop and AI not to achieve and surpass human intelligence?

  • lompad 2 days ago

    > First, it's not just about LLMs. It's not an LLM that replaced human drivers in Waymo cars.

    As far as I know, Waymo is still not even remotely able to operate in any kind of difficult environment, even though insane amounts of money have been poured into it. You are vastly overstating its capabilities.

    Is it cool tech? Sure. Is it going to safely replace all drivers? I very much doubt it.

    Secondly, this only works if progress in AI does not stagnate. And, again, you have no grounds to actually make that claim. It's all built on the fanciful notion that we're close to AGI. I disagree heavily and think it's much further away than people profiting financially from the hype tend to claim.

    • technocrat8080 2 days ago

      Vastly overstating its capabilities? SF is crawling with them 24/7 and I've yet to meet someone who's had a bad experience in one of them. They operate more than well enough to replace rideshare drivers, and they have been.

      • dash2 2 days ago

        But SF is a single US city built on a grid. Try London or Manila.

      • Y-bar 2 days ago

        SF has pretty much the best weather there is to drive in. Try putting them on Minnesota winter roads, or muddy roads in Kansas for example.

  • Eikon 2 days ago

    > You expect progress to stop and AI not to achieve and surpass human intelligence?

    A word generator is not intelligence. There’s no “thinking” involved here.

    To surpass human intelligence, you’d first need to actually develop intelligence, and LLMs will not be it.

    • willvarfar 2 days ago

      I get that LLMs are just doing probabilistic prediction, etc. It's all Hutter Prize stuff.

      But how are animals with nerve-centres or brains different? What do we think we humans do differently, such that we are not just very big probabilistic prediction systems?

      A completely different tack: if we develop the technology to engineer animal-style nerves and form them into big lumps called 'brains', in what way is that not both artificial and intelligent? And if we can do that, what is to stop that manufactured brain from being twice or ten times larger than a human's?

      • grumbel 2 days ago

        I don't think the probabilistic prediction is a problem. The problem with current LLMs is that they are limited to doing "System 1" thinking, only giving you a fast, instinctive response to a question. While that works great for a lot of small problems, it completely falls apart on any larger task that requires multiple steps or backtracking. "System 2" thinking is completely missing, as is the ability to just self-iterate on their own output.

        Reasoning models are trying to address that now, and while monologuing in token space still feels more like a hack than a real solution, it does improve their performance a good bit nonetheless.

        In practical terms, all this means is that current LLMs still need a hell of a lot of hand-holding and fail at anything more complex, even if their "System 1" thinking is good enough for the task (e.g. they can write Tetris in 30 seconds, no problem, but they can't write Super Mario Bros at all, since that has numerous levels that would blow the context window size).
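
        To make "self-iterate" concrete, here is a rough sketch of the missing loop in Python. The generate function is a made-up stand-in for whatever completion API you use, not a real library call:

          # Crude "System 2" loop layered on top of a "System 1" model.
          # generate() is a hypothetical stand-in for any LLM completion API.
          def generate(prompt: str) -> str:
              raise NotImplementedError("plug your model API in here")

          def solve_with_iteration(task: str, max_rounds: int = 3) -> str:
              draft = generate(f"Solve this task:\n{task}")
              for _ in range(max_rounds):
                  critique = generate(
                      f"Task:\n{task}\n\nDraft answer:\n{draft}\n\n"
                      "List concrete flaws, or reply OK if there are none."
                  )
                  if critique.strip() == "OK":
                      break  # the model found nothing left to fix
                  draft = generate(
                      f"Task:\n{task}\n\nDraft:\n{draft}\n\nFlaws:\n{critique}\n\n"
                      "Rewrite the answer, fixing these flaws."
                  )
              return draft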

        • fragmede 2 days ago

          Give it a filesystem, like you can with Claude computer use, and you can have it make and forget memories to adapt to a limited context window size.
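
          Roughly like this (the helper names are invented, not a real API; computer use just gives the model raw file access and lets it decide when to read and write):

            # File-backed "memories" so state can outlive the context window.
            # These helpers are hypothetical; Claude computer use exposes raw
            # filesystem access rather than these exact functions.
            from pathlib import Path

            MEMORY_DIR = Path("memories")
            MEMORY_DIR.mkdir(exist_ok=True)

            def remember(key: str, note: str) -> None:
                (MEMORY_DIR / f"{key}.txt").write_text(note)

            def recall(key: str) -> str | None:
                path = MEMORY_DIR / f"{key}.txt"
                return path.read_text() if path.exists() else None

            def forget(key: str) -> None:
                (MEMORY_DIR / f"{key}.txt").unlink(missing_ok=True)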

      • sampo 2 days ago

        > But how are animals with nerve-centres or brains different?

        In current LLM neural networks, the signal proceeds in one direction: from input, through the layers, to output. To the extent that LLMs have memory and feedback loops, it's that they write the output of the process to text, and then read that text and process it again through their unidirectional calculations.

        Animal brains have circular signals and feedback loops.

        There are Recurrent Neural Network (RNN) architectures, but current LLMs are not these.
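
        The difference shows up even in a toy sketch (NumPy, random weights, nothing LLM-scale about it):

          import numpy as np

          rng = np.random.default_rng(0)
          W_in = rng.normal(size=(4, 4))  # input weights
          W_h = rng.normal(size=(4, 4))   # recurrent (feedback) weights

          def feedforward(x):
              # Signal flows one way: input -> layer -> output. Nothing persists.
              return np.tanh(W_in @ x)

          def rnn_step(x, h):
              # The previous hidden state h is fed back in: a loop, not a line.
              return np.tanh(W_in @ x + W_h @ h)

          h = np.zeros(4)
          for x in rng.normal(size=(3, 4)):  # a short input sequence
              y = feedforward(x)             # independent of every other step
              h = rnn_step(x, h)             # depends on everything before it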

      • dkjaudyeqooe 2 days ago

        Human (and other animal) brains probably are probabilistic, but we don't understand their structure or mechanism in fine enough detail to replicate them, or simulate them.

        People think LLMs are intelligent because intelligence is latent within the text they digest, process and regurgitate. Their performance reflects this trick.

      • Eikon 2 days ago

        > But how are animals with nerve-centres or brains different? What do we think us humans do differently so we are not just very big probabilistic prediction systems?

        If you believe in free will, then we are not.

      • habinero 2 days ago

        > But how are animals with nerve-centres or brains different? What do we think us humans do differently so we are not just very big probabilistic prediction systems?

        I see this statement thrown around a lot and I don't understand why. We don't process information like computers do. We don't learn like they do, either. We have huge portions of our brains dedicated to communication and problem solving. Clearly we're not stochastic parrots.

        > if we develop the technology to engineer animal-style nerves and form them into big lumps called 'brains'

        I think y'all vastly underestimate how complex and difficult a task this is.

        It's not even "draw a circle, draw the rest of the owl", it's "draw a circle, build the rest of the Dyson sphere".

        It's easy to _say_ it, it's easy to picture it, but actually doing it? We're basically at zero.

  • ozornin 2 days ago

    > how could AI not be the deciding geopolitical factor of the future?

    Easily. Natural resources, human talent, land, and supply chains are, and will remain, more important factors than AI.

    > You expect progress to stop

    no

    > and AI not to achieve and surpass human intelligence

    yes

tankenmate 2 days ago

It's an economic benefit. It's not a panacea but it does make some tasks much cheaper.

On the other hand, if the economic benefit isn't shared across the whole of society, it will become a destabilising factor and hence reduce the overall economic benefit it might otherwise have delivered.

fnordsensei 2 days ago

They seem popular enough that they could be leveraged to influence opinion and twist perception, as has been done with social media.

Or, as is already happening, they can be used to influence opinion and twist perception within tools and services that people already use, such as social media.

  • krainboltgreene 2 days ago

    So is Kendrick Lamar's hit song, but no one is suggesting that it has geopolitical implications.

spacebanana7 2 days ago

The same stack is required for other AI stuff like diffusion models as well.