Comment by sreekanth850 5 days ago

I was wondering on what basis @Sama keeps saying they are near AGI, when in reality LLMs just calculate sequences and probabilities. I doubt this bubble is going to burst anytime soon, though.
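
A minimal sketch of what "calculate sequences and probabilities" means in practice, i.e. autoregressive next-token sampling. Everything below (the vocabulary, the bigram table) is a toy assumption; a real LLM computes its next-token distribution from billions of learned parameters, not a lookup table.

```python
import random

# Toy next-token distributions: P(next | current). A real model conditions
# on the whole preceding sequence, not just the last token.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "market": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "sat": 0.3},
    "market": {"crashed": 0.5, "rallied": 0.5},
}

def sample_next(token: str) -> str:
    """Sample the next token from the conditional distribution."""
    dist = NEXT_TOKEN_PROBS.get(token)
    if not dist:
        return "<eos>"  # no known continuation: end the sequence
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

def generate(start: str, max_len: int = 5) -> str:
    seq = [start]
    while len(seq) < max_len:
        nxt = sample_next(seq[-1])
        if nxt == "<eos>":
            break
        seq.append(nxt)
    return " ".join(seq)

print(generate("the"))  # e.g. "the cat sat"
```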

jjtheblunt 5 days ago

I'm unaware of any proof (in the mathematician sense, for example) that _we_ aren't just kickass machines calculating sequences at varying probabilities, though.

Perhaps that is how the argument persists?

  • lispybanana 3 days ago

    Humans do this, but this is not all they do. How do we explain humans who invent new concepts, new words, new numerical systems, new financial structures, new legal theories? These are not exactly predictions (since they don't exist in any training set), but they may be composed from such sets.

    • in-silico 3 days ago

      > How do we explain humans who invent new concepts

      Simple: they are hallucinations that turn out to be correct or useful.

      Ask ChatGPT to create a million new concepts that weren't in its training data and some of them are bound to be similarly correct or useful. The only difference is that humans have hands and eyes to test their new ideas.
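
      A toy sketch of that generate-and-test idea (the random "concept" generator and the usefulness check below are invented stand-ins, not a claim about how any model actually works):

      ```python
      import random

      # Hallucinate many candidate "ideas", keep the rare ones that survive
      # a reality test; the test stands in for the "hands and eyes" above.
      random.seed(0)

      def hallucinate() -> float:
          """Stand-in for generating a random new idea; here just a number."""
          return random.gauss(0.0, 1.0)

      def is_useful(idea: float) -> bool:
          """Stand-in for testing an idea against reality."""
          return idea > 2.5  # only rare outliers pass

      ideas = (hallucinate() for _ in range(1_000_000))
      useful = [x for x in ideas if is_useful(x)]
      print(f"{len(useful)} of 1,000,000 hallucinated ideas survived the test")
      ```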

  • sreekanth850 4 days ago

    Efficiency matters. We do it with a fraction of the processing power.

    • jjtheblunt 4 days ago

      True in the caloric/watts sense, but we might well have far higher computational power architecturally?

      • sreekanth850 3 days ago

        100%. We do a lot more in real life. There are many circumstances where you work without prior training. That is how new inventions happen all the time.