throwup238 3 days ago

More like the universal approximation theorem extended to computation rather than network complexity: https://en.wikipedia.org/wiki/Universal_approximation_theore...

  • immibis 3 days ago

    The universal approximation theorem is good to know because it says there's no theoretical upper bound on a function-approximating NN's accuracy. In practice, though, it says nothing about what can realistically be achieved.
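
    A rough way to see the gap (a toy numpy sketch of my own, not anything from the theorem's proof): fix random hidden weights for a one-hidden-layer net, solve the output layer by least squares, and watch the approximation error shrink as the layer widens. The theorem says the error can be driven arbitrarily low; it doesn't tell you how wide is wide enough or how to find the weights.

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(-np.pi, np.pi, 500)[:, None]
      y = np.sin(3 * x)  # arbitrary target function

      for width in (5, 50, 500):
          # random hidden layer, least-squares output layer (random-features fit)
          W, b = rng.normal(size=(1, width)), rng.normal(size=width)
          H = np.tanh(x @ W + b)
          w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
          err = np.max(np.abs(H @ w_out - y))
          print(f"width={width:4d}  max abs error={err:.4f}")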

nopinsight 3 days ago

A key difference is that the way LMMs (Large Multimodal Models) generate output is far from random. These models can imitate or blend existing information, or imitate and probably blend known reasoning methods in the training data. The latter is a key distinguishing feature of the new OpenAI o1 models.

Thus, the signal-to-noise ratio of their output is generally way better than infinite monkeys.
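
To put rough numbers on that (a back-of-the-envelope sketch with made-up probabilities, not measurements): compare a monkey hitting uniformly random keys with a model that merely leans toward the right next character.

  import math

  target = "def add(a, b): return a + b"
  keys = 96  # roughly a printable-ASCII keyboard for the monkey

  log10_monkey = len(target) * math.log10(1 / keys)
  # assume the model puts ~0.5 probability on each correct next character
  log10_model = len(target) * math.log10(0.5)

  print(f"monkey: ~10^{log10_monkey:.0f} chance of typing it")
  print(f"model:  ~10^{log10_model:.0f} chance of sampling it")

Even a modest per-character bias compounds into dozens of orders of magnitude over a short snippet, which is the whole difference between noise and signal here.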

Arguably, humans rely on similar modes of "thinking" most of the time as well.

CamperBob2 3 days ago

Yeah. Monkeys. Monkeys that write useful C and Python code that needs a bit less revision every time there's a model update.

Can we just give the "stochastic parrot" and "monkeys with typewriters" schtick a rest? It made for novel commentary three or four years ago, but at this point, these posts themselves read like the work of parrots. They are no longer interesting, insightful, or (for that matter) true.

  • visarga 3 days ago

    If you think about it, humans necessarily use abstractions, from the edge detectors in the retina to concepts like democracy. But do we really understand? All abstractions leak, and nobody knows the whole stack. For all the poorly grasped abstractions we are using, we are also just parroting. How many times do we do things because "that is how they are done," never wondering why?

    Take ML itself: people say it's little more than alchemy (stir the pile). Are we just parroting approaches that have worked in practice without real understanding? Is it possible to have centralized understanding, even in principle, or is all understanding distributed among us? My conclusion is that we have a patchwork of partial understanding, stitched together functionally by abstractions. When I go to the doctor, I don't study medicine first; I trust the doctor. Trust takes the place of genuine understanding.

    So humans, like AI, use distributed and functional understanding, we don't have genuine understanding as meant by philosophers like Searle in the Chinese Room. No single neuron in the brain understands anything, but together they do. Similarly, no single human understands genuinely, but society together manages to function. There is no homunculus, no centralized understander anywhere. We humans are also stochastic parrots of abstractions we don't really grok to the full extent.

    • throwaway290 3 days ago

      > My conclusion

      Are you saying you understood something? Was it genuine? Do you think LLM feels the same thing?

    • kaechle 3 days ago

      Great points. We're pattern-matching shortcut machines, without a doubt. In most contexts, not even good ones.

      > When I go to the doctor, I don't study medicine first, I trust the doctor. Trust takes the place of genuine understanding.

      The ultimate abstraction! Trust is highly irrational by definition. But we do it all day every day, lest we be classified as psychologically unfit for society. Which is to say, mental health is predicated on a not-insignificant amount of rationalizations and self-deceptions. Hallucinations, even.

  • kaechle 3 days ago

    Every time I read "stochastic parrot," my always-deterministic human brain surfaces this quote:

    > “Most people are other people. Their thoughts are someone else's opinions, their lives a mimicry, their passions a quotation.”

    - Oscar Wilde, a great ape with a pen

    • OKRainbowKid 3 days ago

      Reading this quote makes me wonder why I should believe that I am somehow special or different, and not just another "other".

      • HeatrayEnjoyer 3 days ago

        That's just it. We're not unique. We've always been animals running on instinct in reaction to our environment. Our instincts are more complex than other animals but they are not special and they are replicable.

  • ffsm8 3 days ago

    > novel commentary three or four years ago,

    ChatGPT was released in November 2022. That's one year and 10 months ago. Their marketing started in the summer of the same year, still far off from 3-4 years.

    • Banou 3 days ago

      But ChatGPT wasn't the first; OpenAI had the coding playground with GPT-2, and you could already code with it even before that, around 2020, so I'd say it has been 3-4 years.

    • killerstorm 3 days ago

      GPT-3 paper announcement got 200 comments on HN back in 2020.

      It doesn't matter when marketing started, people were already discussing it in 2019-2020.

      Stochastic parrot: The term was coined by Emily M. Bender in the 2021 artificial intelligence research paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell.

      • ffsm8 3 days ago

        Your comment is confusing to read. So the term was coined 3 years ago, but it's been 4 years out of date? Seems legit.

        It could be that the term no longer applies, but there is no way you could honestly make that claim pre-GPT-4, and that's not 3-4 years ago.

  • hegFdH 3 days ago

    The infinite monkey post was in response to this claim, which, like the universal approximation theorem, is useless in practice:

    "We have mathematically proven that transformers can solve any problem, provided they are allowed to generate as many intermediate reasoning tokens as needed. Remarkably, constant depth is sufficient."

    Like an LLM, you omit the context and browbeat people with the "truth" you want to propagate. Together with the many politically forbidden terms since 2020, let us now also ban "stochastic parrot" in order to have a goodbellyfeel newspeak.

    • chaosist 3 days ago

      There is also a problem of "stochastic parrot" being constantly used in a pejorative sense, rather than as a neutral term that keeps us grounded and skeptical.

      Of course, it is an overly broad stroke that doesn't quite capture all the nuance of the model, but the alternative of "come on guys, just admit the model is thinking" is much worse and has much less to do with reality.

  • 93po 3 days ago

    AI news article comments bingo card:

    * Tired ClosedAI joke

    * Claiming it's a predictive text engine that isn't useful for anything

    * Safety regulations are either good or bad, depending on who's proposing them

    * Fear mongering about climate impact

    * Bringing up Elon for no reason

    * AI will never be able to [some pretty achievable task]

    * Tired arguments from pro-IP / copyright sympathizers

    • kmeisthax 3 days ago

      > Tired ClosedAI joke

      > Tired arguments from pro-IP / copyright sympathizers

      You forgot "Tired ClosedAI joke from anti-IP / copyleft sympathizers".

      Remember that the training data debate is orthogonal to the broader debate over copyright ownership and scope. The first people to start complaining about stolen training data were the Free Software people, who wanted a legal hook to compel OpenAI and GitHub to publish model weights sourced from GPL code. Freelance artists took that complaint and ran with it. And while this is technically an argument that rests on copyright for legitimacy, the people who actually own most of the copyrights - publishers - are strangely interested in these machines that steal vast amounts of their work.

    • larodi 3 days ago

      Interestingly, there's one missing that seems quite appropriate, unless everyone here is a super-smart, math-professor-level genius:

      These papers become increasingly difficult to properly comprehend.

      …and thus perhaps the plethora of arguably nonsensical follow-ups.

      • CamperBob2 3 days ago

        > These papers become increasingly difficult to properly comprehend.

        Feed it to ChatGPT and ask for an explanation suited to your current level of understanding (5-year-old, high-school, undergrad, comp-sci grad student, and so on.)

        No, really. Try it.
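
        Something along these lines with the OpenAI Python client (the model name and prompt are placeholders; use whatever you have access to):

          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment
          paper = open("paper.txt").read()

          resp = client.chat.completions.create(
              model="gpt-4o",  # placeholder; any capable model works
              messages=[{
                  "role": "user",
                  "content": "Explain this paper at an undergrad level:\n\n" + paper,
              }],
          )
          print(resp.choices[0].message.content)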

    • aurareturn 3 days ago

      >* Claiming it's a predictive text engine that isn't useful for anything

      This one is very common on HN and it's baffling. Even if it's predictive text, who the hell cares if it achieves its goals? If an LLM is actually a bunch of dolphins typing on a keyboard made for dolphins, I couldn't care less, as long as it does what I need it to do. For people who continue to repeat this on HN: why? I just want to know, out of curiosity.

      >* AI will never be able to [some pretty achievable task]

      Also very common on HN.

      You forgot the "AI will never be able to do what a human can do in the exact way a human does it so AI will never achieve x".

      • HarHarVeryFunny 3 days ago

        > Even if it's predictive text, who the hell cares if it achieves its goals?

        Haha ... well in the literal sense it does achieve "its" goals, since it only had one goal which was to minimize its training loss. Mission accomplished!

        OTOH, if you mean achieving the user's goals, then it rather depends on what those goals are. If the goal is to save you typing when coding, even if you need to check it all yourself anyway, then I guess mission accomplished there too!

        Whoopee! AGI done! Thank you, Dolphins!
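
        For the literal-minded, "its" one goal is roughly the average next-token cross-entropy, along the lines of this toy numpy sketch:

          import numpy as np

          def next_token_loss(logits, targets):
              # logits: (seq_len, vocab) scores; targets: (seq_len,) correct next tokens
              probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
              probs /= probs.sum(axis=-1, keepdims=True)
              return -np.mean(np.log(probs[np.arange(len(targets)), targets]))

          logits = np.random.randn(4, 10)          # 4 positions, vocab of 10
          targets = np.array([3, 1, 7, 2])
          print(next_token_loss(logits, targets))  # training just pushes this number down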

      • peterhadlaw 3 days ago

        I think it's less about what it is and more about what it claims to be. "Artificial Intelligence"... It's not. Dolphin keyboard squad (DKS), then sure.

        The "just fancy autocomplete" is in response, but a criticism