Comment by dingnuts 4 days ago

26 replies

> Latent reasoning doesn't really appear until around 100B params.

Please provide a citation for wild claims like this. Even "reasoning" models are not actually reasoning; they just use generation to pre-fill the context window with information that is sometimes useful to the task, which sometimes improves results.

I hear random users here talk about "emergent behavior" like "latent reasoning", but never anyone serious talking about it (exception: people who are profiting off the current bubble), so I'd _love_ to see rigorous definitions of these terms and evidence of this behavior, especially from someone who doesn't stand to gain from another cash infusion from SoftBank.

I suspect these things don't exist. At the very most, they're a mirage, and exist in the way a rainbow does. Go on and try to find that pot of gold, eh?

criemen 4 days ago

> Please provide a citation for wild claims like this. Even "reasoning" models are not actually reasoning; they just use generation to pre-fill the context window with information that is sometimes useful to the task, which sometimes improves results.

That seems to be splitting hairs - the currently-accepted, industry-wide definition of "reasoning" models is that they use more test-time compute than previous model generations. Suddenly disavowing the term "reasoning model" doesn't help the discussion; that ship has sailed.

My understanding is that reasoning is an emergent behavior of the reinforcement learning steps in model training, where task performance is rewarded and (with no external prompting!) the model's output starts to include phrases à la "Wait, let me think". Why would "emergent behavior" not be the appropriate term for something that is clearly happening but was not explicitly trained for?
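
A toy illustration of that dynamic, in pure Python - no real model or training stack, and the numbers and the single-probability "policy" are made up purely for illustration. Only the final answer is rewarded, yet the habit of emitting a think-step first gets reinforced anyway, simply because it raises the hit rate:

    import random

    random.seed(0)

    # Toy task: answering directly is right 30% of the time; emitting a
    # think-step first raises that to 80%. Nothing rewards the think-step
    # itself - only the final answer is scored.
    P_CORRECT_DIRECT = 0.3
    P_CORRECT_WITH_THINKING = 0.8

    # The entire "policy" is one number: the probability of prefacing the
    # answer with "Wait, let me think".
    p_think = 0.05
    LEARNING_RATE = 0.02

    for step in range(2000):
        thinks = random.random() < p_think
        p_correct = P_CORRECT_WITH_THINKING if thinks else P_CORRECT_DIRECT
        reward = 1.0 if random.random() < p_correct else 0.0

        # Crude bandit-style update: nudge p_think toward whichever
        # behavior just got rewarded.
        if reward:
            if thinks:
                p_think += LEARNING_RATE * (1 - p_think)
            else:
                p_think -= LEARNING_RATE * p_think
        p_think = min(max(p_think, 0.01), 0.99)

    print(f"P(emit 'Wait, let me think') after training: {p_think:.2f}")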

I have no idea whether the aforementioned 100B-parameter threshold holds true or not, though.

  • xandrius 4 days ago

    Saying that "the ship has sailed" for something that appeared only yesterday and is still more dream than reality is a bit of a stretch.

    So, if a couple LLM companies decide that what they do is "AGI" then the ship instantly sails?

    • noir_lord 4 days ago

      Only matters if they can convince others that what they do is AGI.

      As always ignore the man behind the curtain.

      • jijijijij 4 days ago

        Just like the esoteric appropriation of 'quantum entanglement', right? It's vibe semantics now.

  • habinero 4 days ago

    > currently-accepted industry-wide definition of "reasoning"

    You can't both (1) declare "reasoning" to be something wildly different from what humans mean by reasoning and (2) insist people are wrong when they use the normal definition to say models don't reason. You gotta pick a lane.

    • cowboylowrez 4 days ago

      I don't think it's too problematic. It's hard to say something is "reasoning" without saying what that something is. Plenty of terms adjust their meaning to context: take the word "cache" in "processor cache" - we know what that is because it's in the context of a processor - and then there's "cache me outside", which comes from some TV episode.

      • whatevertrevor 4 days ago

        It's a tough line to tread.

        Arguably, a lot of unending discourse about the "abilities" of these models stems from using ill-defined terms like reasoning and intelligence to describe these systems.

        On the one hand, I see the point that we really struggle to define intelligence, consciousness etc for humans, so it's hard to categorically claim that these models aren't thinking, reasoning or have some sort of intelligence.

        On the other, it's also transparent that a lot of the words are chosen somewhat deliberately to anthropomorphize the capabilities of these systems for pure marketing purposes. So the claimant needs to demonstrate something beyond rebutting with "Well the term is ill-defined, so my claims are valid."

        And I'd even argue the marketers have won overall: by refocusing the conversation on intelligence and reasoning, the more important conversation about the factually verifiable capabilities of the system gets lost in a cycle of circular debate over semantics.

        • cowboylowrez 4 days ago

          Sure, but maybe the terms intelligence and reasoning aren't that bad for describing the human behavior we want these systems to replace or simulate. I'd also argue that while we struggle to define what these terms actually mean, we struggle less to remember what they represent when we use them.

          I'd even argue that it's appropriate to use these terms because machine intelligence kinda sorta looks and acts like human intelligence, and machine reasoning models kinda sorta look like how a human brain reasons about things, or infers the consequences of assertions ("it follows that", etc.).

          Like computer viruses, we call them viruses because they kinda sorta behave like a simplistic idea of how biological viruses work.

          > currently-accepted industry-wide definition of "reasoning"

          The currently-accepted, industry-wide definition of reasoning will probably only apply to whichever industry we're describing, i.e., are we talking about human-built machines, or the biological brain activity we kinda sorta model these machines on?

          Marketing can do what it wants; I have no control over either the behavior of marketers or their effect on their human targets.

    • quinndexter 4 days ago

      Or you could accept that sometimes fields contain terms of art that are non-intuitive to outsiders. Go ask an astronomer what their working definition of a metal is.

      • habinero 3 days ago

        No. This is the equivalent of an astronomer telling a blacksmith they're using the term "metal" incorrectly. Your jargon does not override everyone else's language.

dr_dshiv 4 days ago

> Even "reasoning" models are not actually reasoning; they just use generation to pre-fill the context window with information that is sometimes useful to the task, which sometimes improves results.

I agree that seems weak. What would “actual reasoning” look like for you, out of curiosity?

  • Terr_ 4 days ago

    Not parent poster, but I'd approach it as:

    1. The guess_another_token(document) architecture has been shown not to obey the formal logic we want. (A toy sketch of that loop follows after this list.)

    2. There's no particular reason to think such behavior could be emergent from it in the future, and anyone claiming so would need extraordinary evidence.

    3. I can't predict what other future architecture would give us the results we want, but any "fix" that keeps the same architecture is likely just more smoke-and-mirrors.
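
    For concreteness, here is a toy sketch of the loop I mean in point 1. The guess_another_token stub and its canned tokens are obviously made up for illustration - a real model conditions on the document and samples from a learned distribution over its vocabulary - but the outer loop is the whole story:

        # Toy sketch only: the stub and its canned tokens are invented for
        # illustration; a real LLM replaces guess_another_token with a
        # learned next-token distribution conditioned on `document`.
        _canned = [" Wait,", " let", " me", " think.", " 2", " +", " 2",
                   " is", " 4.", "<eos>"]

        def guess_another_token(document: str) -> str:
            # Stand-in for a real forward pass over `document`.
            return _canned.pop(0) if _canned else "<eos>"

        def generate(document: str, max_tokens: int = 64) -> str:
            # The whole architecture, seen from outside: guess a token,
            # append it to the document, repeat. The "reasoning" tokens and
            # the final answer come out of the exact same loop; no separate
            # logic engine is consulted anywhere.
            for _ in range(max_tokens):
                token = guess_another_token(document)
                if token == "<eos>":
                    break
                document += token
            return document

        print(generate("User: what is 2 + 2?\nAssistant:"))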

    • og_kalu 4 days ago

      Seems to fall apart at 1

      >1. The guess_another_token(document) architecture has been shown not to obey the formal logic we want.

      What formal logic of 'reasoning' have humans been verified to obey that LLMs don't?

      • Terr_ 4 days ago

        ... Consider this exchange:

        Alice: "Bob, I know you're very proud about your neural network calculator app, but it keeps occasionally screwing up with false algebra results. There's no reason to think this new architecture will reliably do all the math we need."

        Bob: "How dare you! What algebra have humans been verified to always succeed-at which my program doesn't?! Huh!? HUH!?"

        ___________

        Bob's challenge, like yours, is not relevant. The (im)perfection of individual humans doesn't change the fact that the machine we built to do things for us is giving bad results.

  • cap11235 4 days ago

    It's the same bitching every time an LLM post comes up: "IT'S NOT THINKING!!!", followed by a failure to define thinking, or to offer a better word than "thinking" for LLM self-play. I consider these posts to be on par, quality-wise, with "FRIST!!!!!!" posts.

    • nucleogenesis 4 days ago

      Idk I think saying it’s “computing” is more precise because “thinking” applies to meatbags. It’s emulating thinking.

      Really, I just think that anthropomorphizing LLMs is a dangerous road in many ways, and it's mostly marketing BS anyway.

      I haven’t seen anything that shows evidence of LLMs being anything beyond a very sophisticated computer system.

    • cactusplant7374 4 days ago

      Do submarines swim? Thinking is something that doesn’t happen inside a machine. Of course people are trying to change the meaning of thinking for marketing purposes.

      • dgfitz 4 days ago

        Ironically, in the UUV space, they use the term “flying” when talking about controlling UUVs.