Comment by dr_dshiv 4 days ago

> Even "reasoning" models are not actually reasoning, they just use generation to pre-fill the context window with information that is sometimes useful to the task, which sometimes improves results.

I agree that seems weak. What would “actual reasoning” look like for you, out of curiosity?
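For context, the mechanism the quoted line describes is the chain-of-thought pattern: the model generates intermediate tokens that land back in its own context window before it produces a final answer. A toy sketch of that loop (chain_of_thought_answer and toy_model are hypothetical names for illustration, not any real API):

    def chain_of_thought_answer(model, question):
        # "Reasoning" here is ordinary generation: the model is asked to
        # produce intermediate text first, and that text is appended to
        # the context window before the final answer is generated.
        context = question + "\nLet's think step by step.\n"
        thoughts = model(context)            # generated "reasoning" tokens
        context += thoughts + "\nAnswer: "   # pre-fills the context window
        return model(context)                # answer conditioned on its own output

    def toy_model(context):
        # A real model would generate tokens conditioned on `context`;
        # this stub just exists so the sketch runs on its own.
        return "(tokens conditioned on %d chars of context)" % len(context)

    print(chain_of_thought_answer(toy_model, "What is 17 * 24?"))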

Terr_ 4 days ago

Not parent poster, but I'd approach it as:

1. The guess_another_token(document) architecture has been shown not to obey the formal logic we want (see the sketch after this list).

2. There's no particular reason to think such behavior could emerge from it in the future, and anyone claiming so would need extraordinary evidence.

3. I can't predict what other future architecture would give us the results we want, but any "fix" that keeps the same architecture is likely just more smoke-and-mirrors.
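To make concrete what I mean by that architecture, here's a toy sketch of the loop; the uniform-sampling stub is obviously a stand-in, since a real LLM scores each candidate token with a neural network:

    import random

    VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

    def guess_another_token(document):
        # Stand-in "model": a real LLM conditions on `document` and
        # scores every vocabulary token; this stub samples uniformly
        # so the sketch runs on its own.
        return random.choice(VOCAB)

    def generate(document, max_tokens=32):
        # The entire architecture under discussion: append the model's
        # next-token guess to the document, then ask it again.
        for _ in range(max_tokens):
            token = guess_another_token(document)
            if token == "<eos>":
                break
            document += " " + token
        return document

    print(generate("Once upon a time"))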

  • og_kalu 4 days ago

    Seems to fall apart at 1

    >1. The guess_another_token(document) architecture has been shown not to obey the formal logic we want.

    What formal logic of reasoning have humans been verified to obey that LLMs don't?

    • Terr_ 4 days ago

      ... Consider this exchange:

      Alice: "Bob, I know you're very proud of your neural network calculator app, but it keeps occasionally producing false algebra results. There's no reason to think this new architecture will reliably do all the math we need."

      Bob: "How dare you! What algebra have humans been verified to always succeed at which my program doesn't?! Huh!? HUH!?"

      ___________

      Bob's challenge, like yours, is not relevant. The (im)perfection of individual humans doesn't change the fact that the machine we built to do things for us is giving bad results.

      • og_kalu 4 days ago

        It's not irrelevant, because this is an argument about whether the machine can be said to be reasoning or not.

        If Alice had concluded that this occasionally mistaken NN calculator was 'not really performing algebra', then Bob would be well within his rights to ask what on earth she was going on about.

cap11235 4 days ago

It's the same bitching every time an LLM post comes up: IT'S NOT THINKING!!! Then the poster fails to define thinking, or to offer a better word than "thinking" for LLM self-play. I consider these posts on par for quality with "FRIST!!!!!!" posts.

  • nucleogenesis 4 days ago

    Idk, I think saying it's “computing” is more precise, because “thinking” applies to meatbags. It's emulating thinking.

    Really, I just think anthropomorphizing LLMs is a dangerous road in many ways, and it's mostly marketing BS anyway.

    I haven't seen any evidence that LLMs are anything beyond a very sophisticated computer system.

  • cactusplant7374 4 days ago

    Do submarines swim? Thinking is something that doesn’t happen inside a machine. Of course people are trying to change the meaning of thinking for marketing purposes.

    • dgfitz 4 days ago

      Ironically, in the UUV (unmanned underwater vehicle) space, they use the term “flying” when talking about controlling them.