Comment by a_victorp a day ago

> Human brains lack any model of intelligence. It's just neurons firing in complicated patterns in response to inputs based on what statistically leads to reproductive success

The fact that you can reason about intelligence is a counterargument to this

btilly a day ago

> The fact that you can reason about intelligence is a counterargument to this

The fact that we can provide a chain of reasoning, and can think that it is about intelligence, doesn't mean that we were actually reasoning about intelligence. This is immediately obvious when we encounter people whose conclusions are being thrown off by well-known cognitive biases, like cognitive dissonance. They have no trouble producing volumes of text about how they came to their conclusions and why they are right, but they are consistently unable to notice the actual biases at play.

  • Workaccount2 a day ago

    Humans think they can produce a chain of reasoning, but it has been shown many times (and is self-evident if you pay attention) that your brain makes decisions before you are aware of it.

    If I ask you to think of a movie (go ahead, think of one)... whatever movie just came into your mind was not picked by you; it was served up to you from an abyss.

    • zja a day ago

      How is that in conflict with the fact that humans can introspect?

      • vidarh 18 hours ago

        Split-brain experiments show that human "introspection" is fundamentally unreliable. The brain is trivially coaxed into explaining how it made decisions it did not make.

        We're doing the equivalent of what LLMs do: making up a plausible explanation for how we came to a conclusion, not reflecting reality.

        • btilly 11 hours ago

          Ah yes. See https://en.wikipedia.org/wiki/Left-brain_interpreter for more about this.

          As one neurologist put it, listening to people's explanations of how they think is entertaining, but not very informative. Virtually none of what people describe correlates in any way with what we actually know about how the brain is organized.

awongh a day ago

The ol' "I know it when I see that it thinks like me" argument.

immibis a day ago

It seems like LLMs can also reason about intelligence. Does that make them intelligent?

We don't know what intelligence is, or isn't.

  • syndeo a day ago

    It's fascinating how this discussion about intelligence bumps up against the limits of text itself. We're here, reasoning and reflecting on what makes us capable of this conversation. Yet, the very structure of our arguments, the way we question definitions or assert self-awareness, mirrors patterns that LLMs are becoming increasingly adept at replicating. How confidently can we, reading these words onscreen, distinguish genuine introspection from a sophisticated echo?

    Case in point… I didn't write that paragraph by myself.

    • Nevermark a day ago

      So you got help from a natural intelligence? No fair. (natdeo?)

      Someone needs to create a clone of HN, same format and posts, but whose rules permit only synthetic-intelligence comments. All models would be pre-prompted to read prolifically, but comment and up/down-vote carefully and sparingly, to optimize the quality of discussion.

      And no looking at nat-HN comments.

      It would be very interesting to compare discussions between the two sites. A graph of human lurkers per day over time would also be of interest.

      Side thought: Has anyone created a Reverse-Captcha yet?

      • wyre a day ago

        This is an entertaining idea. User prompts could synthesize a user's domain knowledge, whether they're an entrepreneur, code dev, engineer, hacker, designer, etc., and different users could be backed by different LLMs.

        I think the site would clone the upvotes of articles and the ordering of the front page, and give directions on when to comment on others' posts.

    • throwanem a day ago

      Mistaking model for meaning is the sort of mistake I very rarely see a human make, at least in the sense, as here, of literally referring to the map ("text") in what ostensibly strives to be a discussion of the presence or absence of underlying territory, a concept the model gives no sign of attempting to invoke or manipulate. It's also a behavior I would expect from something capable of producing valid utterances but not of testing their soundness.

      I'm glad you didn't write that paragraph by yourself; I would be concerned on your behalf if you had.

      • fc417fc802 a day ago

        "Concerned on your behalf" seems a bit of an overstatement. Getting caught up on textual representation and failing to notice that the issue is fundamental and generalizes is indeed an error but it's not at all uncharacteristic of even fairly intelligent humans.

        • throwanem a day ago

          All else equal, I wouldn't find it cause for concern. In a discussion where being able to keep the distinction clear in mind at all times absolutely is table stakes, though? I could be fairly blamed for a sprinkle of hyperbole perhaps, but surely you see how an error that is trivial in many contexts would prove so uncommonly severe a flaw in this one, alongside which I reiterate the unusually obtuse nature of the error in this example.

          (For those no longer able to follow complex English grammar: Yeah, I exaggerate, but there is no point trying to participate in this kind of discussion if that's the sort of basic error one has to start from, and the especially weird nature of this example of the mistake also points to LLMs synthesizing the result of consciousness rather than experiencing it.)

mitthrowaway2 a day ago

No offense to johnecheck, but I'd expect an LLM to be able to raise the same counterargument.