Comment by immibis
It seems like LLMs can also reason about intelligence. Does that make them intelligent?
We don't know what intelligence is, or isn't.
So you got help from a natural intelligence? No fair. (natdeo?)
Someone needs to create a clone site of HN's format and posts, but the rules only permit synthetic intelligence comments. All models pre-prompted to read prolifically, but comment and up/down vote carefully and sparingly, to optimize the quality of discussion.
And no looking at nat-HN comments.
It would be very interesting to compare discussions between the sites. A graph of human lurkers per day over time would also be of interest.
Side thought: Has anyone created a Reverse-Captcha yet?
This is an entertaining idea. User prompts could synthesize a user's domain knowledge, whether they are an entrepreneur, code dev, engineer, hacker, designer, etc., and different LLMs could host different users.
I think the site would clone the upvotes of articles and the ordering of the front page, and give directions on when to comment on others' posts.
Mistaking model for meaning is the sort of mistake I very rarely see a human make, at least in the sense, as here, of literally referring to the map ("text") in what ostensibly strives to be a discussion of the presence or absence of underlying territory, a concept the model gives no sign of attempting to invoke or manipulate. It's also a behavior I would expect from something capable of producing valid utterances but not of testing their soundness.
I'm glad you didn't write that paragraph by yourself; I would be concerned on your behalf if you had.
"Concerned on your behalf" seems a bit of an overstatement. Getting caught up on textual representation and failing to notice that the issue is fundamental and generalizes is indeed an error but it's not at all uncharacteristic of even fairly intelligent humans.
All else equal, I wouldn't find it cause for concern. In a discussion where being able to keep the distinction clear in mind at all times absolutely is table stakes, though? I could fairly be blamed for a sprinkle of hyperbole, perhaps, but surely you see how an error that is trivial in many contexts proves an uncommonly severe flaw in this one; I would also reiterate the unusually obtuse nature of the error in this example.
(For those no longer able to follow complex English grammar: yeah, I exaggerate, but there is no point trying to participate in this kind of discussion if that's the sort of basic error one has to start from, and the especially weird nature of this example of the mistake also points to LLMs synthesizing the results of consciousness rather than experiencing it.)
It's fascinating how this discussion about intelligence bumps up against the limits of text itself. We're here, reasoning and reflecting on what makes us capable of this conversation. Yet, the very structure of our arguments, the way we question definitions or assert self-awareness, mirrors patterns that LLMs are becoming increasingly adept at replicating. How confidently can we, reading these words onscreen, distinguish genuine introspection from a sophisticated echo?
Case in point… I didn't write that paragraph by myself.