Comment by roenxi 2 days ago
Has anyone come up with a definition of AGI under which humans are near-universally capable of GI? These articles seem to be slowly pushing the boundary past the point where slower humans are barred from intelligence.
Many years ago I bumped into Towers of Hanoi in a computer game and failed to solve it algorithmically, so I suppose I'm lucky I only work a knowledge job rather than an intelligence-based one.
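(For the curious: the textbook recursive solution is only a few lines. A minimal sketch in Python, with made-up peg labels, not how any particular game presents it:)

    def hanoi(n, src, dst, aux):
        # Move n disks from peg src to peg dst, using aux as the spare.
        if n == 0:
            return
        hanoi(n - 1, src, aux, dst)  # park the n-1 smaller disks on the spare
        print(f"move disk {n}: {src} -> {dst}")
        hanoi(n - 1, aux, dst, src)  # stack them back on top of disk n

    hanoi(3, "A", "C", "B")  # prints the 7 moves for three disks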
People confuse performance with internal representation.
A simple calculator is vastly better at adding numbers than any human. A chess engine will beat any human grandmaster. No one would say that either got us closer to AGI.
We could absolutely see LLMs produce poetry that humans cannot tell apart from human-made poetry, or even prefer to it. We could have LLMs that are perfectly able to convince humans that they have consciousness and emotions.
Would we have achieved AGI then? Does that mean those LLMs have gained consciousness and emotions? No.
The question of consciousness is about what is going on inside, how the reasoning happens, not the output. In fact, the first AGI might perform significantly worse at most tasks than current LLMs.
LLMs are extremely impressive, but they are not thinking. They do not have consciousness. It might be technically impossible for them to develop anything like that, or at least it would require significantly bigger models.
> where slower humans are barred from intelligence
Humans have value for being humans, whether they are slow or fast at thinking, whether they are neurodivergent or neurotypical. We all have feelings, we are all capable of suffering, we are all alive.
See also the problems with AI Welfare research: https://substack.com/home/post/p-165615548