Closi 3 hours ago

Just comes down to your own view of what AGI is, as it's not particularly well defined.

While a bit 'time-machiney' - I think if you took an LLM of today and showed it to someone 20 years ago, most people would probably say AGI has been achieved. If someone had written a definition of AGI 20 years ago, we would probably have met it.

We have certainly blasted past some science-fiction examples of AI, like Agnes from The Twilight Zone, which 20 years ago looked a bit silly and now looks like a remarkable prediction of LLMs.

By today's definition of AGI we haven't met it yet, but eventually it comes down to 'I know it when I see it' - the problem with this definition is that it is polluted by what people have already seen.

nottorp 44 minutes ago

> most people would probably say AGI has been achieved

Most people who took a look at a carefully crafted demo, i.e. the CEOs who keep pouring money down this hole.

If you actually use it, you'll realize it's a tool, and not a particularly dependable one unless you want to code what amounts to the React tutorial.

sixtyj 42 minutes ago

Charles Stross published Accelerando in 2005.

The book is a collection of nine short stories telling the tale of three generations of a family before, during, and after a technological singularity.

bananaflag 2 hours ago

> If someone had written a definition of AGI 20 years ago, we would probably have met it.

No, as long as people can do work that a robot cannot, we don't have AGI. That was always, if not the definition itself, at least implied by it.

I don't know why the meme that AGI is not well defined has had such success over the past few years.

  • Closi 2 hours ago

    Completely disagree - your definition (in my opinion) is more aligned with the concept of Artificial Super Intelligence.

    Surely the 'General Intelligence' definition has to be consistent between 'Artificial General Intelligence' and 'Human General Intelligence', and humans can be generally intelligent even if they can't solve calculus equations or protein-folding problems. My bar for general intelligence is lower than most people's - I think a dog is probably generally intelligent, although in a different way (dogs are obviously better at learning how to run and catch a ball, and worse at programming in Python).

    • fc417fc802 17 minutes ago

      I do consider dogs to have "general intelligence". Despite that, however, I have always (my entire life) considered AGI to imply human-level intelligence. Not better, not worse, just human level.

      It gets worse, though. While one could claim that scoring equivalently on some benchmark indicates performance at the same level - and I'd likely agree - that's not what I take AGI to mean. Rather, I take it to mean "equivalent to a human": if it utterly fails at something we're good at, such as driving a car through a construction zone during rush hour, then I don't consider it to have met the bar of AGI, even if it meets or exceeds us at other, unrelated tasks. You have to be at least as general as a stock human to qualify as AGI in my book.

      Now, I may be but a single data point, but I think there are a lot of people out there who feel similarly. You can see this a lot in popular culture, where AGI (or often just AI) is used to refer to autonomous humanoid robots portrayed as operating at or above a human level.

      Related to all that, since you mention protein folding: I consider that to be a form of super intelligence, as it is more or less inconceivable that an unaided human would ever be able to accomplish such a feat. So I consider AlphaFold to be both super intelligent and decidedly _not_ AGI. Make of that what you will.