Comment by ben_w 2 months ago

1. There's a lot of superhuman AI out there already, just in narrow domains like protein folding, chess, and so on.

2. Doctorate-level (and postdoc and professorial level) responses were already in the training sets.

3. The aim of current AI research is to create a model of the underlying reality that produces the observed signals: what must a PhD candidate have observed in order to write that particular paper, and so on.

I have no idea if these AI efforts will succeed or not, hence no idea where we are on the S-curve. But that's the goal.

DiscourseFan 2 months ago

1. Protein folding is not AGI, and it's a technology (much like, say, telecommunications) that no human on their own would be able to perform, or even be expected to.

3. How can an AI (in a general sense) create a model of underlying reality if the humans who create it do not have access to underlying reality, but only to the forms of its appearance?

  • ben_w 2 months ago

    1. I didn't say AGI, and neither did the person I replied to.

    They said AI, I said AI.

    But protein folding is also very much a thing humans did by playing games. Literally: https://en.wikipedia.org/wiki/Foldit

    And Transformers are able to learn to use tools, so even a mediocre one can write a call to a specialist AI and then invoke it (rough sketch of the pattern at the end of this comment).

    3. To reach the quality level I specified, that of "You need a PhD to compete with this", it is only necessary to program the AI to be as capable of learning as a PhD student.

    This is the standard AI researchers are aiming for, and what I was writing of.

    Can they do that? Dunno, and I hope not. But I've bought some tech shares just in case they actually can, because I doubt I'll be able to keep up if that's the near future.
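    A rough sketch of the tool-use pattern from point 1, with purely hypothetical names (the CALL syntax and fold_protein stub are made up for the example, not a real API): the general model emits a structured call, and a small dispatcher routes it to a specialist model.

        import re

        def fold_protein(sequence: str) -> str:
            # Stand-in for a specialist model (e.g. a structure predictor); hypothetical.
            return f"<predicted structure for {len(sequence)}-residue sequence>"

        TOOLS = {"fold_protein": fold_protein}

        def dispatch(model_output: str) -> str:
            # Assume the general model emits calls like: CALL fold_protein("MKTAYIAK")
            m = re.match(r'CALL (\w+)\("([^"]*)"\)', model_output.strip())
            if m is None:
                return model_output  # ordinary text, no tool requested
            name, arg = m.groups()
            return TOOLS[name](arg) if name in TOOLS else f"unknown tool: {name}"

        print(dispatch('CALL fold_protein("MKTAYIAKQR")'))

    In practice the specialist would be a real structure predictor rather than this stub, but the routing idea is the same: the general model only has to learn when and how to hand work off.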