Comment by usef-
This isn't about being AGI or not, and it's not "out of thin air".
Modern LLM deployments can "do research" by performing web searches whose results are fed into the context; likewise, many code editors/plugins index the project codebase/docs and feed the relevant parts into the context.
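That "index and feed relevant parts" step can be sketched in a few lines. This is a minimal illustration, not any particular editor's implementation: the naive word-overlap scoring and the document chunks are purely hypothetical stand-ins for a real retrieval pipeline.

```python
def score(query, chunk):
    """Naive relevance: how many query words appear in the chunk."""
    q = set(query.lower().split())
    return len(q & set(chunk.lower().split()))

def build_context(query, chunks, top_k=2):
    """Pick the top_k most relevant chunks and prepend them to the prompt."""
    ranked = sorted(chunks, key=lambda c: score(query, c), reverse=True)
    context = "\n---\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {query}"

# Illustrative "indexed" chunks from a hypothetical project.
chunks = [
    "connect opens a TCP connection to host",
    "README: this project is a chat client",
    "parse_args reads CLI flags",
]
prompt = build_context("how do I connect to a host", chunks)
print(prompt)
```

A real system would use embeddings and a vector store instead of word overlap, but the shape is the same: retrieve, rank, stuff into the context window, then ask the model.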
My guess is they were either using the LLM from a code editor, or one of the many LLMs that do web searches automatically (i.e. all of the popular ones).
They are already answering non-Stack Overflow questions every day.
Yeah, doing web searches could be called research, but that's not what we're talking about. Read the parent of the parent: it's about being able to answer questions that aren't in its training data. People are talking about LLMs making scientific discoveries that humans haven't. A ridiculous take. It's not possible, and with the current state of the tech it never will be. I know what LLMs are trained on; that's not the topic of conversation.