Comment by ipnon
I really struggle to feel the AGI when I read such things. I understand this is all of a year old, and that we have superhuman results in mathematics, basic science, game playing, and other well-defined fields. But why is it so difficult, if not impossible, for LLMs to intuit and deeply comprehend what we are trying to coax from them?
> But why is it so difficult, if not impossible, for LLMs to intuit and deeply comprehend what we are trying to coax from them?
It's right there in the name. Large language models model language and predict tokens. They are not trained to deeply comprehend, because we don't really know how to train for that.