Comment by yunwal
Exactly, it’s like getting mad at Isaac Newton because Newton’s method fails to converge on particular functions. The issue is not the algorithm, it’s that your expectations were incorrect. An LLM is not an all-knowing machine, it’s a “what would a human say” estimation function.
Similarly, anyone who claims that LLMs in their current form are going to achieve AGI sounds like Newton bragging that he had solved all of math.
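To make the analogy concrete: Newton’s method really does diverge on some functions with perfectly good roots. Here’s a minimal Python sketch (the choice of f(x) = x^(1/3) and the starting point are just my illustration, not anything from the thread):

```python
# Minimal sketch: Newton's method is a correct algorithm,
# yet it still diverges on some inputs.

def newton(f, df, x0, steps=8):
    """Plain Newton iteration: x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

# f(x) = x^(1/3) has an obvious root at x = 0, but the Newton
# update simplifies to x <- -2x, so every nonzero start diverges.
f = lambda x: (1 if x >= 0 else -1) * abs(x) ** (1 / 3)  # real cube root
df = lambda x: (1 / 3) * abs(x) ** (-2 / 3)

print(newton(f, df, x0=1.0))  # 256.0, nowhere near the root at 0
```

The algorithm did exactly what it promises; the function just sits outside the conditions under which it converges. Same with the LLM: the tool works as designed, and the mismatch is in what the user assumed it was designed to do.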