Comment by no_wizard 2 days ago
See the edit. It boils down to the ability to generalize: LLMs can't generalize. I'm not the only one who holds this view, either. François Chollet, a former AI researcher at Google, shares it as well.
Are you able to formulate "generalization" in a concrete, objective way that could be tested unambiguously and that a typical human currently achieves? A lot of people would say that LLMs generalize pretty well: they can certainly understand natural-language sequences that are not present in their training data.