Comment by visarga
> My tolerance for AI style "knowledge" is lower and lower every day.
We like to think humans possess genuine knowledge while AI only learns patterns. But in reality, do we learn medicine before going to the doctor? Or do we engage with the process at an abstract level: "I describe my symptoms, the doctor gives me a diagnosis and a treatment"? I think what we have is leaky abstractions, not genuine knowledge. Even the doctor did not discover all of their knowledge directly; they trust the doctors who came before them.
When using a phone or any complex system, do we genuinely understand it? We don't genuinely understand even code we wrote ourselves; we still discover bugs and edge cases years later. So my point is that we have functional knowledge, leaky abstractions open to revision, not Knowledge.
And LLMs are no different. They just lack our rich, instant feedback loop and our continual learning. But that is a technical detail, not a fundamental problem. When an LLM has an environment that verifies its outputs, the way AlphaProof used Lean, it can rival us and make genuinely new discoveries. It's a matter of search, not of biology. AlphaGo's move 37 is another example.
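A minimal sketch of the kind of feedback such an environment provides (this is plain Lean 4, not AlphaProof's actual setup): the proof checker accepts or rejects every candidate step mechanically, so a model searching over proofs gets a ground-truth signal with no human in the loop.

```lean
-- Lean checks each proof step; a model's candidate proof is
-- either accepted or rejected with an error. That verdict is
-- the feedback signal a search process can optimize against.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b

-- Swapping in a wrong step, e.g. `exact Nat.add_assoc a b`,
-- fails to typecheck, and the rejection tells the searcher
-- that branch of the proof search is a dead end.
```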
But isn't it surprising how much LLMs can do with just text, without any experiences of their own beyond RLHF-style feedback? If language can do so much work on its own, without biology, embodiment, or personal experience, what does that say about us? Are we a kind of embodied VLM?