Comment by talkingtab 2 days ago
The topic is of great interest to me, but the approach throws me off. If we have learned one thing from AI, it is the fundamental difference between knowing about something and being able to do something. [With extreme gentleness, we humans call it hallucination when an AI demonstrates this failing.]
The question I increasingly pose to myself and others is: which kind of knowledge is at hand here? And in particular, can I use it to actually build something?
If one attempted to build a conscious machine, the very first question I would ask is: what does conscious mean? I reason about myself, so that means I am conscious, correct? But that reasoning is not a singularity. It is a fairly large number of neurons collaborating. An interesting question - for another time - is whether a singular entity can in fact be conscious. But we do know that complex adaptive systems can be conscious, because we are one.
So step 1 in building a conscious machine could be to look at some examples of constructed complex adaptive systems. I know of one, which is the RIP routing protocol (now extinct? RIP?). I would bet my _money_ that one could find other examples of artificial CAS pretty easily.
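To make the CAS point concrete: RIP is essentially distance-vector routing, where each node knows only its direct neighbors, repeatedly applies a local Bellman-Ford style update against what those neighbors advertise, and a globally consistent routing table emerges with no central coordinator. Below is a toy sketch of that dynamic, not real RIP (the topology, names, and data structures are made up for illustration):

```python
# Toy distance-vector exchange in the spirit of RIP (illustrative only;
# the topology and names are assumptions, not actual RIP packets).
# Each node applies a purely local update rule, yet global routing emerges.

INF = 16  # RIP treats 16 hops as "unreachable"

# neighbors[node] = {neighbor: link_cost} -- an assumed example topology
neighbors = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 2},
    "C": {"A": 4, "B": 1},
    "D": {"B": 2},
}

# Each node starts out knowing only the distance to itself.
tables = {n: {n: 0} for n in neighbors}

def advertise(node):
    """A node sends its current distance vector to its direct neighbors."""
    return dict(tables[node])

changed = True
while changed:  # iterate until no table changes, i.e. convergence
    changed = False
    for node, links in neighbors.items():
        for nbr, cost in links.items():
            for dest, dist in advertise(nbr).items():
                new = min(cost + dist, INF)
                if new < tables[node].get(dest, INF):
                    tables[node][dest] = new
                    changed = True

for node in sorted(tables):
    print(node, tables[node])  # e.g. A {'A': 0, 'B': 1, 'C': 2, 'D': 3}
```

No node has the global picture, yet every node ends up with correct shortest paths - which is the kind of emergent, system-level behavior I mean by a constructed complex adaptive system.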
[NOTE: My tolerance for AI style "knowledge" is lower and lower every day. I realize that as a result this may come off as snarky, and I apologize. There may be some good ideas for building conscious machines in the article, but I could not find them. I cannot find the answer to a builder's question, "how would I use this" - but perhaps that is just a flaw in me.]
> My tolerance for AI style "knowledge" is lower and lower every day.
We like to think humans possess genuine knowledge while AI only learns patterns. But in reality, do we learn medicine before going to the doctor? Or do we engage with the process in an abstract way: "I tell my symptoms, the doctor gives me a diagnosis and treatment." I think what we have is leaky abstractions, not genuine knowledge. Even the doctor did not discover all of their knowledge directly; they trust the doctors who came before them.
When using a phone or any complex system, do we genuinely understand it? We don't genuinely understand even a piece of code we wrote ourselves; we still find bugs and edge cases years later. So my point is that we have functional knowledge - leaky abstractions open to revision - not Knowledge.
And LLMs are no different. They just lack our rich, instant feedback loop and continual learning. But that is a technical detail, not a fundamental problem. When an LLM has an environment, the way AlphaProof used LEAN, it can rival us and make genuinely new discoveries. It's a matter of search, not of biology. AlphaZero's move 37 is another example.
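The "environment" point boils down to a generate-and-verify loop: a proposer samples candidates, an external checker (a proof assistant, a test suite, a game engine) gives reliable feedback, and only verified results count. This is a purely hypothetical sketch of that loop - nothing here is an actual AlphaProof or Lean API:

```python
# Minimal generate-and-verify search loop (illustrative only; the propose/verify
# functions are stand-ins, not any real model or proof-assistant interface).

import random

def propose(problem, rng):
    """Stand-in for a model sampling a candidate solution."""
    return rng.randint(0, 100)  # e.g. guess an integer square root

def verify(problem, candidate):
    """Stand-in for an external checker (Lean, a compiler, a game engine)."""
    return candidate * candidate == problem  # true only for a correct answer

def search(problem, budget=10_000, seed=0):
    rng = random.Random(seed)
    for _ in range(budget):
        candidate = propose(problem, rng)
        if verify(problem, candidate):
            return candidate  # only verified answers are kept
    return None

print(search(49))  # -> 7: found by search plus verification, not memorization
```

The discovery comes from the search plus the verifier, not from anything the proposer "knew" up front - which is the sense in which it's search, not biology.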
But isn't it surprising how much LLMs can do with just text, without any experiences of their own beyond RLHF-style feedback? If language can do so much work on its own, without biology, embodiment, and personal experience, what does that say about us? Are we a kind of embodied VLM?