Comment by voidspark a day ago
You are confusing sentience or consciousness with intelligence.
But that's exactly what these deep neural networks have shown, countless times. LLMs generalize to new data outside their training set. It's called "zero-shot learning": they can solve problems that were not in their training set.
AlphaGo Zero is another example. AlphaGo Zero mastered Go from scratch, beating professional players with moves it was never trained on.
> Another is the fundamental inability to self update
That's an engineering decision, not a fundamental limitation. They could engineer a solution for the model to initiate its own training sequence, if they decide to enable that.
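To make the "engineering decision" point concrete, here is a toy sketch of what a model allowed to initiate its own training sequence could look like. The class, the threshold, and the dict-based "model" are all invented for this illustration; nothing here resembles how production LLMs actually work.

```python
# Toy sketch of a model that "initiates its own training sequence":
# it tracks its recent error rate and retrains itself when accuracy slips.
# Hypothetical illustration only -- not how real LLM training works.

class SelfUpdatingModel:
    def __init__(self, window=20, threshold=0.3):
        self.knowledge = {}    # stand-in for model weights: prompt -> answer
        self.recent = []       # rolling record of right/wrong predictions
        self.window = window
        self.threshold = threshold

    def predict(self, prompt):
        return self.knowledge.get(prompt)

    def observe(self, prompt, truth):
        """See ground truth for a prompt; retrain if error rate is too high."""
        self.recent.append(self.predict(prompt) != truth)
        self.recent = self.recent[-self.window:]
        if sum(self.recent) / len(self.recent) > self.threshold:
            self._retrain([(prompt, truth)])

    def _retrain(self, fresh_examples):
        # The self-initiated "training run": fold fresh data into the model.
        for prompt, answer in fresh_examples:
            self.knowledge[prompt] = answer
        self.recent.clear()
```

Whether anyone *should* enable a loop like this is a separate question, but there's nothing fundamental stopping it.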
>AlphaGo Zero mastered Go from scratch, beating professional players with moves it was never trained on
That's all well and good, but it was tuned with enough parameters to learn via reinforcement learning [0]. I think The Register went further and got a clearer explanation of how it worked [1]:
>During training, it sits on each side of the table: two instances of the same software face off against each other. A match starts with the game's black and white stones scattered on the board, placed following a random set of moves from their starting positions. The two computer players are given the list of moves that led to the positions of the stones on the grid, and then are each told to come up with multiple chains of next moves along with estimates of the probability they will win by following through each chain.
While I also find it interesting that in both of these instances it's all referred to as machine learning, not AI, it's also important to see that even though what AlphaGo Zero did was quite awesome and a step forward in using compute for more complex tasks, it was still seeded with the basics of information - the rules of Go - and simply pattern-matched against itself until it built up enough of a statistical model to determine the best move to make in any given situation during a game.
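The two-copies-of-the-same-software loop The Register describes can be sketched in miniature. This is a hypothetical toy: single-pile Nim (take 1-3 stones, last stone wins) stands in for Go, and tabular Monte Carlo value updates stand in for AlphaGo Zero's neural network and tree search - it only illustrates the self-play structure, not DeepMind's actual method.

```python
# Toy self-play sketch: two instances of the same policy play each other,
# and the shared value table is updated from each game's outcome.
import random

ACTIONS = (1, 2, 3)  # how many stones a player may take per turn

def train(episodes=20000, alpha=0.5, eps=0.2, start=10, seed=0):
    rng = random.Random(seed)
    Q = {}  # Q[(stones_left, action)] -> value for the player about to move
    for _ in range(episodes):
        stones = start
        history = []  # (state, action) pairs, players alternating each ply
        while stones > 0:
            legal = [a for a in ACTIONS if a <= stones]
            if rng.random() < eps:  # explore occasionally
                a = rng.choice(legal)
            else:                   # otherwise both sides play greedily
                a = max(legal, key=lambda m: Q.get((stones, m), 0.0))
            history.append((stones, a))
            stones -= a
        # Whoever moved last took the final stone and wins (+1); walk the
        # game backwards, flipping the sign for the other player's moves.
        reward = 1.0
        for state, action in reversed(history):
            q = Q.get((state, action), 0.0)
            Q[(state, action)] = q + alpha * (reward - q)
            reward = -reward
    return Q

def best_move(Q, stones):
    return max((a for a in ACTIONS if a <= stones),
               key=lambda m: Q.get((stones, m), 0.0))
```

Seeded with only the rules, the table converges toward strong play (e.g. taking the whole pile when three or fewer stones remain) purely from games against itself - which is the point being debated: is that pattern matching or something more?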
Which isn't the same thing as showing generalized reasoning. It could not, then, take this information and apply it to another situation.
They did show that the self-reinforcement techniques worked well, though, and used them for Chess and Shogi to great success as I recall, but that's a validation of the technique, not evidence that it could generalize knowledge.
>That's an engineering decision, not a fundamental limitation
So you're saying that they can't reason independently?
[0]: https://deepmind.google/discover/blog/alphago-zero-starting-...
[1]: https://www.theregister.com/2017/10/18/deepminds_latest_alph...
AlphaGo Zero didn't just pattern match. It invented moves that it had never been shown before. That is generalization, even if it's domain specific. Humans don't apply Go skills to cooking either.
Calling it machine learning and not AI is just semantics.
As for self-updating, I said it's an engineering choice. You keep moving the goalposts.
This comment is such a confusion of ideas it's comical.
I just made up this scenario and these words, so I'm sure it wasn't in the training data.
Kwomps can zark but they can't plimf. Ghirns are a lot like Kwomps, but better zarkers. Plyzers have the skills the Ghirns lack.
Quoning, a type of plimfing, was developed in 3985. Zhuning was developed 100 years earlier.
I have an erork that needs to be plimfed. Choose one group and one method to do it.
> Use Plyzers and do a Quoning procedure on your erork.
If that doesn't count as reasoning or generalization, I don't know what does.
https://chatgpt.com/share/67eee8dd-43a4-8004-afde-2fe3fc9630...
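For what it's worth, the deduction chain here is short enough to write out mechanically. A toy sketch with the made-up terms encoded as data (the words are the comment's inventions; the dict layout and function are mine):

```python
# Encode the made-up facts and mechanically reproduce the deduction chain.
# Purely illustrative -- this is the logic, not a claim about how LLMs work.

GROUP_SKILLS = {
    "Kwomps": {"zark"},            # "Kwomps can zark but they can't plimf"
    "Ghirns": {"zark"},            # "a lot like Kwomps, but better zarkers"
}
ALL_SKILLS = {"zark", "plimf"}
# "Plyzers have the skills the Ghirns lack" -> the complement of Ghirn skills.
GROUP_SKILLS["Plyzers"] = ALL_SKILLS - GROUP_SKILLS["Ghirns"]

METHOD_KIND = {"Quoning": "plimfing", "Zhuning": None}  # only Quoning plimfs

def solve(task="plimf"):
    group = next(g for g, s in GROUP_SKILLS.items() if task in s)
    method = next(m for m, k in METHOD_KIND.items() if k == "plimfing")
    return group, method   # -> use Plyzers, via Quoning
```

The interesting part is that the model produced this same two-step inference from a single prompt, with no prior exposure to the words.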
LOL. I tried this with Grok
https://grok.com/share/bGVnYWN5_1e6f3801-c9fb-4d33-844b-3902...
"Based on the information you provided, it seems you need something capable of plimfing to handle your erork. Kwomps can’t plimf, so they’re out. Ghirns are similar to Kwomps and excel at zarking, but there’s no indication they can plimf either. Plyzers, however, have the skills that Ghirns lack, and since plimfing is a skill Ghirns don’t have, it’s reasonable to infer Plyzers can plimf. As for the method, quoning is explicitly described as a type of plimfing, while zhuning isn’t mentioned as such—only that it was developed earlier.
So, I’d choose the Plyzers to plimf your erork using quoning. They seem capable, and quoning fits the job."
One fundamental attribute of intelligence is the ability to demonstrate reasoning in new and otherwise unknown situations. There is no system that I am currently aware of that works on data it was not trained on.
Another is the fundamental inability to self update on outdated information. An LLM is incapable of doing that, which means it lacks another marker: the ability to respond to changes of context effectively. Ants can do this. LLMs can't.