encyclopedism 2 days ago

I have commented elsewhere, but this bears repeating.

If you had enough paper and ink, and the patience to go through it, you could take all the training data and manually step through the same training procedure and produce the same model. Once you had trained the model, you could use even more pen and paper to step through the right prompts and arrive at the same answers. All of this would be a completely mechanical process. That really does bear thinking about. The results LLMs achieve are amazing, but let's not kid ourselves and start throwing around terms like AGI or emergence just yet. Doing so makes a mechanical process seem magical (as computers in general tend to).
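To make the "completely mechanical" point concrete, here is a toy sketch of a single greedy next-token step. Everything here is invented for illustration (a made-up 4-word vocabulary, made-up 2-dimensional weights, nothing resembling a real model): the point is only that each step reduces to multiplication, addition, exponentiation, and a comparison, all of which you could do with pen and paper.

```python
import math

# Hypothetical 4-token vocabulary and made-up weights -- purely illustrative.
vocab = ["the", "cat", "sat", "mat"]

# A made-up 2-dimensional vector summarising the current context,
# and one made-up weight row per vocabulary token.
context = [0.5, -1.0]
weights = {
    "the": [0.1, 0.2],
    "cat": [0.9, -0.3],
    "sat": [0.2, -0.8],
    "mat": [-0.4, 0.5],
}

# Logit for each token: a dot product -- multiply and add, nothing more.
logits = {tok: sum(c * w for c, w in zip(context, row))
          for tok, row in weights.items()}

# Softmax: exponentiate and normalise -- still just arithmetic.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Greedy decoding: pick the highest-probability token by comparison.
next_token = max(probs, key=probs.get)
print(next_token)  # prints "sat" for these made-up numbers
```

Real models differ only in scale (billions of weights, many layers), not in kind: every step is the same sort of deterministic arithmetic, which is exactly why the pen-and-paper thought experiment goes through.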

I should add that it also makes sense why it would work so well: just look at the volume of human knowledge in the training data. It is the training data, carrying quite literally the mass of mankind's knowledge, genius, logic, inference, language, and intellect, that does the heavy lifting.

jeeeb 8 hours ago

Couldn’t you say the exact same about the human mind though?

  • encyclopedism 4 hours ago

    No, you couldn't, because the human mind definitely does NOT work like an LLM. How it does work, however, is an open academic problem. See, for example, the hard problem of consciousness: there are aspects of the brain and mind that we struggle even to define, let alone understand.

    To give a quick example vis-à-vis LLMs: I can reason and understand well enough without having to be 'trained' on nearly the entire corpus of human literature. LLMs, of course, do not reason or understand, and their output is determined by human input. That alone indicates our minds work differently from LLMs.

    I wonder how ChatGPT would fare if it were trained on birdsong and then asked for a rhyming couplet?