Comment by narrator 2 days ago

That lesson plan is a good practical start, but I think it misses the very big picture of what we've created and the awesomeness of it.

The simplest explanation I can give is that we have a machine: you feed it some text from the internet and you turn the crank. Most machines we had previously would stop getting better at predicting the next word after a few thousand cranks. You can turn the crank on an LLM 10^20 times and it will still get smarter. It gets so smart so quickly that no human can hold in their mind the full complexity of what it has built inside itself, except through indirect methods; but we know it is getting smarter through benchmarks, and through some reasonably simple proofs that it can simulate any electronic circuit. We only understand how it works at the smallest increment of its intelligence improvement, and by induction we understand that this should lead to further improvements in its intelligence.
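In toy PyTorch terms (a sketch only: the sizes are made up and random token ids stand in for internet text), each "crank" is one gradient step on next-word prediction:

    import torch
    import torch.nn as nn

    # Toy sizes, chosen only for illustration -- nothing like a real LLM.
    vocab_size, d_model, context = 256, 64, 32

    # Stand-in for a transformer language model (a real one would also use a
    # causal mask so each position can only see earlier tokens).
    model = nn.Sequential(
        nn.Embedding(vocab_size, d_model),
        nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        ),
        nn.Linear(d_model, vocab_size),
    )
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

    def crank(tokens):
        # One turn of the crank: predict each next token, measure the error,
        # and nudge every weight a little downhill on that error.
        inputs, targets = tokens[:, :-1], tokens[:, 1:]
        logits = model(inputs)                          # (batch, seq, vocab)
        loss = nn.functional.cross_entropy(
            logits.reshape(-1, vocab_size), targets.reshape(-1)
        )
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # "Text from the internet" -- here just random token ids standing in for it.
    batch = torch.randint(0, vocab_size, (8, context + 1))
    for step in range(100):
        print(step, crank(batch))

The claim is that, unlike earlier models, the prediction error here keeps dropping as you keep calling crank() on more data.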

mystraline 21 hours ago

> You can turn the crank on an LLM 10^20 times and it will still get smarter.

No, it won't.

Training/learning is separate from executing the model. It takes megadollars to train and kilodollars to run effectively.

It's basically a really complicated PID loop. You can test and 'learn' a three-neuron function, and then you can put it into execution; you can't do both at once.

Sure, there's context length and fine-tuning to slightly alter a model.

But there's no adaptive, self-growing LLM, and there probably won't be for a long time.
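A toy sketch of that separation (assuming PyTorch; the linear model and random data are just placeholders): the weights only move inside an explicit training step, and at execution time gradients are off, so nothing the model sees changes it.

    import torch
    import torch.nn as nn

    model = nn.Linear(16, 4)                      # placeholder for a trained network
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    # --- training phase (the megadollar part) ---
    model.train()
    x, y = torch.randn(32, 16), torch.randint(0, 4, (32,))
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()                                    # weights change here, and only here

    # --- execution phase (the kilodollar part) ---
    model.eval()
    with torch.no_grad():                         # gradients off: nothing it sees updates it
        prediction = model(torch.randn(1, 16)).argmax(dim=-1)

    # To adapt the model again you go back to a separate training run
    # (or a fine-tune), not anything that happens during execution.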

mmooss a day ago

> awesomeness

They should learn to think for themselves about the whole picture, not learn about 'awesomeness'.