Comment by perrygeo
> Large language models (LLMs) are powerful but static; they lack mechanisms to adapt their weights in response to new tasks
The learning and inference processes are entirely separate, which is very confusing to people familiar with traditional notions of human intelligence. For humans, learning things and applying that knowledge in the real world is one integrated feedback loop. Not so with LLMs: we train them, deploy them, and discard them for a new model that has "learned" slightly more. For an LLM, inference is the end of learning (see the sketch below).
Probably the biggest misconception out there about AI. If you think LLMs are learning, it's easy to fantasize that AGI is right around the corner.
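A minimal PyTorch sketch of that separation, with a toy linear model standing in for an LLM (the lifecycle is the point here, not the architecture): during training an optimizer step changes the weights; during inference the weights are frozen, and no amount of usage changes them.

```python
import torch
import torch.nn as nn

# Toy model standing in for an LLM.
model = nn.Linear(16, 16)

# --- Training phase: weights change in response to data ---
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x, target = torch.randn(4, 16), torch.randn(4, 16)
loss = nn.functional.mse_loss(model(x), target)
loss.backward()
optimizer.step()  # weights are updated here, and only here

# --- Inference phase: weights are frozen; nothing the model sees changes them ---
model.eval()
with torch.no_grad():                  # no gradients, hence no learning
    before = model.weight.clone()
    _ = model(torch.randn(4, 16))      # serve as many requests as you like...
    assert torch.equal(model.weight, before)  # ...the weights never move
```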
Reinforcement learning can be used to refine LLMs, as DeepSeek has shown.
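For illustration, a minimal REINFORCE-style sketch of RL refinement. DeepSeek's published recipe (GRPO) is more involved than this; the toy policy, vocabulary size, and `reward_fn` below are all hypothetical stand-ins. The point is only the shape of the loop: the model's weights update in response to reward, so learning continues after pretraining.

```python
import torch
import torch.nn as nn

# Toy policy over an 8-token vocabulary standing in for an LLM.
# reward_fn is a hypothetical stand-in for whatever scores an output
# (e.g. a verifiable correctness check).
policy = nn.Linear(16, 8)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reward_fn(token: torch.Tensor) -> torch.Tensor:
    # Pretend token 3 is the "correct answer"; reward 1.0 when sampled.
    return (token == 3).float()

for _ in range(200):
    prompt = torch.randn(1, 16)                   # stand-in for an input prompt
    dist = torch.distributions.Categorical(logits=policy(prompt))
    token = dist.sample()                         # the model "answers"
    reward = reward_fn(token)                     # the answer is scored
    # REINFORCE: raise the log-probability of rewarded outputs.
    loss = (-dist.log_prob(token) * reward).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                              # weights change: learning resumes
```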