Comment by ACCount37
Do you understand just how much copied human behavior goes into that?
An LLM has to predict entire conversations involving dozens of users, where each user has their own distinct behaviors, beliefs, and style. That's exactly the kind of thing pre-training forces it to do.
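To make that concrete, here's a toy sketch of the objective, not any lab's actual training code. The names, the transcript, and the bigram stand-in for the model are all made up for illustration; the point is just that speaker tags are ordinary tokens, so driving the loss down means modeling each speaker separately.

    from collections import Counter, defaultdict
    import math

    # Next-token prediction over a multi-speaker transcript.
    # (Hypothetical transcript; speaker tags are just tokens.)
    transcript = (
        "<alice> i think we should ship friday "
        "<bob> friday is too late we ship wednesday "
        "<alice> i think you always say that "
    )
    tokens = transcript.split()

    # Stand-in "model": bigram counts. A transformer replaces this table
    # with a learned conditional distribution over a long context, but
    # the training signal is the same cross-entropy term.
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1

    def prob(prev, nxt):
        total = sum(counts[prev].values())
        return counts[prev][nxt] / total

    # Average negative log-likelihood of the transcript: the quantity
    # pre-training minimizes. Predicting it well requires tracking who
    # is speaking and what that speaker tends to say.
    nll = -sum(math.log(prob(p, n))
               for p, n in zip(tokens, tokens[1:])) / (len(tokens) - 1)
    print(f"avg next-token NLL: {nll:.3f}")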
LLMs aren't actually able to do that, though, are they? They're simply incapable of keeping track of consistent behaviors and beliefs across a conversation. I recognize that certain prompts require an LLM to attempt it. But as long as we're using transformers, it'll never actually work.