Comment by TeMPOraL a day ago

> They aren't references to internal concepts, the model is not aware that it's doing anything so how could it "explain itself"?

I can't believe we're still going over this a few months into 2025. Yes, LLMs model concepts internally; this has been demonstrated empirically many times over the years, including by Anthropic themselves, who have released several papers attesting to it. The latest, from just a week ago, shows they can not only locate specific concepts in specific places in the network (done over a year ago) or in the latent space (that one harks back all the way to word2vec), but can actually trace which concepts are activated as the model processes tokens and how they influence the outcome - and they can even suppress them on demand to see what happens.
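The latent-space part is trivially easy to demonstrate yourself; here's a minimal sketch using gensim and a pretrained GloVe embedding (the specific embedding is my choice for illustration, not taken from any of the papers above):

```python
import gensim.downloader as api

# Pretrained 100-dimensional GloVe vectors (~130 MB download on first use).
vectors = api.load("glove-wiki-gigaword-100")

# "king" - "man" + "woman" lands near "queen": gender is a *direction*
# in the embedding space, i.e. a concept you can point at.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```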

State of the art (as of a week ago) is here: https://www.anthropic.com/news/tracing-thoughts-language-mod... - it's worth a read.
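The "trace and suppress" result also has a cheap, hand-rolled cousin you can try at home: derive a concept direction by contrasting activations, then project it out of a layer's output during generation. Below is a minimal PyTorch/transformers sketch; the model, layer index, and prompt sets are my own illustrative assumptions, and this difference-of-means ablation is far cruder than the attribution-graph method in the paper above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"   # stand-in; any causal LM with exposed hidden states works
LAYER = 6        # which block to probe/ablate; a tunable assumption

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

def mean_hidden(prompts):
    """Average the last-token hidden state after block LAYER."""
    vecs = []
    for p in prompts:
        with torch.no_grad():
            out = model(**tok(p, return_tensors="pt"))
        # hidden_states[0] is the embedding; [LAYER + 1] is block LAYER's output
        vecs.append(out.hidden_states[LAYER + 1][0, -1])
    return torch.stack(vecs).mean(dim=0)

# Contrast prompts that evoke the target concept against neutral ones.
with_concept = mean_hidden(["The Golden Gate Bridge is",
                            "San Francisco's famous bridge is"])
without = mean_hidden(["The weather today is", "My favorite dinner is"])
direction = with_concept - without
direction = direction / direction.norm()

def suppress(module, inputs, output):
    """Project the concept direction out of the block's output."""
    h = output[0]
    h = h - (h @ direction).unsqueeze(-1) * direction
    return (h,) + output[1:]

# Ablate the direction during generation and see what changes.
handle = model.transformer.h[LAYER].register_forward_hook(suppress)
ids = tok("The Golden Gate Bridge is", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20,
                                pad_token_id=tok.eos_token_id)[0]))
handle.remove()
```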

> The words that are coming out of the model are generated to optimize for RLHF and closeness to the training data, that's it!

That "optimize" there is load-bearing, it's only missing "just".

I don't disagree about the lack of rigor in most of the attention-grabbing research in this field - but things aren't as bad as you're making them out to be, and LLMs aren't as unsophisticated as you're implying.

The concepts are there, and they're strongly associated with the corresponding words/token sequences - and while I'd agree the model is not "aware" of the inference step it's currently performing, it does see the results of all prior inference steps. Does that mean current models can "explain themselves" in any meaningful sense? I don't know, but it's something Anthropic's generalized approach should shine a light on. Does it mean LLMs of this kind could, in principle, "explain themselves"? I'd say yes - no worse than we can explain our own thinking, which, incidentally, is itself a post-hoc rationalization of an unseen process.