Comment by dTal a day ago
>The fact that it was ever seriously entertained that a "chain of thought" was giving some kind of insight into the internal processes of an LLM
Was it ever seriously entertained? I thought the point was not to reveal a chain of thought, but to produce one. A single token's inference must happen in constant time. But an arbitrarily long chain of tokens can encode an arbitrarily complex chain of reasoning. An LLM is essentially a finite state machine that operates on vibes - by giving it infinite tape, you get a vibey Turing machine.
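That "constant work per token, unbounded work per chain" point can be sketched with a toy (this is an analogy, not an LLM: `step` and `run` are made-up names, and real transformer inference is not literally constant time since attention grows with context):

```python
# Toy illustration: each "inference step" does a fixed, constant amount of
# work, but feeding its output back onto the tape lets the chain as a whole
# perform an arbitrarily long computation -- here, grade-school addition
# carried out one digit per emitted "token".

def step(tape):
    """One constant-time 'token inference': read the current digit pair
    and carry from the tape, emit one result digit plus the new carry."""
    a, b, i, carry, out = tape
    if i < 0:
        return None  # nothing left to add: halt
    s = int(a[i]) + int(b[i]) + carry
    return (a, b, i - 1, s // 10, str(s % 10) + out)

def run(a, b):
    """Autoregressive loop: each step's output becomes the next step's input."""
    n = max(len(a), len(b))
    tape = (a.zfill(n), b.zfill(n), n - 1, 0, "")
    while True:
        nxt = step(tape)
        if nxt is None:
            break
        tape = nxt
    carry, out = tape[3], tape[4]
    return (str(carry) if carry else "") + out

print(run("9999", "1"))  # 10000
```

No single call to `step` can add two n-digit numbers, but n+1 chained calls can, for any n: the intelligence is in the tape, not in any one step.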
> Was it ever seriously entertained?
Yes! By Anthropic! Just a few months ago!
https://www.anthropic.com/research/alignment-faking