Comment by themafia
> and rewording it
Using the probabilities encoded in the training data.
> In that sense they are not compressing the data
You're right. In this case they're decompressing it.
It feels like you're being pedantic to defend your original claim, which was inaccurate.
The LLM here is only "using the probabilities encoded in the training data" to know that after "Yes, it does" it should output the token "!". However, it is not "decompressing" its "training data" to write that output. It is just getting it from the data provided at run time in the prompt, not from the training data.
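To make the distinction concrete, here is a minimal sketch, assuming the Hugging Face `transformers` library and GPT-2 (neither of which anyone in the thread specified): the weights learned from training data only *score* possible next tokens; the content being continued comes from the prompt supplied at run time.

```python
# Hypothetical illustration, not anyone's actual setup: inspect the
# next-token distribution a causal LM assigns after a run-time prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Yes, it does"  # run-time data in the prompt, not training data
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: [batch, seq_len, vocab]

# Probabilities for the single token that would follow the prompt;
# these are the "probabilities encoded in the training data" at work.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)
for prob, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}  p={prob:.3f}")
```

The model's parameters determine which continuation is likely, but everything it is continuing sits in `prompt`, which never appeared in training.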