Comment by 0xblacklight 3 days ago

This is an excellent point - LLMs are autoregressive next-token predictors, and output token quality is a function of input token quality.

Consider that if the only code you get out of the autoregressive token prediction machine is slop, this indicates more about the quality of your code than about the quality of the autoregressive token prediction machine.
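To make the "autoregressive" part concrete, here's a minimal toy sketch (not a real LLM - the bigram table is a made-up stand-in for a learned model): generation extends the context one token at a time, and every new token is conditioned on the tokens already there, which is why the input context shapes the output so directly.

```python
import random

# Hypothetical bigram table standing in for a learned next-token distribution.
BIGRAMS = {
    "the": ["code", "model"],
    "code": ["is", "runs"],
    "model": ["predicts"],
    "is": ["clean"],
    "runs": ["fast"],
    "predicts": ["tokens"],
}

def generate(context, max_new_tokens=5, seed=0):
    """Autoregressively extend the context one token at a time."""
    rng = random.Random(seed)
    tokens = context.split()
    for _ in range(max_new_tokens):
        candidates = BIGRAMS.get(tokens[-1])
        if not candidates:
            break  # no known continuation for the last token
        tokens.append(rng.choice(candidates))  # condition on prior output
    return " ".join(tokens)
```

Feed this toy model a context it has no good continuations for and generation stalls immediately - the same shape as the "quality in, quality out" argument above, just at a vastly smaller scale.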

acedTrex 2 days ago

> that this indicates more about the quality of your code

Considering that the "input" to these models is essentially all public code in existence, the direct context input is a drop in the bucket.