Comment by aeve890
>The ways LLMs fail (and the techniques you have to use to account for that) have more in common with physical engineering disciplines than software engineering does!
Ah yes, the God-given free parameters in the Standard Model, which obviously include the random seed of a transformer. What if you just set the inference temperature to 0? The randomness in LLMs is a technical choice, introduced to generate variation in the selection of the next token. Physical engineering? Come on.
>just set temp to 0 to make LLMs deterministic
Does that really work? And is it affected by the near-continuous silent model updates? GPT-5 also has a "hidden" system prompt, even through the API, which seems to have undergone several changes since launch...
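For what it's worth, the mechanics are simple: temperature divides the logits before the softmax, and as it goes to 0 the distribution collapses onto the highest-logit token, i.e. greedy argmax. Here's a minimal toy sketch (hand-rolled sampler over a made-up logits list, not any real model's API); note that even with a deterministic argmax per forward pass, the logits themselves can still wobble across runs due to non-deterministic batched GPU kernels, which is part of why "temp 0" doesn't guarantee byte-identical outputs in practice:

```python
import math
import random

def sample_next_token(logits, temperature):
    """Pick a next-token index from raw logits at a given temperature."""
    # Temperature 0 degenerates to greedy decoding: always take the argmax.
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise: softmax over temperature-scaled logits, then sample.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.5]
print(sample_next_token(logits, 0))    # greedy: always index 0
print(sample_next_token(logits, 1.0))  # stochastic: any index, weighted
```

So "set temp to 0" removes the sampling randomness, but not the other sources of drift (model updates, hidden prompts, kernel non-determinism) mentioned above.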