discreteevent a day ago

The parent is saying that when higher-level languages replaced assembly, you only had to learn the higher-level language. Once you learned it, the machine did precisely what you specified, and you did not have to inspect the generated assembly to make sure it was compliant. Furthermore, you were forced to be precise and to understand what you were doing when you wrote in the higher-level language.

Now you don't really have to be precise at any level to get something 'working'. You may not be familiar with the generated language or libraries, but the output could look good enough (just as the assembly would have looked good enough). So, sure, if you are very familiar with the generated language and libraries, and you inspect every line of generated code, then maybe you will be OK. But often the reason you are using an LLM is that, e.g., you don't understand bash or use it frequently enough to get it to do what you want. Well, the LLM doesn't understand it either. So that weird bash construct it emitted - did you read the documentation for it? You might have if you had to write it yourself.

In the end there could be code in there that nothing (machine or human) understands. The less hard-won experience you have with the target and the more time-pressed you are the more likely it is that this will occur.

ben-schaaf 16 hours ago

Exactly. If LLMs were like higher-level languages, you'd be committing the prompt. LLMs are actually like auto-complete, snippets, Stack Overflow, and Rosetta Code. It's not a higher level of abstraction; it's a tool for writing code.

rozap a day ago

i'm just vibing though, maybe i merge, maybe i don't, based on the vibes

  • jaggederest a day ago

    That sounds like a lot of work, better ask a LLM whether to merge.

    • genewitch a day ago

      It does the PR for me, too

      • frollogaston 15 hours ago

And argues on your behalf in the PR. "You're absolutely right, this should not be merged."

jll29 a day ago

Yes.

The output of the LLM is determined by the weights (the parameters of the artificial neural network) estimated during training, as well as by a pseudo-random number generator used for sampling (unless its influence, controlled by the "temperature" parameter, is set to 0).

That means LLMs behave as "processes" rather than algorithms, unlike any code that may be generated by them, which is algorithmic (unless instructed otherwise; you could also tell an LLM to generate an LLM).
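To make the temperature point concrete, here is a minimal sketch of temperature-scaled sampling over a model's raw output scores (logits). This is an illustration of the general mechanism, not any particular model's API: at temperature 0 decoding degenerates to a deterministic argmax, while at higher temperatures the same logits yield a random draw.

```python
import math
import random

def sample_token(logits, temperature, rng=None):
    """Pick a token index from raw scores (logits).

    temperature == 0: greedy argmax, fully deterministic.
    temperature > 0:  sample from the temperature-scaled softmax.
    Sketch only; real decoders add top-k/top-p filtering etc.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                             # inverse-CDF sampling
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]
sample_token(logits, 0)    # always index 0: greedy decoding behaves like an algorithm
```

With temperature 0 every run returns the same token, which is why "temperature = 0" is the usual way to make an LLM (approximately) repeatable; any positive temperature reintroduces the pseudo-random draw.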

pjmlp 19 hours ago

The code that the compiler generates, especially in the C realm, or with dynamic compilers, is also not regular; hence the tooling constraints in high-integrity computing environments.