wizzwizz4 2 days ago

Current LLMs are not good at writing any language you actually understand, unless you do so much of the work that you might as well have written the whole program yourself.

They're excellent at doing things I'm not an expert at, though! https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect

  • galangalalgol 2 days ago

    We should make calculators like this for kids to learn on. Every so often it makes mistakes that you will only spot if you could have done the arithmetic yourself and are just saving time. That is where AI code is at right now.

  • bigstrat2003 2 days ago

    This is exactly why I don't trust LLMs (and therefore why I don't use them). When dealing with something I know about, I can see the many mistakes they make; I would have to be a complete fool to trust them to do better on subjects I don't know about.

m00dy 2 days ago

Yeah, that narrative was popular last year. You can't go wrong with LLMs on Rust.

  • morcus 2 days ago

    Maybe I'm doing it wrong (using a variety of models on GitHub Copilot), but on complex tasks I often find that they give me code that doesn't quite compile, often due to lifetime errors and sometimes other issues.
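
    For example, a sketch of the kind of lifetime mistake I mean (a made-up illustration, not output from an actual session):

      // The elided return lifetime is tied to `text`, but the returned
      // reference actually borrows from the local `owned`, which is
      // dropped at the end of the function.
      fn longest_line(text: &str) -> &str {
          let owned = text.to_uppercase();
          owned.lines().max_by_key(|l| l.len()).unwrap()
          // error[E0515]: cannot return value referencing local variable `owned`
      }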

    • _alternator_ 2 days ago

      Try agents like Claude Code. My experience was that the initial code was conceptually correct, with some type errors on the first pass. It then iterated on compile errors about six times, tweaking the code to resolve the issues. Then it compiled and ran correctly.

      This was about 500 lines of working Rust in about 10 minutes, approximately 25x my pace at writing Rust. (I’m a bit of a beginner.)

  • pessimizer 2 days ago

    That narrative is still popular with LLMs themselves. If you ask an LLM whether it can write Rust, it will tell you that it can, but not very well.

    They're good at web languages, Python, and C/C++. As far as I can tell, Rust works if you're already good at Rust and can catch its screwups and strange architecture choices quickly.