Comment by throawayonthe 2 days ago
i don't use LLMs, but i've heard people complain current LLMs are not good at writing Rust
We should make calculators like this for kids to learn on: every so often it makes a mistake that you'll only spot if you could have done the arithmetic yourself and are just saving time. That's where AI code is right now.
This is exactly why I don't trust LLMs (and therefore why I don't use them). When dealing with something I know about, I can see the many mistakes they make - I would have to be a complete fool to trust them to do better on subjects I don't know about.
Try agents like Claude Code. In my experience the initial code was conceptually correct but had some type errors on the first pass. It then iterated on the compile errors about six times, tweaking the code to resolve them, and then it compiled and ran correctly.
This was about 500 lines of working Rust in about 10 minutes, approximately 25x my pace at writing Rust. (I'm a bit of a beginner.)
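Roughly what that loop looks like if you scripted it yourself, as a minimal sketch: ask_model_to_fix is a hypothetical placeholder for whatever call actually rewrites the source; only the cargo invocation is real.

```rust
// Sketch of the "iterate on compile errors" loop described above.
use std::process::Command;

fn ask_model_to_fix(_compiler_output: &str) {
    // Hypothetical: send the compiler output back to the model and apply
    // whatever patch it returns. Not implemented here.
}

fn main() {
    const MAX_ATTEMPTS: usize = 6; // roughly the number of passes reported above

    for attempt in 1..=MAX_ATTEMPTS {
        // Run the compiler and capture its diagnostics.
        let output = Command::new("cargo")
            .arg("build")
            .output()
            .expect("failed to run cargo");

        if output.status.success() {
            println!("compiled cleanly after {attempt} attempt(s)");
            return;
        }

        // Feed the errors back and let the model tweak the code.
        let stderr = String::from_utf8_lossy(&output.stderr);
        ask_model_to_fix(&stderr);
    }

    eprintln!("still failing after {MAX_ATTEMPTS} attempts");
}
```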
That narrative is still popular with LLMs themselves. If you ask an LLM whether it can code Rust, it will tell you that it can, but not very well.
They're good at web languages, Python, and C/C++. As far as I can tell, Rust works if you're already good at Rust and can catch its screwups and strange architecture choices quickly.
Current LLMs are not good at writing any language you actually understand, unless you do so much of the work that you might as well have written the whole program yourself.
They're excellent at doing things I'm not an expert at, though! https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect