Comment by drrotmos 2 days ago

7 replies

In my experience, AI and Rust are a mixed bag. The strong compile-time checks mean an agent can verify its work to a much larger extent than in many other languages, but its understanding of lifetimes is somewhat weak (although better in Opus 4.5 than in earlier models!), and the ecosystem moves fast and fairly often makes breaking changes, meaning that a lot of the training data is obsolete.
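A minimal illustration (my own sketch, not from any particular model transcript) of the kind of lifetime annotation models tend to fumble: a function returning one of two borrowed inputs needs an explicit lifetime tying the output to both, and a common failure mode is tying it to only one input or omitting it entirely.

```rust
// The return value borrows from either input, so a single lifetime
// parameter must constrain both arguments and the result.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    let a = String::from("hello");
    let result;
    {
        let b = String::from("hi");
        // Convert to an owned String before `b` is dropped, since the
        // borrow returned by `longest` cannot outlive `b`.
        result = longest(&a, &b).to_owned();
    }
    println!("{result}"); // prints "hello"
}
```

The upside, as noted above, is that when a model gets this wrong, the borrow checker rejects it outright rather than letting it fail at runtime.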

antonvs 2 days ago

The weakness goes beyond lifetimes. In Rust programs with non-trivial type schemas, it can really struggle to get the types right. You see something similar with Haskell. Basically, proving non-trivial correctness properties globally is more difficult than just making a program work.
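As one illustrative example of a "non-trivial type schema" (my sketch, not something from the thread), consider a typestate pattern, where the types encode a global correctness property: `send` can only be called on a connection that has been opened. Satisfying such a schema requires reasoning about types across the whole program, not just making each line compile locally.

```rust
use std::marker::PhantomData;

// Zero-sized marker types encoding the connection state.
struct Open;
struct Closed;

// The state lives only in the type parameter; there is no runtime flag.
struct Conn<State> {
    _state: PhantomData<State>,
}

impl Conn<Closed> {
    fn new() -> Self {
        Conn { _state: PhantomData }
    }

    // Consumes the closed connection and yields an open one.
    fn open(self) -> Conn<Open> {
        Conn { _state: PhantomData }
    }
}

impl Conn<Open> {
    // Only available on Conn<Open>; calling it on Conn<Closed>
    // is a compile error, not a runtime one.
    fn send(&self, msg: &str) -> usize {
        msg.len()
    }
}

fn main() {
    let conn = Conn::new().open();
    println!("{}", conn.send("ping")); // prints 4
    // Conn::new().send("ping");  // would not compile: no `send` on Conn<Closed>
}
```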

  • drrotmos 2 days ago

    True. I do, however, like the process of working with an AI more in a language like Rust. It's a lot less prone to use ugly hacks to make something that compiles but fails spectacularly at runtime - usually because it can't get the ugly hacks to compile :D

    Makes it easier to intercede to steer the AI in the right direction.

  • jmalicki 2 days ago

    How is this an issue specifically with Rust and Haskell? Do you find that LLMs have an easier time proving global correctness with C, Python, or TypeScript?

    • antonvs a day ago

      Yes, because those other languages all have much weaker type systems.

      • jmalicki 16 hours ago

        Do you have examples of LLMs proving global correctness for say, C? Having worked on static analysis for both C and Rust, Rust is the easier problem because of the type system, but I am eager to be proven wrong!

  • fzzzy 2 days ago

    Luckily that's the compiler's job.

    • antonvs 2 days ago

      Yes, I was referring to writing the proofs, which is very much the human or LLM's job.