nsteel 7 days ago

I can't even get simple code generation to work for VHDL. It just gives me garbage that does not compile. I have to assume this is not the case for the majority of people using more popular languages? Is this because the training data for VHDL is far more limited? Are these "AIs" not able to consume the VHDL language spec and give me actual legal syntax at least?! Or is this because I'm being cheap and lazy by only trying free ChatGPT, and I should be using something else?

kaycey2022 6 days ago

It's all of that to some extent or another. LLMs don't update overnight, so they lag behind innovations in major frameworks, even in web development. No matter what is said about augmenting their capabilities, their performance with techniques like RAG seems to be lacking. They don't work well with new frameworks either.

Any library that breaks backwards compatibility in major version releases will likely befuddle these models. That's why I have seen them pin dependencies to older versions and, more egregiously, default to the same stack for any basic frontend code. This ignores innovations and improvements made in other frameworks.

For example, in TypeScript there is now a new(ish) validation library called arktype. Gemini 2.5 Pro straight up produces garbage code for this. The type definition function accepts an object/value, but Gemini Pro keeps insisting that it consumes a type.

So Gemini defines an optional property as `a?: string`, which is what you'd write in a TypeScript interface. But this will fail in arktype, because it needs its input as `'a?': 'string'`. Asking Gemini to check again is a waste of time, and you will need enough familiarity with JS/TS to understand the error and move ahead.

Forcing development into an AI friendly paradigm seems to me a regressive move that will curb innovation in return for boosts in junior/1x engineer productivity.

  • drob518 6 days ago

    Yep, management dreams of being able to make every programmer a 10x programmer by handing them an LLM, but the 10x programmers are laughing because they know how far off the rails the LLM will go. Debugging skills are the next frontier.

  • cube00 6 days ago

It's fun watching the AI bros try to spin justifications for building (sorry, vibing) new apps using Ruby for no reason other than that the model has so much content going back to 2004 to train on.

WD-42 6 days ago

They are probably really good at React. And because that ecosystem has been in a constant cycle of reinventing the wheel, they can easily pump out boilerplate code because there is just so much of it to train from.

drob518 6 days ago

The amount of training data available certainly is a big factor. If you’re programming in Python or JavaScript, I think the AIs do a lot better. I write in Clojure, so I have the same problem as you do. There is a lot less HDL code publicly available, so it doesn’t surprise me that it would struggle with VHDL. That said, from everything I’ve read, free ChatGPT doesn’t do as well on coding. OpenAI’s paid models are better. I’ve been using Anthropic’s Claude Sonnet 3.7. It’s paid but it’s very cost effective. I’m also playing around with the Gemini Pro preview.

TingPing 6 days ago

It completely fails to be helpful for C/C++. I don’t understand the positivity around it, but it must be trained on a lot of web frameworks.

  • y-curious 6 days ago

    It's very helpful for low-level chores. The bane of my existence is frontend, and generating UI elements on the fly for testing backend work rocks. I like the analogy of it being a junior dev, perhaps even an intern: you should check their work constantly and give them extremely pedantic instructions.