Comment by galaxyLogic 5 days ago

I think what AI "should" be good at is writing code that passes unit tests written by me, the human.

The AI cannot know what we want it to write unless we tell it exactly, for instance by writing some unit tests and asking for code that passes them.
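A minimal sketch of that tests-as-spec idea, using a hypothetical `slugify()` helper (the function name and the tests are illustrative, not from the thread): the human writes the tests first, then hands them to the model and asks for an implementation that makes them pass.

```python
import re
import unittest


def slugify(text: str) -> str:
    # A plausible model-written implementation satisfying the tests below.
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")


class TestSlugify(unittest.TestCase):
    # Human-written spec: these tests exist before the implementation does.
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_punctuation(self):
        self.assertEqual(slugify("C++ & Rust!"), "c-rust")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  a   b  "), "a-b")


if __name__ == "__main__":
    unittest.main()
```

The tests pin down the behavior precisely enough that any implementation passing them is acceptable, which is exactly the point of the workflow.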

But is any LLM able to do that?

warmwaffles 4 days ago

You can write the tests first, tell the AI to do the implementation, and give it some guidance. I usually go the other direction, though: I tell the LLM to stub the tests out and let me fill in the details.
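The reverse workflow might look like this (a sketch, assuming a hypothetical `parse_duration()` helper): the model proposes the test names as stubs, and the human fills in each body with the concrete expectations.

```python
import unittest


def parse_duration(s: str) -> int:
    """Parse strings like '2h', '30m', '45s' into seconds."""
    units = {"h": 3600, "m": 60, "s": 1}
    return int(s[:-1]) * units[s[-1]]


class TestParseDuration(unittest.TestCase):
    # Test names stubbed out by the model; bodies filled in by the human.
    def test_hours(self):
        self.assertEqual(parse_duration("2h"), 7200)

    def test_minutes(self):
        self.assertEqual(parse_duration("30m"), 1800)

    def test_seconds(self):
        self.assertEqual(parse_duration("45s"), 45)


if __name__ == "__main__":
    unittest.main()
```

Letting the model enumerate the cases while the human supplies the expected values keeps the human in control of the spec without writing all the boilerplate.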