hn_throw2025 6 days ago

That wasn’t really the point I was getting at, but as you asked… The reading doesn’t involve much more than a cursory (no pun intended) glance, and I didn’t test more than I would have tested something I had written manually.

  • kaycey2022 6 days ago

    Maybe it wasn't your point, but cost of development is a very important factor, considering some of the thinking models burn tokens like there's no tomorrow. Accuracy is another. Maybe your script is trivial or inconsequential, so it doesn't matter if the output has some bugs as long as it seems to work. There are a lot of throwaway scripts we write for which LLMs are an excellent tool.