Comment by OGWhales 17 hours ago

I've found this a really useful strategy in many situations when working with LLMs. It seems odd that it works, since one would think its ability to give a good reply to such a question means it already "understands" your intent in the first place, but that's just projecting human ability onto LLMs. I would guess this technique is similar to how reasoning modes seem to improve output quality, though I may misunderstand how reasoning modes work.

ako 16 hours ago

Works the same way for humans, doesn't it? Even if you know how to do a complex project, it helps to first document the approach and then follow it.