krackers a day ago

This seems like a terrible idea. LLMs can document the what, but not the why: the implicit tribal knowledge and the design decisions. Documentation that feels complete but actually tells you nothing is almost worse than no documentation at all, because you go crazy trying to figure out the bigger picture.

simonw a day ago

Have you tried it? It's absurdly useful.

This isn't documentation for you to share with other people; it would be rude to share automatically generated docs with others without reviewing them first.

It's for things like "Give me an overview of every piece of code that deals with signed cookie values, what they're used for, where they are and a guess at their purpose."

My experience is that it gets the details about 95% correct, and the occasional bad guess at why the code is the way it is doesn't matter, because I filter those out almost without thinking about it.
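
Concretely, here's a minimal sketch of that kind of query, assuming the official OpenAI Python client; the repo path, file glob, file limit, and model name are all illustrative placeholders, and a large repo would need chunking or a retrieval step to fit the context window:

    # Rough sketch: concatenate source files into a prompt and ask a model
    # for an overview of one cross-cutting concern. Path, glob, limit, and
    # model name below are placeholder assumptions, not a real config.
    from pathlib import Path

    from openai import OpenAI

    def codebase_context(root: str, pattern: str = "**/*.py", limit: int = 50) -> str:
        """Concatenate up to `limit` source files, each tagged with its path."""
        parts = []
        for path in sorted(Path(root).glob(pattern))[:limit]:
            parts.append(f"# File: {path}\n{path.read_text(errors='ignore')}")
        return "\n\n".join(parts)

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are summarizing a codebase for its maintainer."},
            {"role": "user",
             "content": codebase_context("./myproject")
                 + "\n\nGive me an overview of every piece of code that deals "
                   "with signed cookie values, what they're used for, where "
                   "they are and a guess at their purpose."},
        ],
    )
    print(response.choices[0].message.content)

The point is that the output is a disposable map for the person asking, so a handful of wrong guesses about intent cost almost nothing.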

  • jeltz a day ago

    Yes, I have. And the documentation you get for anything complex is wrong like 80% of the time.

    • embedding-shape a day ago

      You need to try different models/tooling if that's the case; 80% sounds very high, and I understand why it would feel useless at that rate. I'd estimate about 5% of it is wrong when I use GPT-5 and GPT-OSS-120B, but that's based on spot checks and experience, so YMMV. Either way, 80% wrong isn't the typical experience, and obviously not what people are raving about.

    • NewsaHackO 21 hours ago

      80% of the time? Are you sure you aren't hallucinating?