Kiyo-Lynn 17 hours ago

When I write with AI, it feels smooth in the moment, but I’m not really thinking through the ideas. The writing sounds fine, but when I look back later, I often can’t remember why I phrased things that way.

Now I try to write my own draft first, then use AI to help polish it. It takes a bit more effort upfront, but I feel like I learn more and remember things better.

energy123 16 hours ago

The rule of thumb "LLMs are good at reducing text, not expanding it" is a good one here.

  • falcor84 9 hours ago

    > "LLMs are good at reducing text, not expanding it"

    You put it in quote marks, but the only search results are from you writing it here on HN. Obviously LLMs are extremely good at expanding text, which is essentially what they do whenever they continue a prompt. Or did you mean that in a prescriptive way - that it would be better for us to use it more for summarizing rather than expanding?

    • 542354234235 6 hours ago

      >You put it in quote marks, but the only search results are from you writing it here on HN.

      They said it was a rule of thumb, which is a general rule based on experience. In the context of the comment they were replying to, they seem to be saying that if you want to learn and understand something, you should put in the effort yourself first to synthesize your ideas and write out a full essay, then use an LLM to refine, tighten, and polish it, as opposed to using an LLM as you go to take your core ideas and expand them. Both approaches might end up as very good essays, but your understanding will be much deeper if you follow the "LLMs are good at reducing text, not expanding it" rule.

  • devmor 11 hours ago

    Probably interesting to note that this is almost always true of weighted randomness.

    If you have something that you consider to be over 50% towards your desired result, reducing the space of the result has a higher chance of removing the negative factor than the positive.

    In contrast, in any case where the algorithm is less than 100% capable of producing the positive factor, adding to the result can always increase the negative factor more than the positive, given a finite time constraint (i.e. any reasonable non-theoretical application). A toy simulation of that asymmetry is sketched below.
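
    To make that concrete, here is a toy Monte Carlo sketch of the intuition. Everything in it is made up for illustration (the unit model, p_good, judge_accuracy, keep_fraction); it is not a claim about how any LLM actually edits, just a minimal model in which a "reduce" pass filters an already-mostly-good draft while an "expand" pass appends material sampled from the same imperfect source.

      import random

      # Toy model, not anything from the thread: a draft is a list of "units"
      # (sentences/claims), each independently good with probability p_good.
      # "Reducing" drops units that a noisy judge flags as bad; "expanding"
      # appends units from the same imperfect generator.

      def make_draft(n_units, p_good, rng):
          # Each unit is good with probability p_good (assumed > 0.5).
          return [rng.random() < p_good for _ in range(n_units)]

      def reduce_draft(draft, keep_fraction, judge_accuracy, rng):
          # A judge that is right judge_accuracy of the time scores each unit;
          # keep the units judged good first, up to the target length.
          judged = []
          for good in draft:
              verdict = good if rng.random() < judge_accuracy else not good
              judged.append((verdict, good))
          judged.sort(key=lambda t: t[0], reverse=True)
          kept = judged[: max(1, int(len(draft) * keep_fraction))]
          return [good for _, good in kept]

      def expand_draft(draft, n_new, p_good, rng):
          # The generator is less than 100% reliable, so each new unit is
          # good only with probability p_good.
          return draft + [rng.random() < p_good for _ in range(n_new)]

      def quality(draft):
          # Fraction of good units in the draft.
          return sum(draft) / len(draft)

      rng = random.Random(0)
      trials = 20_000
      reduced = expanded = 0.0
      for _ in range(trials):
          draft = make_draft(n_units=40, p_good=0.7, rng=rng)  # already >50% good
          reduced += quality(reduce_draft(draft, keep_fraction=0.6,
                                          judge_accuracy=0.8, rng=rng))
          expanded += quality(expand_draft(draft, n_new=20, p_good=0.7, rng=rng))
      print("mean quality after reducing: ", round(reduced / trials, 3))
      print("mean quality after expanding:", round(expanded / trials, 3))

    Under these assumptions the reduced draft ends up with a higher average share of good units than the expanded one, which is the asymmetry the rule of thumb is pointing at: filtering only has to recognize weak material, while expanding has to generate strong material.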