glemion43 a day ago

I've been carrying a thought around for the last few weeks:

An LLM is a transformer. It transforms a prompt into a result.

Or a human idea into a concrete Java implementation.

Currently I'm exploring what unexpected or curious transformations LLMs are capable of, but I haven't found much yet.

At least I myself was surprised that an LLM can transform a description of something into an image by first transforming it into an SVG.
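
Roughly like this, as a minimal sketch (the model name and prompt are placeholders, and cairosvg is just one possible rasterizer; any chat API and SVG-to-PNG tool would do):

    from openai import OpenAI  # assumes the openai Python client and an API key in the env
    import cairosvg            # stand-in rasterizer; any SVG renderer works

    client = OpenAI()

    # Ask the model to answer with SVG markup only.
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user",
                   "content": "Return only SVG markup, no prose, for: a red kite over rolling hills."}],
    )
    svg = resp.choices[0].message.content

    # Rasterize the generated SVG into a PNG image.
    cairosvg.svg2png(bytestring=svg.encode(), write_to="kite.png")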

  • durch a day ago

    Format conversions (text → code, description → SVG) are the transformations most people reach for first. To me the interesting ones are cognitive: your vague sense → something concrete you can react to → refined understanding. The LLM gives you an artifact to recognize against. That recognition ("yes, more of that" or "no, not quite") is where understanding actually shifts. Each cycle sharpens what you're looking for, a bit like a flywheel: each turn feeds into the next.

    • golemotron 8 hours ago

      That's true, but it can be a trap. I recommend always generating a few alternatives to avoid our bias toward the first generation. When we don't do that, we are led rather than leading.

calmoo a day ago

Ironically, your comment is clearly written by an LLM.

  • durch a day ago

    Ironic indeed: pattern-matching the prose style instead of engaging the idea is exactly the shallow reading the post is about.

    • calmoo a day ago

      Your original comment is completely void of any substance or originality. Please don't fill the web with robot slop and use your own voice. We both know what you're doing here.

      • drekipus a day ago

        I dunno, he might just have been reading so much of this stuff that he really writes like this now. I've seen it happen.

        • calmoo a day ago

          No, definitely not. It was 100% LLM-written. Look at their post history.

afro88 a day ago

LLMs are generators, and that was the correct way to view them at the start. Agents explore.

  • durch a day ago

    Generator vs. explorer is a useful distinction, but it's incomplete. Agents without a recognition loop are just generators with extra steps.

    What makes exploration valuable is the cycle: act, observe, recognize whether you're closer to what you wanted, then refine. Without that recognition ("closer" or "drifting"), you're exploring blind.

    Context is what lets the loop close. You need enough of it to judge the outcome. I think the real shift isn't generators → agents. It's one-shot output → iterative refinement with judgment in the loop.
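
    In toy form the loop looks something like this, where generate() and judge() are hypothetical stand-ins for whatever model calls you'd actually use:

        def refine(goal, generate, judge, rounds=5):
            # generate() drafts an artifact; judge() scores it against the goal.
            # Both are stand-ins for LLM calls; only the loop shape matters here.
            draft, feedback = None, None
            for _ in range(rounds):
                draft = generate(goal, feedback)     # act
                done, feedback = judge(goal, draft)  # observe + recognize
                if done:                             # "yes, more of that"
                    break                            # else: feedback refines the next pass
            return draft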

    • throwawaySimon a day ago

      Please stop.

      • durch a day ago

        Is there something in there you'd like to discuss further? I've been thinking a lot about these ideas ever since LLMs came around, and I think there are many more of these discussions ahead of us...

        • throwawaySimon a day ago

          Kind of tedious trying to have a discussion with someone who clearly generates their part.