aeve890 2 days ago

I'd say this shit is even worse than "good at googling". Literal incantations for stochastic machines are just two notches above checking the horoscope.

  • calebkaiser 2 days ago

    Based on the comments, I expected this to be slop listing a bunch of random prompt snippets from the author's personal collection.

    I'm honestly a bit confused at the negativity here. The article is incredibly benign and reasonable. Maybe a bit surface-level rather than deeply technical, but at a glance, it gives fair and generally accurate summaries of the actual mechanisms behind inference. The examples it gives for "context engineering patterns" are actual systems you'd need to implement (RAG, structured output, tool calling, etc.), not just random prompts, and they've all been subject to pretty thorough investigation by the research community.

    The article even echoes your sentiments about "prompt engineering," down to the use of the word "incantation". From the piece:

    > This was the birth of so-called "prompt engineering", though in practice there was often far less "engineering" than trial-and-error guesswork. This could often feel closer to uttering mystical incantations and hoping for magic to happen, rather than the deliberate construction and rigorous application of systems thinking that epitomises true engineering.

    • timr 2 days ago

      There’s nothing particularly wrong with the article - it’s a superficial summary of stuff that has historically happened in the world of LLM context windows.

      The problem is - and it’s a problem common to AI right now - you can’t generalize anything from it. The next thing that drives LLMs forward could be an extension of what you read about here, or it could be something else entirely. There are a million monkeys tapping on keyboards, and the hope is that one of them taps out Shakespeare’s brain.

      • calebkaiser a day ago

        I don't really understand this line of criticism in this context.

        What would "generalizing" the information in this article mean? I think the author does a good job of contextualizing most of the techniques under the general umbrella of in-context learning. What would it mean to generalize further beyond that?