Context engineering
(chrisloy.dev)
81 points by chrisloy 21 hours ago
Any of the "design patterns" listed in the article will have a ton of popular open source implementations. For structured generation, I think outlines is a particularly cool library, especially if you want to poke around at how constrained decoding works under the hood: https://github.com/dottxt-ai/outlines
There is nothing precise about crafting prompts and context—it's just that, a craft. Even if you do the right thing and check some fuzzy boundary conditions using autoscorers, the model can still change out from beneath you at any point and totally alter the behavior of your system. There is no formal language here. After all, mathematics exists because natural language is notoriously imprecise.
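For concreteness, an "autoscorer" here can be as simple as programmatic checks run over model outputs, so that a silent model change at least trips an alarm. A minimal sketch, with the checks and the example outputs invented for illustration:

```python
# Sketch of simple "autoscorer" checks: programmatic assertions over a
# model response, run across a test suite to catch behavior drift.
import json

def scores(output: str) -> dict:
    """Fuzzy boundary checks on a single model response."""
    checks = {
        "is_json": True,
        "has_answer_field": False,
        "short_enough": len(output) <= 200,
    }
    try:
        parsed = json.loads(output)
        checks["has_answer_field"] = "answer" in parsed
    except json.JSONDecodeError:
        checks["is_json"] = False
    return checks

good = '{"answer": "Paris"}'
bad = "Sure! The answer is Paris."
assert all(scores(good).values())
assert not scores(bad)["is_json"]
```

The catch, as noted above, is that passing checks today says nothing about the model's behavior after the next silent update; the suite has to be re-run continuously.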
The article has some good practical tips, and it's not on the author, but man, I really wish we'd stop abusing the term "engineering" in a desperate attempt to stroke our own egos and/or convince people to give us money. It's pathetic. Coming up with good inputs to LLMs is more art than science; it's a craft. Call a spade a spade.
I think it's fair to question the use of the term "engineering" throughout a lot of the software industry. But to be fair to the author, his focus in the piece is on design patterns that require what we'd commonly call software engineering to implement.
For example, his first listed design pattern is RAG. To implement such a system from scratch, you'd need to construct a data layer (commonly a vector database), retrieval logic, etc.
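A minimal sketch of those layers, with a bag-of-words "embedding" standing in for a real embedding model and an invented prompt template:

```python
# Minimal RAG sketch: embed documents, retrieve the most similar ones
# for a query, and stuff them into the prompt sent to the LLM.
import re
from collections import Counter
from math import sqrt

def embed(text):
    """Toy embedding: a bag-of-words vector. Real systems use a model."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    """The retrieval layer: rank stored documents by similarity to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Prepend the retrieved context to the question."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "The capital of France is Paris.",
    "Python is a programming language.",
    "The Eiffel Tower is in Paris.",
]
print(build_prompt("What is the capital of France?", docs))
```

A production version swaps the bag-of-words vectors for learned embeddings and the sorted list for a vector database, but the shape of the system is the same.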
In fact I think the author largely agrees with you re: crafting prompts. He has a whole section admonishing "prompt engineering" as magical incantations, which he differentiates from his focus here (software which needs to be built around an LLM).
I understand the general uneasiness around using "engineering" when discussing a stochastic model, but I think it's worth pointing out that there is a lot of engineering work required to build the software systems around these models. Writing software to parse context-free grammars into masks to be applied at inference, for example, is as much "engineering" as any other common software engineering project.
"Context crafting", ok, sure. I think a lot of expert researchers (like simonw) would agree.
My thoughts exactly. The author is saying we should think strategically about the use of context. Sure. Yes. But for that to qualify as engineering we need solid theory about how context works.
We don’t have that, yet. For instance, experiments show that not all parts of the context window are equally well attended (the "lost in the middle" effect). Imagine trying to engineer a bridge when no one really knows how strong steel is.
It's available at https://buttondown.com/chrisloy/rss, but it's not in sync with the blog; only a single 2024 entry was found. :shrug:
Yes, and we've also decided that they deserve the title "engineering" more than software engineering does.
Most engineering disciplines have to deal with tolerances and uncertainty - the real world is non-deterministic.
Software engineering is easy in comparison because computers always do exactly what you tell them to do.
The ways LLMs fail (and the techniques you have to use to account for that) have more in common with physical engineering disciplines than software engineering does!
Yep. Consider woodworking - the wood you use might warp over time, or maybe part of it ends up in the sun or the thing you’ll make gets partly exposed to water.
Can you make a thing that’ll serve its purpose and look good for years under those constraints? A professional carpenter can.
We have it easy in software.
Woodworking is to civil engineering as being an IT help desk rep is to being a software engineer. Woodworking isn't engineering either. If you build a system with aspects you can measure and predictably tune, you're engineering. If you're making skilled alterations to an existing structure or system without applied math or science, you're partaking in a craft.
Software engineering blurs the lines, sure, but woodworking isn't engineering ever.
Classic shilling behavior of the insufferably embarrassing: redefining words to the benefit of those who pay your bills to the confusion of everyone else.
The definition of engineering, according to people outside the pocket of the llm industry:
> The application of scientific and mathematical principles to practical ends such as the design, manufacture, and operation of efficient and economical structures, machines, processes, and systems.
How do these techniques apply scientific and mathematical principles?
I would argue that doing either of those requires reproducibility, and yet somehow you are arguing that the less reproducible something is, the more like "physical engineering" it becomes.
Physical engineers might scoff good-naturedly at an attempt by project managers to refer to work scheduling as "logistics engineering".
But they really shouldn't because obviously scheduling and logistics is difficult, involving a lot of uncertainty and tolerances.
Uncertainty and tolerance implies that you have a predictable distribution in the first place.
Engineers are not just dealing with a world of total chaos, observing the output of the chaos, and cargo culting incantations that seem to work for right now [1]…oh wait nevermind we’re doing a different thing today! Have you tried paying for a different tool, because all of the real engineers are using Qwghlm v5 Dystopic now?
There’s actually real engineering going on in the training and refining of these models, but I personally wouldn’t include the prompting fad of the week to fall under that umbrella.
[1] I hesitate to write that sentence because there was a period where, say, bridges and buildings were constructed in this manner. They fell down a lot, and eventually we made predictable, consistent theoretical models that guide actual engineering, as it is practiced today. Will LLM stuff eventually get there? Maybe! But right now we’re still plainly in the phase of trying random shit and seeing what falls down.
The tools mechanical and civil engineers use are predictable. You're confusing the things these engineers design, which have tolerances and things like that, with the tools themselves.
If an engineer built an internal combustion engine that misfired 60% of the time, it simply wouldn't work.
If an engineer measured things with a ruler that only measured correctly 40% of the time, that would be the apt analogy.
The tool isn't what makes engineering a practice, it's the rigor and the ability to measure and then use the measurements to predict outcomes to make things useful.
Can you predict the outcome from an LLM with an "engineered" prompt?
No, and you aren't qualified to even comment on it since your only claim to fame is a fucking web app
I completely agree that much of software engineering is not engineering, and building systems around LLMs is no better in this sense.
When the central component of your system is a black box that you cannot reason about, have no theory around, and have essentially no control over (a model update can completely change your system behavior) engineering is basically impossible from the start.
Practices like using autoscorers to try and constrain behaviors helps, but this doesn't make the enterprise any more engineering because of the black box problem. Traditional engineering disciplines are able to call themselves engineering only because they are built on sophisticated physical theories that give them a precise understanding of the behaviors of materials under specified conditions. No such precision is possible with LLMs, as far as I have seen.
The determinism of traditional computing isn't really relevant here and targets the wrong logical level. We engineer systems, not programs.
This is completely backwards. Engineers built steam engines first through trial and error and then eventually the laws of thermodynamics were invented to explain how steam engines work.
Trial and error and fumbling around and creating rules of thumb for systems you don’t entirely understand is the purest form of engineering.
I would argue it's more correct to call that phase experimentation. I doubt the early manufacturers of steam machines would even call themselves engineers in a serious or precise sense. They were engineers in the sense of "builder of engine" as a specific object, but the term's meaning has evolved from that basic initial usage.
A discipline becomes engineering when we achieve a level of understanding such that we can be mathematically precise about it. Of course experimentation and trial and error are a fundamental part of that process, but there's a reason we have a word to distinguish processes which become more certain and precise thereafter, and why we don't just call anything and everything engineering of some form.
> The ways LLMs fail (and the techniques you have to use to account for that) have more in common with physical engineering disciplines than software engineering does!
Ah yes, the God-given free parameters in the Standard Model, including obviously the random seed of a transformer. What if you just set the inference temperature to 0? The randomness in LLMs is a technical choice to generate variation in the selection of the next token. Physical engineering? Come on.
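For what it's worth, the temperature point can be made concrete: the sampling randomness lives entirely in the decoding loop, and at temperature 0 it collapses to a deterministic argmax. A sketch with toy logits, not a real model:

```python
# Temperature in next-token sampling: randomness is a decoding-time
# choice. As temperature -> 0 the softmax collapses onto the argmax
# and decoding becomes deterministic.
import math
import random

def sample_token(logits, temperature, rng=random):
    """Sample an index from softmax(logits / T); T == 0 means greedy argmax."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    weights = [math.exp(l / temperature) for l in logits]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(logits) - 1

logits = [2.0, 0.5, 1.0]
print(sample_token(logits, 0))  # -> 0, every time: greedy decoding
```

(In practice even temperature-0 serving isn't always bit-reproducible, due to batching and floating-point nondeterminism, but the randomness being argued about here is the sampler's, not physics'.)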
"professionally trained & legally responsible for the results" is definitely not the same thing as what we used to just call "good at googling".
Based on the comments, I expected this to be slop listing a bunch of random prompt snippets from the author's personal collection.
I'm honestly a bit confused at the negativity here. The article is incredibly benign and reasonable. Maybe a bit surface level and not incredibly in depth, but at a glance, it gives fair and generally accurate summaries of the actual mechanisms behind inference. The examples it gives for "context engineering patterns" are actual systems that you'd need to implement (RAG, structured output, tool calling, etc.), not just a random prompt, and they're all subject to pretty thorough investigation from the research community.
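For example, "tool calling" as a pattern is mostly ordinary software around the model: parse a structured output, validate it, and dispatch to a function. A minimal sketch, with the tool names and call format invented for illustration:

```python
# Sketch of the tool-calling pattern: the model is prompted to emit a
# JSON tool call, which the surrounding software validates and dispatches.
import json

TOOLS = {
    "add": lambda args: args["a"] + args["b"],
    "upper": lambda args: args["text"].upper(),
}

def dispatch(model_output: str):
    """Parse the model's JSON tool call and run the named tool."""
    call = json.loads(model_output)
    name = call["tool"]
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name](call["args"])

# Pretend the LLM produced this structured output:
print(dispatch('{"tool": "add", "args": {"a": 2, "b": 3}}'))  # -> 5
```

The result then gets appended to the context for the next model turn, which is exactly the kind of system-building the article is describing.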
The article even echoes your sentiments about "prompt engineering," down to the use of the word "incantation". From the piece:
> This was the birth of so-called "prompt engineering", though in practice there was often far less "engineering" than trial-and-error guesswork. This could often feel closer to uttering mystical incantations and hoping for magic to happen, rather than the deliberate construction and rigorous application of systems thinking that epitomises true engineering.
Most of the inference techniques (what the author calls context engineering design patterns) listed here originally came from the research community, and there are tons of benchmarks measuring their effectiveness, as well as a great deal of research behind what is happening mechanistically with each.
As the author points out, many of the patterns are fundamentally about in-context learning, and this in particular has been subject to a ton of research from the mechanistic interpretability crew. If you're curious, I think this line of research is fascinating: https://transformer-circuits.pub/2022/in-context-learning-an...
Are there any open source examples of good context engineering or agent systems?