Comment by simonw
Right: these things amplify existing skills. The more skill you have, the bigger the amplified effect.
For sure, directing attention to valuable context and outlining the problems to solve within it works way, way better than prompting from vague uncertainty.
Good LLMing seems to be about isolating the right information and instructing it correctly from there. Both the context and the prompt make a tremendous difference.
I've been finding recently that I can get significantly better results with fewer tokens by paying more attention to this.
I'm definitely a casual though. There are probably plenty of nuances and tricks I'm unaware of.
Interestingly, this observation holds even when you scale AI use up from individuals to organizations; at that level it amplifies your organization's overall development trajectory. The DORA 2025 and DX developer survey reports find that teams with strong quality-control practices enjoy higher velocity, whereas teams with weak or no processes suffer elevated issues and outages.
It makes sense considering that these practices could be thought of as "institutionalized skills."
I jumped into a new-to-me TypeScript application and asked Claude to build a thing, in vague terms matching my own uncertainty and unfamiliarity. The result was similarly vague garbage. Three shots and I threw them all away.
Then I watched someone familiar with the codebase ask Claude to build the thing, in precise terms matching their expertise and understanding of the code. It worked flawlessly the first time.
Neither of us "coded", but their grasp of the underlying theory of the program let them ask the right questions, which made them vastly more productive here.
Skill and understanding matter now more than ever! LLMs are rapidly pushing us away from being specialized technicians and toward being theory builders.