Comment by stego-tech a day ago
Finally, someone articulates the middle better than I have:
> Thus, the extreme proponents of manic “agentic” vibe coding,[19] and the outright denouncers of LLMs, are both missing the forest for the trees. That there is a pragmatic middle path, where people who have the experience, expertise, competence, and the ability to articulate can use these tools to get the outcomes they desire with the right sets of trade-offs.
LLMs are tools. They are not gods to replace man, nor are they exclusively means of harm. It is entirely possible to denounce the blatant attempt at rent extraction in the form of OpenAI, Google Gemini, Microsoft CoPilot, Anthropic, and others, while still running Qwen and Ministral and their like on local hardware. You can, in fact, have it “both ways”.
As fun as it is to poke at cloud services to see their new features and advancements, speaking in IT terms, I personally could never recommend them in any serious enterprise context, for the simple reason that they’re fundamentally insecure: you will never, ever have full E2EE with these services, because it would nullify their ability to evolve, improve, monetize, and exploit.
That said? I can truly be a one dinosaur army in an enterprise now, as a generalist with a modest Mac Mini (or Studio) and a local LLM to fill in edge cases as needed. I can query these local tools with questions about database schemas, or have them build a one-off API integration for me, so I can focus on the substance of the work: safeguarding and accelerating the enterprise. I don’t need fleets of specialists unless I’m running a huge conglomerate or have specific needs - and even then, it’s going to be cheaper to retain one or two seniors to direct the larger army of generalists when needed. The landscape has changed, and it’s why I target leadership and management roles with my sales pitch accordingly (“One senior generalist can do the work of three mid-level specialists”).
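For concreteness, here’s a minimal sketch of what that kind of local query can look like, assuming a model served through Ollama’s default HTTP endpoint on localhost; the model name, schema, and question are illustrative placeholders, not anything from the original comment:

```python
# Minimal sketch: ask a locally hosted model about a database schema.
# Assumes Ollama is running on localhost:11434 with a Qwen-family model
# already pulled. Model name, schema, and question are placeholders.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "qwen2.5:14b"  # any locally pulled model works here

schema = """
CREATE TABLE orders (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL,
    total_cents INTEGER NOT NULL,
    created_at  TEXT NOT NULL
);
"""

question = "Which index would speed up monthly revenue reports on this table?"

payload = {
    "model": MODEL,
    "messages": [
        {"role": "system", "content": "You are a concise database assistant."},
        {"role": "user", "content": f"Schema:\n{schema}\nQuestion: {question}"},
    ],
    "stream": False,  # return one JSON object instead of a token stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# The non-streaming response carries the answer under message.content.
with urllib.request.urlopen(req) as resp:
    answer = json.loads(resp.read())["message"]["content"]

print(answer)
```

The point of doing it this way is the one the comment keeps coming back to: the prompt, the schema, and the answer never leave the machine, which is exactly the E2EE concern raised above about cloud services.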
Don’t get me wrong, I still have immense grievances regarding theft of work, reductions in force, rent extraction, and the seeming attempt at destroying local general compute, but local LLMs as a tool have been in my kit for years, and that’s not going away.