mistersquid a day ago

> before LLMs I definitely thought coding would be the last thing to go.

While LLMs do still struggle to produce high-quality code, with results varying by prompt quality and available training data, many human software developers are surprised that LLMs (themselves software) can generate quality software at all.

I wonder to what extent this surprise is because people tend to think very deeply when writing software and assume thinking and "reasoning" are what produce quality software. What if the experiences of "thinking" and "reasoning" are epiphenomena of the physical statistical models present in the connections of our brains?

This is an ancient and unsolved philosophical problem (i.e. the mind-body problem, or dualism): whether consciousness and free will affect the physical world. If we live in a materialist universe where matter and the laws of physics are unaffected by consciousness, then "thinking", "reasoning", and "free will" are purely subjective. In such a view, subjective experience attends material changes in the world but does not affect the material world.

Software developers surprised by the capabilities of software (LLMs) to write software might not be so surprised if they understood consciousness as an epiphenomenon of materiality. Just as words do not cause diaphragms to compress lungs to move air past vocal cords and propagate air vibrations, perhaps the thoughts that attend action (including the production of words) are not the motive force of those actions.

  • autoexec 20 hours ago

    > I wonder to what extent this surprise is because people tend to think very deeply when writing software and assume thinking and "reasoning" are what produce quality software.

    It takes deep thought and reasoning to produce good code. LLMs don't think or reason. They don't have to, though, because humans have done all of that for them. They just have to regurgitate what humans have already done. Everything good an LLM outputs came from the minds of humans who did all the real work. Sometimes they can assemble bits of human-generated code in ways that do something useful, just like someone copying and pasting code from Stack Exchange without understanding any of it can sometimes slap together something that works.

    LLMs are a neat party trick, and it can be surprising to see what they do and fun to see where they fail, but it all says very little about what it means to think and reason or even what it means to write software.

stickfigure a day ago

I'm still of the opinion that coding will be the last thing to go. LLMs are an enabler, sure, but until they integrate some form of neuroplasticity, they're stuck working on Memento-guy-sized chunks of code. They need a human programmer to provide the long-term context.

Maybe some new technique will change that, but it's not guaranteed. At this point I think we can safely surmise that scaling isn't the answer.

  • forgetfulness 20 hours ago

    I’m more inclined to believe that no jobs (as in trades, professions) will go, but programming will be the most automated, along with design and illustration.

    Why? To this day they’re still the showcase of what LLMs “can” do for (to) a line of work, but that’s because they’re the only fields with all the relevant information online.

    For programming, there are decades of textbooks, online docs, bug tracker tickets, source code repositories, and troubleshooting threads on forums, all laying out how the profession is exercised from start to finish.

    There’s hardly a fraction of that available to automate the tasks of the average Joe, who does paperwork the model has never seen, applies rough procedures we’d call “heuristics” to spreadsheets and emails, and has to escalate anything outside those procedures to his supervisor several times a day.

jollyllama 18 hours ago

Beyond training data availability, it's always easiest to automate what you understand. Software engineering sits squarely within the expertise of the people building AI/LLMs, which is why it has been automated to the extent that it has. Everything else involves more domain knowledge.

zoeysmithe 21 hours ago

I'm not sure what happens when you replace coders with 'prompt generalists' and the output has non-trivial bugs. What do you do then? The product is crashing and the business is losing money? Or there's a security bug? You can't just tell LLMs "oh wait, what you made is bad, make it better." At a certain point, that's the best they can do. And if you don't understand the security or engineering issue behind the bug, then even if the LLM could fix it, you don't have the skills to prompt it correctly to do so.

I see tech as 'the king's guard' of capitalism. They'll be the last to go because, at the end of the day, they need to be able to serve the king. 'Prompt generalists' are like replacing the king's guard with a bunch of pampered royals who 'once visited a battlefield.' It's just not going to work when someone comes at the king.

  • autoexec 20 hours ago

    > You can't just tell LLMs "oh wait, what you made is bad, make it better." At a certain point, that's the best they can do. And if you don't understand the security or engineering issue behind the bug, then even if the LLM could fix it, you don't have the skills to prompt it correctly to do so.

    In that case, the idea is that you'd see most programmers in the company replaced by a much smaller group of prompt generalists who work for peanuts, while the company keeps on a handful of people who actually know how to program and do nothing all day but debug AI-written code.

    When things crash or a security issue comes up, they bring in the team of programmers, but since only a small number of them are needed to get the AI code working again, most programmers would be out of a job. Large numbers of people who actually like touching code for a living will compete for the very small number of jobs available, driving down wages.

    In the long term, this would be bad because a lot of talented coders won't be satisfied being QA for AI slop and will move on to other passions. Everything AI knows, it learned from people who had the skill to do great things, but once all the programmers are just debugging garbage AI code, there will be fewer programmers doing clever things and posting their code for AI to scrape and regurgitate. Tech will stagnate, since AI can't come up with anything new and will only have its own slop to learn from.

    Personally, I doubt it'll happen that way. I'm skeptical that LLMs will become good enough to be a real threat. Eventually the AI bubble will burst as companies realize that chatbots aren't ever going to be AGI and will never get good enough to replace most of their employees. Once they see that they're still going to be stuck paying the peasant class, things will slowly get back to normal.

    In the meantime, expect random layoffs and rehires (at lower wages) as companies try and fail to replace their pesky human workers with AI, and expect AI to be increasingly shoehorned into places it has no business being, screwing things up and making your life harder in new and frustrating ways.