Comment by mgraczyk 4 days ago

50 replies

The sad reality is that this is probably not a solvable problem. AI will improve more rapidly than the education system can adapt. Within a few years it won't make sense for people to learn how to write actual code, and it won't be clear until then which skills are actually useful to learn.

My recommendation would be to encourage students to ask the LLM to quiz and tutor them, but ultimately I think most students will learn a lot less than, say, 5 years ago, while the top 5% or so will learn a lot more.

JumpCrisscross 4 days ago

> AI will improve more rapidly than the education system can adapt

We’ll see a new class division scaffolded on the existing one around screens. (Schools in rich communities have no screens. Students turn in their phones and watches at the beginning of the day. Schools in poor ones have them everywhere, including everywhere at home.)

  • rdudek 4 days ago

    Every school has students work off their Chromebooks here in Colorado, regardless of how rich the community is. This started with the Covid lockdowns and is pretty much standard now.

    • JumpCrisscross 3 days ago

      > Every school has students work off their Chromebooks here in Colorado

      I specifically remember Telluride Mountain School’s banners in town advertising a low-tech approach.

ethmarks 4 days ago

> most students will learn a lot less than, say, 5 years ago, while the top 5% or so will learn a lot more

If we assume that AI will automate many/most programming jobs (which is highly debatable and I don't believe is true, but just for the sake of argument), isn't this a good outcome? If most parts of programming are automatable and only the really tricky parts need human programmers, wouldn't it be convenient if there were fewer human programmers but the ones that did exist were really skilled?

  • mgraczyk 4 days ago

    [flagged]

    • ethmarks 4 days ago

      Well, as a college student planning to start a CS program, I can tell you that it actually sounds fine to me.

      And I think that teachers can adapt. A few weeks ago, my English professor assigned us an essay where we had to ask ChatGPT a question and analyze its response and check its sources. I could imagine something similar in a programming course. "Ask ChatGPT to write code to this spec, then iterate on its output and fix its errors" would teach students some of the skills to use LLMs for coding.

      • mgraczyk 4 days ago

        This is probably useful and better than nothing, but the problem is that by the time you graduate it's unlikely that reading the output of the LLM will be useful.

    • moltopoco 3 days ago

      Right, but if AI gets to the point where it can replace developers (which includes a lot of fuzzy requirement interpretation, etc.), then it will replace most other jobs as well, and it wouldn't have helped to become a lawyer or doctor.

    • JumpCrisscross 4 days ago

      > It's not good if you're a freshman currently starting a CS program

      CS is the new MBA. A thoughtless path to a safe, secure job.

      Cruelly, but necessarily, a society has to destroy those pathways. Otherwise, it becomes sclerotic.

      • DiscourseFan 3 days ago

        It's not cruel, it's stupid. Why would we organize our society in such a way that people are drawn to such paths in the first place, where your comfort and security are your first concerns and taking risks, doing something new, is not even on your mind?

        • JumpCrisscross 3 days ago

          > where your comfort and security are your first concerns and taking risks, doing something new, is not even on your mind?

          Because individually, lots of people seek low-risk, high-return occupations. Systemically, that doesn’t exist in the long run.

          Societies do better when they take risks. Encouraging the population to embrace that risk-taking has been a running theme in successful societies, from the Romans and Chinese dynasties through American commerce and jugaad.

    • DiscourseFan 3 days ago

      How about switching to English? There is a high demand for people who are very good at communication and writing nowadays.

  • JackSlateur 3 days ago

    The only task required from a dev is to think

    AI does not think

    Ergo, AI will not take "programming jobs"

    It may, however, expose some frauds (people who fake the job, produce nothing, and are basically just there to collect money for as long as the act keeps working).

quantumHazer 3 days ago

> it won’t make sense to learn how to code.

Sure. So we can keep paying money to your employer, Anthropic, right?

andrei_says_ 4 days ago

> Within a few years it won't make sense for people to learn how to write actual code

Why?

Because LLMs are capable of sometimes producing working snippets of usually completely unmaintainable code?

  • ethmarks 4 days ago

    You can still argue that LLMs won't replace human programmers without downplaying their capabilities. Modern SOTA LLMs can often produce genuinely impressive code. Full stop. I don't personally believe that LLMs are good enough to replace human developers, but claiming that LLMs are only capable of writing bad code is ridiculous and easily falsifiable.

DANmode 4 days ago

For what it’s worth: OpenAI seems to be encouraging this with their “Study” mode on some ChatGPT interfaces.

Madmallard 3 days ago

Bold claim by the Anthropic employee drinking their own Koolaid

  • falcor84 3 days ago

    I'm not an Anthropic employee and think that:

    >AI will improve more rapidly than the education system can adapt.

    Is entirely obvious, and:

    > Within a few years it won't make sense for people to learn how to write actual code, and it won't be clear until then which skills are actually useful to learn.

    is not obvious, but quite clear from how things are going. I expect actual writing of code "by hand" to be the same sort of activity as doing integrals by hand - something you may do either to advance the state of the art, or recreationally, but not something you would try to do "in anger" when faced with a looming project deadline.
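
    (To make the analogy concrete, a quick sketch of the tool-assisted version, using sympy; purely illustrative:)

        import sympy as sp

        # The Gaussian integral: painful by hand, one line with a CAS.
        x = sp.symbols("x")
        print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))  # sqrt(pi)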

    • anon7725 3 days ago

      > I expect actual writing of code "by hand" to be the same sort of activity as doing integrals by hand - something you may do either to advance the state of the art, or recreationally, but not something you would try to do "in anger" when faced with a looming project deadline.

      This doesn’t seem like a good example. People who engineer systems that rely on integrals still know what an integral is. They might not be doing it manually, but it’s still part of the tower of knowledge that supports whatever work they are doing now. Say you are modeling some physical system in Matlab - you know what an integral is, how it connects with the higher level work that you’re doing, etc.

      An example from programming: you know what process isolation is, and how memory is allocated, etc. You’re not explicitly working with that when you create a new python list that ends up on the heap, but it’s part of your tower of knowledge. If there’s a need, you can shake off the cobwebs and climb back down the tower a bit to figure something out.
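
      (A small illustration of that "climbing down": even a plain Python list lets you peek at the machinery underneath. CPython specifics; purely illustrative.)

          import sys

          xs = [1, 2, 3]
          print(sys.getsizeof(xs))  # bytes for the list object itself, not its elements
          print(hex(id(xs)))        # in CPython, the object's address on the heap
          xs.append(4)              # may trigger an over-allocating resize under the hood
          print(sys.getsizeof(xs))  # capacity grows in chunks, not one slot at a time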

      So here’s my contention: LLMs make it optional to have the tower of knowledge that is required today. Some people seem to be very productive with agentic coding tools today - because they already have the tower. We are in a liminal state that allows for this, since we all came up in the before time, struggling to get things to compile, scratching our heads at core dumps, etc.

      What happens when you no longer need to have a mental model of what you’re doing? The hard problems in comp sci and software engineering are no less hard after the advent of LLMs.

      • mgraczyk 3 days ago

        Here's one way to think about it:

        Architects are not civil engineers and often don't know the details of construction, project management, structural engineering, etc. For a few years there will still be a role for a human "architect", but most of the specific low-level stuff will be automated. Eventually there won't be an architect either, but that may be 10 years away.

      • Madmallard 3 days ago

        Making the tower of knowledge optional leads to ballooning incompetence and future problems.

gerdesj 4 days ago

An LLM is a tool, and the fuss over it is just as mad as the fuss once made over slide rules, calculators, and PCs (I've seen them all, although slide rules were being phased out in my youth).

Coding via prompt is simply a new form of coding.

Remember that high-level programming languages are "merely" a sop for us humans to avoid low-level languages. The idea is that you will be more productive with, say, Python than with ASM or twiddling electrical switches that correspond to register inputs.
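
(To make the productivity gap concrete, a sketch; the task and numbers are arbitrary:)

    # One line of Python: sum the first million squares.
    total = sum(i * i for i in range(1_000_000))
    print(total)

    # The hand-written assembly equivalent means picking registers, setting up
    # the loop counter, multiplying, accumulating, comparing and branching:
    # dozens of instructions, all managed by hand. That gap is the "sop".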

A purist might note that using Python is not sufficiently close to the bare metal to be really productive.

My recommendation would be to encourage the tutor to ask the student how they use the LLM and to school them in effective use strategies. That will involve problem definition and formulation, then an iterative effort to solve the problem. It will obviously involve learning how to spot and deal with hallucinations. They'll need to start discovering which models suit which tasks, and all sorts of other things that would have looked like sci-fi to me 10 years ago.

I think we are, for LLMs, at the "calculator on a digital wristwatch" stage that we had in the mid '80s, before the really decent scientific calculators rocked up. Those calculators are largely still what you get nowadays too and I suspect that LLMs will settle into a similar role.

They will be great tools when used appropriately, but they will not run the world, or if they do, not for very long. Bye!

  • Krssst 4 days ago

    > Remember that high level programming languages are "merely" a sop for us humans to avoid low level languages.

    High-level languages are deterministic and reliable, making it possible for developers to be confident that their high-level code is correct. LLMs are anything but deterministic and reliable.

    • seanmcdirmid 3 days ago

      You keep saying this, but have you used an LLM for coding before? You don’t just vibe-code some generated output (well, you can, but it will suck). You ask it to iterate on code and multiple artifacts at the same time (like tests) over many steps, and you are providing feedback, getting feedback, providing clarifications, checking small chunks of work (because you didn’t just have it do everything at once), etc. You aren’t executing “vibecode -d [do the thing]” like you would with a traditional one-shot code generator.

      It isn’t deterministic like a real programmer isn’t deterministic, and that’s why iteration is necessary.

    • tvshtr 4 days ago

      Not all code written by humans is deterministic and reliable either. And a properly guard-railed LLM can check its output; you can even employ several, for higher consensus certainty. And we're just fuckin starting.
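
      (A minimal sketch of the consensus idea; `ask` is a stand-in for whatever model API you use, not a real library call:)

          from collections import Counter

          def consensus_check(code, models, ask):
              # Each model independently judges the same artifact.
              votes = Counter(
                  ask(m, "Does this code meet its spec? Answer PASS or FAIL:\n" + code).strip().upper()
                  for m in models
              )
              verdict, n = votes.most_common(1)[0]
              # Accept a verdict only when a clear majority of the models agree.
              return verdict if n > len(models) / 2 else "NO CONSENSUS"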

      • Krssst 4 days ago

        Unreliable code is incorrect and thus undesirable. We limit the risk through review and through understanding what we're doing, which is not possible when delegating both the code generation and the review.

        Checking output can be done by testing, but test code itself can be unreliable, and testing is no guarantee of correctness.

        The only way reliable code could be produced without a human touching it would be to use formal specifications: have the LLM write a formal proof alongside the code, and use software to validate the proof. The formal specification would have to be written in some kind of programming language, and then we're somewhat back to square one (though perhaps with a new, higher-level language where you only define the specs formally rather than implement them).
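
        (A toy of what that could look like in Lean, where the spec is a theorem and a proof checker, not a human, validates it; illustrative only:)

            def double (n : Nat) : Nat := n + n

            -- The spec: double really does multiply by two.
            -- `omega` discharges the arithmetic; the kernel validates the proof.
            theorem double_spec (n : Nat) : double n = 2 * n := by
              unfold double
              omega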

  • galaxyLogic 4 days ago

    But we as humans still need to understand the outputs of AI. We can't delegate that understanding to AI, because then we wouldn't understand the AI and thus couldn't CONTROL what it is doing or optimize its behavior so that it maximizes our benefit.

    Therefore, I still see a need for high-level and even higher-level languages, but ones which are easy for humans to understand. AI can help, of course, but the challenge is how to communicate unambiguously with machines and express our ideas concisely and understandably, both for us and for the machines.

  • ethmarks 4 days ago

    > My recommendation would be to encourage the tutor to ask the student how they use the LLM and to school them in effective use strategies

    It's obviously not quite the same as programming, but my English professor assigned an essay a few weeks ago where we had to ask ChatGPT a question and then analyze its response, check its sources, and try to spot hallucinations. It was worth about 5% of our overall grade. I thought that it was a fascinating exercise in teaching responsible LLM use.

  • JumpCrisscross 4 days ago

    > My recommendation would be to encourage the tutor to ask the student how they use the LLM and to school them in effective use strategies

    This reminds me of folks teaching their kids Java ten years ago.

    You’re teaching a specific tool, versus teaching general tool use.

    > Those calculators are largely still what you get nowadays too and I suspect that LLMs will settle into a similar role

    If correct, the child will be competent in the new world. If not, they will have wasted time they could have spent developing general intelligence.

    This doesn’t strike me as a good strategy for anything other than time-consuming babysitting.

  • bgwalter 4 days ago

    > Coding via prompt is simply a new form of coding.

    No, it isn't. "Write me a parser for language X" is like pressing a button on a photocopier. The LLM steals content from open source creators.

    Now the desperate, capital-starved VC companies can downvote this one too, but be aware that no one outside this site believes the illusion any longer.

    • ben_w 3 days ago

      > The LLM steals content from open source creators.

      Not according to court cases.

      Courts ruled that machine learning is a transformative use, and just fine.

      Pirating material to perform the training is still piracy, but open source licenses don't get that protection.

      A summary of one such court case: https://www.jurist.org/news/2025/06/us-federal-judge-makes-l...

      > "Write me a parser for language X" is like pressing a button on a photocopier.

      What is the prompt "review this code" in your view? Because LLM-automated code review is a thing now.

      • tjr 3 days ago

        Maybe pointless, but I for one disagree with such rulings. Existing copyright law was formed as a construct between human producers and human consumers. I doubt that any human producers prior to a few years ago had any clue that their work would be fed into proprietary AI systems in order to build machines that generate huge quantities of more such works, and I think it fair to consider that they might have taken a different path had they known this.

        To retroactively grant proprietary AI training rights on all copyrighted material, on the basis that it's no different from humans learning, is, I think, misguided.

    • bdangubic 4 days ago

      There isn’t a company in the United States of 50 or more people which doesn’t have daily/weekly/monthly “AI” meetings (I’ve been attending dozens this year, as recently as Tuesday). Comments like yours exist only on HN, where a select group of people love talking about bubbles and illusions while the rest of us are getting sh*t done at a pace we could not have fathomed just a year or so ago…

      • bgwalter 4 days ago

        I am sure that "AI" is great for generating new meetings and for creating documentation of how valuable those meetings are. It is also great at generating justifications for projects and for how it speeds those projects up.

        I am sure that the 360° performance reviews have never looked better.

        Your experience is contradicted by the usually business-friendly Economist:

        https://www.economist.com/finance-and-economics/2025/11/26/i...