Comment by falcor84 6 months ago

I don't quite see their point. Obviously if you're delegating the task to someone/something then you're not getting as good at it as if you were to do it yourself. If I were to write machine code by hand, rather than having the compiler do it for me, I would definitely be better at it and have more neural circuitry devoted to it.

As I see it, it's much more interesting to ask not whether we are still good at doing the work that computers can do for us, but whether we are now better at the higher-level tasks that computers can't yet do on their own.

devmor 6 months ago

Your question is answered by the study abstract.

> Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

  • falcor84 6 months ago

    But it's not that they "underperformed" at life in general - they underperformed when assessed on various aspects of a task they weren't practicing. To me it's as if they ran a trial where one group played basketball while another acted as referees - of course, when tested on ball control, those who were dribbling and throwing would do better, but that tells us nothing about how those acting as referees performed at their own thing.

    • devmor 6 months ago

      I see what you’re getting at now. I agree I’d like to see a more general trial that measures changes in overall problem-solving ability after a test group uses LLMs for a specific problem-solving task, vs. a control group that doesn't use them.

      • rightbyte 6 months ago

        Are there such tests? Sounds like IQ tests to me, which would be quite an indirect measurement.