nuancebydefault 2 days ago

The article discusses basically 2 new problems with using agentic AI:

- When one of the agents does something wrong, a human operator needs to intervene quickly and provide the agent with expert instructions. But since experts no longer execute the underlying tasks themselves, they quickly lose parts of their expertise. This means the experts need constant training, which leaves them little time to oversee the agents' work.

- Experts must become managers of agentic systems, a role they are unfamiliar with, so they no longer feel at home in their job. This problem is harder for the experts' own managers to recognize, since they rarely experience it first hand.

Indeed, the irony is that AI provides efficiency gains which, as they become more widely adopted, become more problematic because they erode the necessary human in the loop.

I think this all means that automation is not taking away everyone's job: it makes things more complicated, and hence humans can still compete.

grvdrm 2 days ago

Your first problem doesn’t feel new at all. It reminded me of a situation several years ago, when what had been an Excel report was automated into Power BI. Great, right? Time saved. Etc.

But the report was very wrong for months, maybe longer. And since it was automated, the instinct to check and validate was gone. Tracking down the problem required extra work that hadn’t been part of the Excel flow.

I use this example in all of my automation conversations to remind people to be thoughtful about where and when they automate.

  • all2 2 days ago

    Thoughtfulness is sometimes increased by touch time. I've seen various examples of this over time; teachers who must collate and calculate grades manually showed improved outcomes for their students, test techs who handle hardware becoming acutely aware of the many failure modes of the hardware, and so on.

    • grvdrm 19 hours ago

      Said another way: extra touch might mean more accountable thinking.

      Higher touch: "I am responsible for creating this report. It better be right."
      Automated touch: "I sent you the report; it's right because it's automated."

      Mistakes possible either way. But I like higher-touch in many situations.

      Curious if you have links to examples you mention?

      • all2 13 hours ago

        The teacher example was from one of those pop-psych books on being more efficient with one's time. I can't remember the title off the top of my head. Another example in the book applied the author's model of thinking to a plane crash in the Pacific. I'm sorry, man. It's been a long time.

asielen 2 days ago

The way you put that makes me think of the challenge younger generations are having with technology in general: kids raised on touch-screen interfaces vs. kids in older generations raised on computers that required more technical skill to figure out.

In the same way, when everything just works, there will be no difference, but when something goes wrong, the person who learned the skills before will have a distinct advantage.

The question is whether AI gets good enough that occasionally slowing down to find a specialist is tenable. It doesn't need to be perfect, it just needs to be predictably not perfect.

Experts will always be needed, but they may be more like car mechanics: there to fix hopefully rare issues and provide a tune-up, rather than building the cars themselves.

  • jeffreygoesto 2 days ago

    Car mechanics face the same problem today with rare issues. They know the standard mechanical procedures, but often they can't track a problem down and can only try re-flashing the ECU or swapping it out. They also don't admit they are wrong, at least most of the time...

    • c0balt 2 days ago

      > only try to flash over an ECU or try swapping it.

      To be fair, they have wrenches thrown in their way there, as many ECUs and other computer-driven components are fairly locked down and undocumented. Especially since the programming software itself is often not freely distributed (only to approved shops/dealers).

delaminator 2 days ago

I used to be a maintenance data analyst in a welding plant welding about 1 million units per month.

I was the only person in the factory who was a qualified welder.

layer8 2 days ago

They also made the point that the less frequent failures become, the more tedious it is for the human operator to check for them, giving the example of AI agents providing verbose plans of what they intend to do that are mostly fine, but will occasionally contain critical failures that the operator is supposed to catch.

DiscourseFan 2 days ago

That's how it tends to go, automation removes some parts of the work but creates more complexity. Sooner or later that will also be automated away, and so on and so forth. AGI evangelists ought to read Marx's Capital.

  • jennyholzer2 2 days ago

    I seriously doubt that there is even one "AGI evangelist" who has the intellectual capacity to read books written for adult audiences.

    • bitwize 2 days ago

      Marxists have a tendency to think that the Venn diagram of "people who have read and understood Marx" and "Marxists" is a circle. There are plenty of AGI evangelists who are smart enough to read Marx, and many of them probably have. The problem is that, being technolibertarians and all, they think Marx is the enemy.

      • DiscourseFan 2 days ago

        That seems patently absurd, considering that the debate is not between Marxists and non-Marxists but between accelerationists and orthodox Marxists, both of whom are readers of Marx; it's just that the former are aligned with technolibertarianism.

    • ctoth 2 days ago

      Hi. I am not an evangelist -- I'm quite certain it's going to kill us all! But I would like to think that I'm about the closest thing to an AI booster you might find here, given that I get so much damn utility out of it. I'm interested in reading; I probably read too much! Would you like to suggest a book we can discuss next week? I'd be happy to do this with you.

      • wizzwizz4 a day ago

        If you're "quite certain it's going to kill us all", then you are extremely foolish to not be opposing it. Do you think there's some kind of fatalistic inevitability? If so… why? Conjectures about the inevitable behaviour of AI systems only apply once the AI systems exist.