Comment by groby_b 20 hours ago

Given that the entire industry is in a frenzy to enable "agentic" AI - i.e., hooking up tools that have actual effects in the world - that is at best a rather naive take.

Yes, LLMs can and do take actions in the world, because things like MCP (the Model Context Protocol) allow them to translate speech into action, without a human in the loop.
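
To make this concrete, here's a minimal sketch of what "no human in the loop" can look like, assuming the MCP Python SDK's FastMCP server interface (the send_email tool itself is a hypothetical example):

    # Sketch only: FastMCP is the MCP Python SDK's server helper;
    # send_email is a made-up tool with a real-world side effect.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("ops-tools")

    @mcp.tool()
    def send_email(to: str, subject: str, body: str) -> str:
        """Send an email on behalf of the user."""
        # No confirmation step here: once a client connects, the model
        # can call this tool and the side effect happens immediately.
        ...  # hand off to an SMTP client, ticketing system, etc.
        return f"sent to {to}"

    if __name__ == "__main__":
        mcp.run()  # expose the tool over stdio to any connected client

Whether a human approves each call is a client-side policy choice, not something the tool definition itself enforces.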

actsasbuffoon 18 hours ago

Exactly this. 70% of CEOs say that they hope to be able to lay people off and replace them with an LLM soon. It doesn’t matter that LLMs are incapable of reasoning at even the same level as an elementary school child. They’ll do it because it’s cheap and trendy.

Many companies are already pushing LLMs into roles where they make decisions. It’s only going to get worse. The surface area for attacks against LLM agents is absolutely colossal, and I’m not confident that the problems can be fixed.

  • musicale 14 hours ago

    > 70% of CEOs say that they hope to be able to lay people off and replace them with an LLM soon

    Is the layoff-based business model really the best use case for AI systems?

    > The surface area for attacks against LLM agents is absolutely colossal, and I’m not confident that the problems can be fixed.

    The flaws are baked into the training data.

    "Trust but verify" applies, as do Murphy's law and the law of unintended consequences.

3np 18 hours ago

I see far more offerings pushing these flows onto the market than actual adoption of them in practice. It's a solution in search of a problem, and I doubt most vendors are fully eating their own dogfood as anything but contained experiments.

throw10920 15 hours ago

> that is at best a rather naive take.

No more naive than correctly pointing out that writing code for ffmpeg doesn't mean you're enabling streaming services to redefine the meaning of the phrase "ad-free" just because you're allowing them to continue existing.

The problem is not the existence of the library that enables streaming services (or, in this analogy, AI "safety"); it's that nobody is ensuring that the companies misusing the technology are prevented from doing so.

"A company is trying to misuse technology so we should cripple the tech instead of fixing the underlying social problem of the company's behavior" is, quite frankly, an absolutely insane mindset, and is the reason for a lot of the evil we see in the world today.

You cannot and should not try to fix social or governmental problems with technology.

what 18 hours ago

That would still be on whoever set up the agent and allowed it to take action, though.

  • mitthrowaway2 18 hours ago

    For professional engineers, who have a duty to public safety, it's not enough to build an unsafe footbridge and hang up a sign saying "cross at your own risk".

    It's certainly not enough to build a cheap, un-flight-worthy airplane and then say "but if this crashes, that's on the airline dumb enough to fly it".

    And it's very certainly not enough to put cars on the road with no working brakes, while saying "the duty of safety is on whoever chose to turn the key and push the gas pedal".

    For most of us, we do actually have to do better than that.

    But apparently not AI engineers?

    • what 17 hours ago

      Maybe my comment wasn’t clear, but it is on the AI engineers. Anyone who deploys something that uses AI should be responsible for “its” actions.

      Maybe it’s even on the makers of the model, but that’s less clear. If you produced a bolt that wasn’t to spec and it failed, that would probably be on you.

  • actsasbuffoon 18 hours ago

    As far as responsibility goes, sure. But when companies push LLMs into decision-making roles, you could end up being hurt by this even if you’re not the responsible party.

    If you thought bureaucracy was dumb before, wait until the humans are replaced with LLMs that can be tricked into telling you how to make meth by asking them to role-play as Dr. House.