niyikiza 3 days ago

You're right, they should be responsible. The problem is proving it. "I asked it to summarize reports, it decided to email the competitor on its own" is hard to refute with current architectures.

And when sub-agents or third-party tools are involved, liability gets even murkier. Who's accountable when the action executed three hops away from the human? The article argues for receipts that make "I didn't authorize that" a verifiable claim.
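
Roughly the shape such a receipt could take: a minimal sketch, assuming Python with the cryptography package. The key handling, field names, and the crude substring scope check are all illustrative, not the article's actual design.

    # Illustrative only: a signed "receipt" binding an instruction to the
    # human's key, so "I didn't authorize that" is checkable after the fact.
    import json, time
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()  # in practice, held by the human
    verify_key = signing_key.public_key()

    def make_receipt(principal, instruction):
        body = {"principal": principal, "instruction": instruction, "ts": time.time()}
        payload = json.dumps(body, sort_keys=True).encode()
        return {"body": body, "sig": signing_key.sign(payload).hex()}

    def covers(receipt, action):
        # An action counts as authorized only if the receipt's signature
        # verifies AND its instruction covers the action (substring check,
        # crude on purpose).
        payload = json.dumps(receipt["body"], sort_keys=True).encode()
        try:
            verify_key.verify(bytes.fromhex(receipt["sig"]), payload)
        except InvalidSignature:
            return False
        return action in receipt["body"]["instruction"]

    r = make_receipt("analyst", "summarize the Q3 reports")
    print(covers(r, "summarize the Q3 reports"))  # True
    print(covers(r, "email the competitor"))      # False: nothing signed covers it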

bulatb 3 days ago

There's nothing to prove. Responsibility means you accept the consequences for its actions, whatever they are. You own the benefit? You own the risk.

If you don't want to be responsible for whatever a tool that might do anything at all ends up doing, don't use the tool.

The other option is admitting that you don't accept responsibility, not looking for a way to be "responsible" but not accountable.

  • tossandthrow 3 days ago

    Sounds good in theory, doesn't work in reality.

    Had it worked then we would have seen many more CEOs in prison.

    • walt_grata 3 days ago

      A few edge cases where it doesn't work don't mean it doesn't work in the majority of cases, or that we shouldn't try to fix those edge cases.

    • freejazz 3 days ago

      This isn't a legal argument and these conversations are so tiring because everyone here is insistent upon drawing legal conclusions from these nonsense conversations.

    • NoMoreNicksLeft 3 days ago

      The veil of liability is built into statute, and it's no accident.

      No such magic forcefield exists for you, though.

    • bulatb 3 days ago

      We're talking about different things. To take responsibility is to volunteer to accept accountability without a fight.

      In practice, almost everyone is held potentially or actually accountable for things they never had a choice in. Some are never held accountable for things they freely choose, because they have some way to dodge accountability.

      The CEOs who don't accept accountability were lying when they said they were responsible.

    • Muromec 3 days ago

      [flagged]

      • direwolf20 3 days ago

        You're not doing any favors to your hirability with those first two sentences.

        • Muromec 3 days ago

          The market is almighty, but it's all-merciful as well, and thankfully, not all-knowing.

LeifCarrotson 3 days ago

> "I asked it to summarize reports, it decided to email the competitor on its own" is hard to refute with current architectures.

No, it's trivial: "So you admit you uploaded confidential information to the unpredictable tool with wide capabilities?"

> Who's accountable when the action executed three hops away from the human?

The human is accountable.

  • gowld 3 days ago

    What if you carried a stack of papers between buildings on a windy day, and the papers blew away?

    • bigfishrunning 3 days ago

      You should have put the papers in a briefcase or a bag. You are responsible.

  • pixl97 3 days ago

    As the saying goes

    ----

    A computer can never be held accountable

    Therefore a computer must never make a management decision

    • direwolf20 3 days ago

      That saying dates from a time when companies were accountable for their results and needed to pin the accountability on a person to deter bad results. You couldn't let a computer make a decision because a computer can't be deterred by accountability.

      Now companies are all about doing bad all the time; they know they're doing it, and they need to avoid any individual being accountable for it. Computers are the perfect tool for making decisions without obvious accountability.

  • Muromec 3 days ago

    >The human is accountable.

    That's an orthodoxy. It holds for now (in theory and most of the time), but it's just an opinion, like a lot of other things.

    Who is accountable when we have a recession or when people can't afford whatever we strongly believe should be affordable? The system, the government, the market, late stage capitalism or whatever. Not a person that actually goes to jail.

    If the value proposition becomes attractive, we can choose to believe that the human is not in fact accountable here, but the electric shaitan is. We just didn't pray well enough, but we did our best, really. What else can we expect?

phoe-krk 3 days ago

> "I asked it to summarize reports, it decided to email the competitor on its own" is hard to refute with current architectures.

If one decided to paint a school's interior with toxic paint, it's not "the paint poisoned them on its own", it's "someone chose to use a paint that can poison people".

Somebody was responsible for choosing to use a tool that has this class of risks and explicitly did not follow known and established protocol for securing against such risk. Consequences are that person's to bear - otherwise the concept of responsibility loses all value.

  • Muromec 3 days ago

    >Somebody was responsible for choosing to use a tool that has this class of risks and explicitly did not follow known and established protocol for securing against such risk. Consequences are that person's to bear - otherwise the concept of responsibility loses all value.

    What if I hire you (instead of an LLM) to summarize the reports and you decide to email the competitors? What if we work in an industry where you have to be sworn in with an oath to protect secrecy? What if I did (or didn't) check with the police about your previous deeds, but it's the first time you emailed competitors? What if you are a schizo who heard God's voice telling you to do so, and it's the first episode you ever had?

    • phoe-krk 3 days ago

      The difference is that LLMs are known to hallucinate regularly and commonly as their main (and only) mode of internal functioning. Human intelligence, empirically, is more than just a stochastic probability engine, and therefore has different standards applied to it than whatever machine intelligence currently exists.

  • im3w1l 3 days ago

    > otherwise the concept of responsibility loses all value.

    Frankly, I think that might be exactly where we end up going. Finding a responsible person to punish is just a tool we use to achieve good outcomes, and if scare tactics are no longer applicable to the way we work, it might be time to discard it.

    • phoe-krk 3 days ago

      A brave new world that is post-truth, post-meaning, post-responsibility, and post-consequences. One where the AI's hallucinations eventually drag everyone with it and there's no other option but to hallucinate along.

      It's scary that a nuclear exit starts looking like an enticing option when confronted with that.

      • im3w1l 3 days ago

        Ultimately the goal is to have a system that prevents mistakes as much as possible and adapts and self-corrects when they do happen. Even with science we acknowledge that mistakes happen and people draw incorrect conclusions, but the goal is to make that a temporary state that is fixed as more information comes in.

        I'm not claiming to have all the answers about how to achieve that, but I am fairly certain punishment is not a necessary part of it.

      • direwolf20 3 days ago

        I saw some people saying the internet, particularly brainrot social media, has made everyone mentally twelve years old. It feels like it could be true.

        Twelve-year-olds aren't capable of dealing with responsibility or consequence.

      • Muromec 3 days ago

        >A brave new world that is post-truth, post-meaning, post-responsibility, and post-consequences. One where the AI's hallucinations eventually drag everyone with it and there's no other option but to hallucinate along.

        That value proposition depends entirely on whether there is also an upside to all of that. Do you actually need truth, meaning, responsibility and consequences while you are tripping on acid? Do you even need to be alive and have a physical organic body for that? What if Ikari Gendo was actually right and everyone else is just an asshole who won't let him be with his wife?

groby_b 3 days ago

"And when sub-agents or third-party tools are involved, liability gets even murkier."

It really doesn't. That falls straight on Governance, Risk, and Compliance. Ultimately, the CISO, CFO, and CEO are in the line of fire.

The article's argument happens in a vacuum of facts. The fact that a security engineer doesn't know that is depressing, but not surprising.

  • Muromec 3 days ago

    >The fact that a security engineer doesn't know that is depressing, but not surprising.

    That's a very subtle guinea pig joke right there.

QuadmasterXLII 3 days ago

This doesn't seem conceptually different from running

    [ $((RANDOM % 6)) -eq 0 ] && rm -rf / || echo "Click"
on your employer's production server, and the liability doesn't seem murky in either case.

  • staticassertion 3 days ago

    What if you wrote something more like:

        # terrible code, never use this
        import os

        def cleanup(dir):
            os.system(f"rm -rf {dir}")

        def main():
            work_dir = os.environ["WORK_DIR"]
            cleanup(work_dir)

    and then, due to a misconfiguration, "$WORK_DIR" was truncated to just "/"?

    At what point is it negligent?
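
    For contrast, a guarded version closes off that whole failure class. A minimal sketch, standard library only; the specific refusal policy here is just an illustration, not a complete safety net:

        # Same job, but refuse the obviously catastrophic inputs first.
        import os, shutil, sys
        from pathlib import Path

        def cleanup(work_dir):
            path = Path(work_dir).resolve()
            if not work_dir.strip() or path == Path("/"):
                sys.exit(f"refusing to delete {path}: likely misconfiguration")
            shutil.rmtree(path)

        def main():
            cleanup(os.environ["WORK_DIR"])  # KeyError beats a silent ""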

    • direwolf20 3 days ago

      This is not hypothetical. Steam and Bumblebee did it.

      • extraduder_ire 3 days ago

        That was the result of an additional space in the path passed to rm, IIRC.

        Though rm -rf /$TARGET where $TARGET is blank is a common enough footgun that --preserve-root exists and is the default.

      • a_t48 3 days ago

        Bungie, too, in a similar way.

groby_b 3 days ago

"Our tooling was defective" is not, in general, a defence against liability. Part of a companys obligations is to ensure all its processes stay within lawful lanes.

"Three months later [...] But the prompt history? Deleted. The original instruction? The analyst’s word against the logs."

One, the analyst's word does not override the logs; that's the point of logs. Two, it's fairly clear the author of the fine article has never worked close to finance. A three-month retention period for AI queries by an analyst is not an option.

SEC Rule 17a-4 & FINRA Rule 4511 have entered the chat.

freejazz 3 days ago

The burden of substantiating a defense is upon the defendant and no one else.