The Hallucination Defense
(niyikiza.com)
56 points by niyikiza 3 days ago
The distinction some people are making is between copy/pasting text vs. agentic action. Generally, mistakes in "work product" (output from ChatGPT that the human then files with a court, etc.) are not forgiven, because if you signed the document, you own its content. Contrast that with some vendor-provided AI agent which simply takes action on its own that a "reasonable person" would not have expected it to. Often we forgive those kinds of software bloopers.
"Agentic action" is just running a script. All that's different is now people are deploying scripts that they don't understand and can't predict the outcome of.
It's negligence, pure and simple. The only reason we're having this discussion is that a trillion dollars was spent writing said scripts.
If you put a brick on the accelerator of a car and hop out, you don't get to say "I wasn't even in the car when it hit the pedestrian".
This is true for bricks, but it is not true if your dog starts up your car and hits a pedestrian. Collisions caused by non-human drivers are a fascinating edge case for the times we're in.
If I hire an engineer and that engineer authorizes an "agent" to take an action, if that "agentic action" then causes an incident, guess whose door I'm knocking on?
Engineers are accountable for the actions they authorize. Simple as that. The agent can do nothing unless the engineer says it can. If the engineer doesn't feel they have control over what the agent can or cannot do, under no circumstances should it be authorized. To do so would be alarmingly negligent.
This extends to products. If I buy a product from a vendor and that product behaves in an unexpected and harmful manner, I expect that vendor to own it. I don't expect error-free work, yet nevertheless "our AI behaved unexpectedly" is not a deflection, nor is it satisfactory when presented as a root cause.
To me, it's 100% clear - if your tool use is reckless or negligent and results in a crime, then you are guilty of that crime. "It's my robot, it wasn't me" isn't a compelling defense - if you can prove that it behaved significantly outside of your informed or contracted expectations, then maybe the AI platform or the robot developer could be at fault. Given the current state of AI, though, I think it's not unreasonable to expect that any bot can go rogue and that huge, trivially accessible jailbreak risks exist, so there's no excuse for deploying an agent onto the public internet to do whatever it wants outside direct human supervision. If you're running moltbot or whatever, you're responsible for what happens, even if the AI decided the best way to get money was to hack the Federal Reserve and assign a trillion dollars to an account in your name. Or if Grok goes mechahitler and orders a singing telegram to Will Stancil's house, or something. These are tools; complex, complicated, unpredictable tools that need skillful and careful use.
There was a notorious dark web bot case where someone created a bot that autonomously went onto the dark web and purchased numerous illicit items.
https://wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww.bitnik.or...
They bought some ecstasy, a Hungarian passport, and random other items from Agora.
>The day after they took down the exhibition showcasing the items their bot had bought, the Swiss police “arrested” the robot, seized the computer, and confiscated the items it had purchased. “It seems, the purpose of the confiscation is to impede an endangerment of third parties through the drugs exhibited, by destroying them,” someone from !Mediengruppe Bitnik wrote on their blog.
>In April, however, the bot was released along with everything it had purchased, except the ecstasy, and the artists were cleared of any wrongdoing. But the arrest had many wondering just where the line gets drawn between human and computer culpability.
That darknet bot one always confuses me. The artists/programmers/whatever specifically instructed the computer, through the bot, to perform actions that would likely result in breaking the law. It's not a side effect of some other, legal action which they were trying to accomplish; its entire purpose was to purchase things on a marketplace known for hosting illegal goods and services.
If I build an autonomous robot that swings a hunk of steel on the end of a chain and then program it to travel to where people are likely to congregate and someone gets hit in the face, I would rightfully be held liable for that.
> To me, it's 100% clear - if your tool use is reckless or negligent and results in a crime, then you are guilty of that crime.
For most crimes, this is circular, because whether a crime occurred depends on whether a person did the requisite act of the crime with the requisite mental state. A crime is not an objective thing independent of an actor that you can determine happened as a result of a tool and then conclude guilt for based on tool use.
And for many crimes, recklessness or negligence as mental states are not sufficient for the crime to have occurred.
For negligence that results in the death of a human being, many legal systems make a distinction between negligent homicide and criminally negligent homicide. Where the line is drawn depends on a judgment call, but in general you're found criminally negligent if your actions are completely unreasonable.
A good example might be this. In one case, a driver's brakes fail and he hits and kills a pedestrian crossing the street. It is found that he had not done proper maintenance on his brakes, and the failure was preventable. He's found liable in a civil case, because his negligence led to someone's death, but he's not found guilty of a crime, so he won't go to prison. A different driver was speeding, driving at highway speeds through a residential neighborhood. He turns a corner and can't stop in time to avoid hitting a pedestrian. He is found criminally negligent and goes to prison, because his actions were reckless and beyond what any reasonable person would do.
The first case was ordinary negligence: still bad because it killed someone, but not so obviously stupid that the person should be in prison for it. The second case is criminal negligence, or in some legal systems it might be called "reckless disregard for human life". He didn't intend to kill anyone, but his actions were so blatantly stupid that he should go to prison for causing the pedestrian's death.
"computer culpability"
That idea is really weird. Culpa (and dolus) in occidental law is a thing of the mind, what you understood or should have understood.
A database does not have a mind, and it is not a person. If it could have culpa, then you'd be liable for assault, perhaps murder, if you took it apart.
If the five year old was a product resulting from trillions of dollars in investments, and the marketability of that product required people to be able to hand guns to that five year old without liability, then we would at least be having that discussion.
Purely organically of course.
> if you signed the document, you own its content. Versus some vendor-provided AI Agent which simply takes action on its own
Yeah, that's exactly the model I think we should adopt for AI agent tool calls as well: cryptographically signed, task-scoped "warrants" that remain traceable even across multi-agent delegation chains.
Kind of like https://github.com/cursor/agent-trace but cryptographically signed?
> Agent Trace is an open specification for tracking AI-generated code. It provides a vendor-neutral format for recording AI contributions alongside human authorship in version-controlled codebases.
Similar space, different scope/approach. Tenuo warrants track who authorized what across delegation chains (human to agent, agent to sub-agent, sub-agent to tool) with cryptographic proof & PoP at each hop. Agent Trace tracks provenance; warrants track authorization flow. Both are open specs. I could see them complementing each other.
Why does it need cryptography even? If you gave the agent a token to interact with your bank account, then you gave it permission. If you want to limit the amount it is allowed to send and restrict the list of recipients, put a filter between the account and the agent that enforces those rules. If you want money to be sent only against an invoice, have the filter check that an invoice reference is provided by the agent. If you did neither of those and the platform that runs the agents didn't accept the liability, it's on you. Setting up filters and engineering prompts is on you too.
Now, if you did all of that but made a bug in implementing the filter, then you at least tried and weren't negligent, but it's still on you.
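A minimal sketch of that kind of filter, assuming made-up names and limits (the bank_client, field names, allowlist, and caps here are purely illustrative, not any real API):

    # Hypothetical deterministic filter between the agent and the bank API.
    # The agent never holds the bank token; it can only submit requests here.

    ALLOWED_RECIPIENTS = {"ACME-SUPPLIES-GMBH", "OFFICE-LANDLORD-LLC"}  # example allowlist
    MAX_AMOUNT_EUR = 5_000                                              # example per-transfer cap

    def check_payment(request: dict, open_invoices: dict) -> None:
        """Raise if the agent's payment request violates any hard rule."""
        if request["recipient"] not in ALLOWED_RECIPIENTS:
            raise PermissionError("recipient not on allowlist")
        if request["amount_eur"] > MAX_AMOUNT_EUR:
            raise PermissionError("amount exceeds per-transfer cap")
        invoice = open_invoices.get(request.get("invoice_ref"))
        if invoice is None:
            raise PermissionError("no matching open invoice reference")
        if invoice["amount_eur"] != request["amount_eur"]:
            raise PermissionError("amount does not match the invoice")

    def pay_if_allowed(request, open_invoices, bank_client):
        check_payment(request, open_invoices)   # deterministic gate, not a prompt
        bank_client.transfer(recipient=request["recipient"],
                             amount_eur=request["amount_eur"])

The agent can propose whatever payment it likes; only requests that pass every check ever reach the bank client.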
Actually, things are heading in a good direction re:AI bloopers.
Courts of law have already found that AI interactions with customers are binding, even if said interactions are considered "bloopers" by the vendor[1]
[1] https://www.forbes.com/sites/marisagarcia/2024/02/19/what-ai...
That's quickly becoming difficult to determine.
The workflow of starting dozens or hundreds of "agents" that work autonomously is starting to gain traction. The goal of people who work like this is to completely automate software development. At some point they want to be able to give the tool an arbitrary task, presumably one that benefits them in some way, and have it build, deploy, and use software to complete it. When millions of people are doing this, and the layers of indirection grow in complexity, how do you trace the result back to a human? Can we say that a human was really responsible for it?
Maybe this seems simple today, but the challenges this technology forces on society are numerous, and we're far from ready for it.
This is the problem we're working on.
When orchestrators spawn sub-agents spawn tools, there's no artifact showing how authority flowed through the chain.
Warrants are a primitive for this: signed authorization that attenuates at each hop. Each delegation is signed, scope can only narrow, and the full chain is verifiable at the end. Doesn't matter how many layers deep.
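A rough sketch of the attenuation rule (invented field names, not the actual Tenuo wire format): each hop hands a signed record to the next, and verification walks the chain checking that authority only ever narrows.

    from dataclasses import dataclass

    @dataclass
    class Warrant:
        issuer: str           # who granted this hop (the human, or the parent agent)
        subject: str          # who receives the authority (agent, sub-agent, or tool)
        scope: frozenset      # actions permitted at this hop, e.g. {"files:read"}
        signature: bytes      # issuer's signature over the fields above (checking omitted here)

    def chain_is_attenuating(chain: list) -> bool:
        """True only if every delegation connects to its parent and narrows its scope."""
        for parent, child in zip(chain, chain[1:]):
            if child.issuer != parent.subject:   # hops must actually link up
                return False
            if not child.scope <= parent.scope:  # scope may only shrink, never widen
                return False
        return True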
Except for the fact that that very accountability sink is relied on by senior management/CxO's the world over. The only difference is that before AI, it was the middle manager's fault. We didn't tell anyone to break the law. We just put in place incentive structures that require it, and play coy, then let anticipatory obedience do the rest. Bingo. Accountability severed. You can't prove I said it in a court of law, and skeevy shit gets done because some poor bloke down the ladder is afraid of getting fired if he doesn't pull out all the stops to meet productivity quotas.
AI is just better because no one can actually explain why the thing does what it does. Perfect management scapegoat without strict liability being made explicit in law.
Did I give the impression that the phenomena was unique to software? Hell, Boeing was a shining example of the principle in action with 737 MAX. Don't get much more "people live and die by us, and we know it (but management set up the culture and incentives to make a deathtrap anyway)." No one to blame of course. These things just happen.
Licensure alone doesn't solve all these ills. And for that matter, once regulatory capture happens, it has a tendency to make things worse due to consolidation pressure.
>AI is just better because no one can actually explain why the thing does what it does. Perfect management scapegoat without strict liability being made explicit in law.
AI is worse in that regard, because, although you can't explain why it does so, you can point a finger at it, say "we told you so" and provide the receipts of repeated warnings that the thing has a tendency of doing the things.
Yeah. Legal will need to catch up to deal with some things, surely, but the basic principles for this particular scenario aren't that novel. If you're a professional and have an employee acting under your license, there's already liability. There is no warrant concept (not that I can think of right now, at least) that will obviate the need to check the work and carry professional liability insurance. There will always be negligence and bad actors.
The new and interesting part is that while we have incentives and deterrents to keep our human agents doing the right thing, there isn't really an analog to check the non-human agent. We don't have robot prison yet.
I feel like you have missed the point of this. It isn't to completely absolve the user of liability, it's to prove malice instead of incompetence.
If the user claims that they only authorized the bot to review files, but they've warranted the bot to both scan every file and also send emails to outside sources, the competitors in this case, then you now have proof that the user was planning on committing corporate espionage.
To use a more sane version of an example below, if your dog runs outside the house and mauls a child, you are obviously guilty of negligence, but if there's proof of you unleashing the dog and ordering the attack, you're guilty of murder.
> It shouldn't matter, because whoever is producing the work product is responsible for it, no matter whether genAI was involved or not.
I hate to ask, but did you RTFA? Scrolling down ever so slightly (emphasis not my own) | *Who authorized this class of action, for which agent identity, under what constraints, for how long; and how did that authority flow?*
| A common failure mode in agent incidents is not “we don’t know what happened,” but:
| > We can’t produce a crisp artifact showing that a specific human explicitly authorized the scope that made this action possible.
They explicitly state that the problem is you don't know which human to point at.
> They explicitly state that the problem is you don't know which human to point at.
The point is "explicitly authorized", as the article emphasizes. It's easy to find who ran the agent (the article assumes they have the OAuth log). This article is about 'Everyone knows who did it, but did they do it on purpose? Our system can figure it out.'
You're right, they should be responsible. The problem is proving it. "I asked it to summarize reports, it decided to email the competitor on its own" is hard to refute with current architectures.
And when sub-agents or third-party tools are involved, liability gets even murkier. Who's accountable when the action executed three hops away from the human? The article argues for receipts that make "I didn't authorize that" a verifiable claim
There's nothing to prove. Responsibility means you accept the consequences for its actions, whatever they are. You own the benefit? You own the risk.
If you don't want to be responsible for what a tool that might do anything at all might do, don't use the tool.
The other option is admitting that you don't accept responsibility, not looking for a way to be "responsible" but not accountable.
Sounds good in theory, doesn't work in reality.
Had it worked then we would have seen many more CEOs in prison.
> "I asked it to summarize reports, it decided to email the competitor on its own" is hard to refute with current architectures.
No, it's trivial: "So you admit you uploaded confidential information to the unpredictable tool with wide capabilities?"
> Who's accountable when the action executed three hops away from the human?
The human is accountable.
You should have put the papers in a briefcase or a bag. You are responsible.
That's when companies were accountable for their results and needed to push the accountability to a person to deter bad results. You couldn't let a computer make a decision because the computer can't be deterred by accountability.
Now companies are all about doing bad all the time, they know they're doing it, and need to avoid any individual being accountable for it. Computers are the perfect tool to make decisions without obvious accountability.
>The human is accountable.
That's an orthodoxy. It holds for now (in theory and most of the time), but it's just an opinion, like a lot of other things.
Who is accountable when we have a recession or when people can't afford whatever we strongly believe should be affordable? The system, the government, the market, late stage capitalism or whatever. Not a person that actually goes to jail.
If the value proposition becomes attractive, we can choose to believe that the human is not in fact accountable here, but the electric shaitan is. We just didn't pray good enough, but did our best really. What else can we expect?
> "I asked it to summarize reports, it decided to email the competitor on its own" is hard to refute with current architectures.
If one decided to paint a school's interior with toxic paint, it's not "the paint poisoned them on its own", it's "someone chose to use a paint that can poison people".
Somebody was responsible for choosing to use a tool that has this class of risks and explicitly did not follow known and established protocol for securing against such risk. Consequences are that person's to bear - otherwise the concept of responsibility loses all value.
>Somebody was responsible for choosing to use a tool that has this class of risks and explicitly did not follow known and established protocol for securing against such risk. Consequences are that person's to bear - otherwise the concept of responsibility loses all value.
What if I hire you (instead of LLM) to summarize the reports and you decide to email the competitors? What if we work in the industry where you have to be sworn in with an oath to protect secrecy? What if I did (or didn't) check with the police about your previous deeds, but it's first time you emailed competitors? What if you are a schizo that heard God's voice that told you to do so and it's the first episode you ever had?
The difference is LLMs are known to regularly and commonly hallucinate as their main (and only) way of internal functioning. Human intelligence, empirically, is more than just a stochastic probability engine, therefore has different standards applied to it than whatever machine intelligence currently exists.
> otherwise the concept of responsibility loses all value.
Frankly, I think that might be exactly where we end up going. Finding a responsible person to punish is just a tool we use to achieve good outcomes, and if scare tactics is no longer applicable to the way we work, it might be time to discard it.
"And when sub-agents or third-party tools are involved, liability gets even murkier."
It really doesn't. That falls straight on Governance, Risk, and Compliance. Ultimately, CISO, CFO, CEO are in the line of fire.
The article's argument happens in a vacuum of facts. The fact that a security engineer doesn't know that is depressing, but not surprising.
This doesn't seem conceptually different from running
[ $[ $RANDOM % 6] = 0 ] && rm -rf / || echo "Click"
on your employer's production server, and the liability doesn't seem murky in either case.
What if you wrote something more like:
    # terrible code, never use this
    import os

    def cleanup(dir):
        os.system(f"rm -rf {dir}")

    def main():
        work_dir = os.environ["WORK_DIR"]
        cleanup(work_dir)
and then due to a misconfiguration "$WORK_DIR" was truncated to be just "/"? At what point is it negligent?
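For contrast, a defensive version of the same cleanup might refuse obviously dangerous inputs before deleting anything (a sketch only; the allowed "/srv/jobs/" prefix is an arbitrary example):

    import os
    import shutil

    def cleanup(work_dir: str) -> None:
        resolved = os.path.realpath(work_dir)
        # Refuse values that could only come from misconfiguration.
        if resolved in ("/", "") or not resolved.startswith("/srv/jobs/"):
            raise ValueError(f"refusing to delete suspicious path: {resolved!r}")
        shutil.rmtree(resolved)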
"Our tooling was defective" is not, in general, a defence against liability. Part of a companys obligations is to ensure all its processes stay within lawful lanes.
"Three months later [...] But the prompt history? Deleted. The original instruction? The analyst’s word against the logs."
One, the analyst's word does not override the logs; that's the point of logs. Two, it's fairly clear the author of the fine article has never worked close to finance. A three-month retention period for AI queries by an analyst is not an option.
SEC Rule 17a-4 & FINRA Rule 4511 have entered the chat.
FFIEC guidance since '21: https://www.occ.gov/news-issuances/bulletins/2021/bulletin-2...
I'm not sure I understand the point of this. If mistakes (or lies) aren't tolerable in the output, then whoever ran the agent should be responsible for reviewing the output and the resulting actions. I don't see how LLMs can be "responsible" for anything because they don't think in the way we do nor do they have motives in the way we do.
The "Hallucination Defense" isn't a defense because end of the day, if you ran it, you're responsible, IMO.
If one of my reports came to me with that defense, I'd write them up twice. Once for whatever they did wrong, and once for insulting my intelligence and wasting my time with that "defense".
On the contrary, if they just owned up to it, chances are I wouldn't even write them up once.
It's the same thing as saying "It's not my fault, it's a bug in the spreadsheet". The human is ultimately responsible. Heck, it's the same thing as someone saying "the computer made a mistake". Computers don't make mistakes. People do. The hallucination defense is not a defense at all and this is a non-issue. There's nothing to talk about here.
And if a professional wants to delegate their job to non-deterministic software they're not professionals at all. This overreliance on LLMs is going to have long-term consequences for society.
Two things:
1. I vaguely recall that in the early days of Windows, the TOS explicitly told users they were not to use it for certain use-cases and the wording was something like "we don't warranty windows to be good for anything, but we EXPLICITLY do not want you to use Windows to do nuclear foo".
I expect that if the big LLM vendors aren't already doing this, they soon will. Kind of like how Fox News claims that some of the heads on its screen are not journalists at all but merely entertainers (and you wouldn't take anything they say seriously, ergo we're not responsible for stuff that happens as a result of people listening to our heads on the screen).
2. IANAL but I believe that in most legal systems, responsibility requires agency. Leaving aside that we call these things "agents" (which is another thing I suspect will change in the marketing), they do not have agency. As a result they must be considered tools. Tools can be used according to the instructions/TOS, or not. If not - whatever it does is down to you. If used within guidelines - it's the manufacturer.
So my conclusion is that the vendors - who have made EPIC bets on getting this tech into the hands of as many folks as possible and making it useful for pretty much anything you can think of - will be faced with a dilemma. At the moment, it seems like they believe that (just like the infamous Ford bean counters) the benefits of not restricting the TOS will far outweigh any consequences from bad things happening. Remains to be seen.
A useful phrase for this when it's an intentional/permitted design: "Accountability Sinks."
Or perhaps scapegoating at scale.
https://news.ycombinator.com/item?id=43877301 - 398 comments
https://news.ycombinator.com/item?id=41891694 - 308 comments
This is some absolute BS. In the current day and age you are 1000% responsible for the externalities of your use of AI.
Read the terms and conditions of your model provider. The document you signed, regardless of whether you read or considered it, explicitly prevents any negative consequences from being passed back to the AI provider.
Unless you have something equally explicit, e.g. "we do not guarantee any particular outcome from the use of our service" (it probably needs to be significantly more explicit than that, IANAL), all responsibility ends up with the entity that itself, or through its agents, foists unreliable AI decisions on downstream users.
Remember, you SIGNED THE AGREEMENT with the AI company that explicitly says its outputs are unreliable!!
And if you DO have some watertight T&C that absolves you of any responsibility for your AI-backed service, then I hope either a) your users explicitly realize what they are signing up for, or b) once a user is significantly burned by your service and you try to hide behind this excuse, you lose all your business.
T&Cs aren't ironclad.
One in which you sell yourself into slavery, for example, would be illegal in the US.
All those "we take no responsibility for the [valet parking|rocks falling off our truck|exploding bottles]" disclaimers are largely attempts to dissuade people from trying.
As an example, NY bans liability waivers at paid pools, gyms, etc. The gym will still have you sign one! But they have no enforcement teeth beyond people assuming they're valid. https://codes.findlaw.com/ny/general-obligations-law/gob-sec...
So I can pass on contract breaches caused by bugs in software I maintain that stem from hallucinations by the AI I used to write the software?? Absolutely no way.
"But the AI wrote the bug."
Who cares? It could be you, your relative, your boss, your underling, your counterpart in India, ... Your company provided some reasonable guarantee of service (whether explicitly enumerated in a contract or not) and you cannot just blindly pass the buck.
Sure, after you've settled your claim with the user, maybe TRY to go after the upstream provider, but good luck.
(Extreme example) -- If your company produces a pacemaker dependent on AWS/GCP/... and everyone dies as soon as cloudflare has a routing outage that cascades to your provider, oh boy YOU are fucked, not cloudflare or your hosting provider.
What a stupid article from someone that has no idea when liability attaches.
It is the burden of a defendant to establish their defense. A defendant can't just say "I didn't do it". They need to show they did not do it. In this (stupid) hypothetical, the defendant would need to show the AI acted on its own, without prompting from anyone, in particular, themselves.
It's not a legal defense at all.
Licensed professionals are required to review their work product. It doesn't matter if the tools they use mess up--the human is required to fix any mistakes made by their tools. In the example given by the blog, the financial analyst either is required to professionally review their work product or is junior enough that someone else is required to review it. If they don't, they can be held strictly liable for any financial losses.
However, this blog post isn't about AI Hallucinations. It's about the AI doing something else separate from the output.
And that's not a defense either. The law already assigns liability in situations like this: the user will be held liable (or more correctly: their employer, for whom the user is acting as an agent). If they want to go after the AI tooling (i.e., an indemnification action) vendor the courts will happily let them do so after any plaintiffs are made whole (or as part of an impleader action).
This is an advertisement for a 'tenuo warrant'. So, I read its documentation[0]. Put simply, it works like this:
1. A person orders an AI agent to do A.
2. The agent issues a tenuo warrant for doing A.
3. The agent can now only use the tool to perform A.
The article's point is that the warrant can then be used in case of an incident, because it contains information such as 'who ordered the task' and 'what authority was given'.
I get the idea. This isn't about whether a person is responsible or not (because of course they are). It's more about whether it was intentional.
However... wouldn't it be much easier to just save the prompt log? This article is based entirely on the "But the prompt history? Deleted." (from the article) situation.
You've got the model right. And saving prompt logs does help with reconstruction.
But warrants aren't just "more audit data." They're an authorization primitive enforced in the critical path: scope and constraints are checked mechanically before the action executes. The receipt is a byproduct.
Prompt logs tell you what the model claimed it was doing. A warrant is what the human actually authorized, bound to an agent key, verifiable without trusting the agent runtime.
This matters more in multi-agent systems. When Agent A delegates to Agent B, which calls a tool, you want to be able to link that action back to the human who started it. Warrants chain cryptographically. Each hop signs and attenuates. The authorization provenance is in the artifact itself.
But the AI agent still needs to determine which tool is necessary to mint the warrant. What happens if the agent makes a mistake when making a warrant?
A worker agent doesn't mint warrants. It receives them. Either it requests a capability and an issuer approves, or the issuer pushes a scoped warrant when assigning a task. Either way, the issuer signs and the agent can only act within those bounds.
At execution time, the "verifier" checks the warrant: valid signatures, attenuation (scope only narrows through delegation), TTL (authority is task-scoped), and that the action fits the constraints. Only then does the call proceed.
This is sometimes called the P/Q model: the non-deterministic layer proposes, the deterministic layer decides. The agent can ask for anything. It only gets what's explicitly granted.
If the agent asks for the wrong thing, it fails closed. If an overly broad scope is approved, the receipt makes that approval explicit and reviewable.
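A toy illustration of that propose/decide split (invented names, not Tenuo's actual API): the agent may propose anything, but the deterministic layer executes only what the warrant's scope, constraints, and TTL allow, and denies by default.

    import time

    # Hypothetical already-verified warrant (in a real system its signature
    # chain would be checked before we ever get here).
    warrant = {
        "scope": {"files:read", "reports:summarize"},
        "constraints": {"max_files": 100},
        "expires_at": time.time() + 3600,        # task-scoped TTL
    }

    def decide(proposal: dict) -> bool:
        """Deterministic layer: approve only if every check passes (fail closed)."""
        if time.time() > warrant["expires_at"]:
            return False                          # authority has expired
        if proposal["action"] not in warrant["scope"]:
            return False                          # outside the granted scope
        if proposal.get("file_count", 0) > warrant["constraints"]["max_files"]:
            return False                          # violates a constraint
        return True

    # The non-deterministic layer proposes; the deterministic layer decides.
    print(decide({"action": "email:send"}))                    # False: never granted
    print(decide({"action": "files:read", "file_count": 3}))   # True: within scope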
Anyone using AI tools should assume that at least 20% of what they produce will be wrong: missing details, invented things, basic mistakes.
Use AI if being 80% right quickly is fine. Otherwise, if you have to do the analysis anyway because accuracy is critical, there's little point to the AI - it's too unreliable.
If an employee does something during his employment, even if he wasn't told directly to do it, the company can be held vicariously liable. How is this any different?
"The company can be held vicariously liable" means that in this analogy, the company represents the human who used AI inappropriately, and the employee represents the AI model that did something it wasn't directly told to do.
Nobody tries to jail the automobile being driven when it hits a pedestrian when on cruise control. The driver is responsible for knowing the limits of the tool and adjusting accordingly.
IMO everyone is missing the point of this thing. It's not an auth system or security boundary, it doesn't provide any security guarantees whatsoever, it doesn't do anything. The entire point is to cover a company's derriere should their agentic security apparatus (or lack thereof) fail to prevent malicious prompt injection etc.
This way, they can avoid being legally blamed for stuff-ups and instead scapegoat some hapless employee :-) using cryptographic evidence the employee "authorized" whatever action was taken
How does the old proverb go?
> A computer must never make a management decision, because a computer cannot be held accountable.
I don't understand what has changed here.
AI is just branding. At the end of the day it's still just people using computer software to do stuff.
There is going to be a person who did a thing at the end of the day -- either whoever wrote the software or whoever used the tool.
The fact that software inexplicably got unreliable when we started stamping "AI" on the box shouldn't really change anything.
What problem is this guy trying to solve? Sorry, but in the end, someone's gonna have to be responsible, and it's not gonna be a computer program. Someone approved the program's use; it's no different to any other software. If you know the agent can make mistakes then you need to verify everything manually, simple as.
a machine can never be held accountable
but the person who turned it on can
simple as
Shouldn't all agentic actions with meaningful outputs of importance, like moving $48,000, simply be required to terminate in a human-designed or human-verified output with a human-in-the-loop attestation?
E.g., a list of transactions that isn't AI-generated, where the only actions that actually move money must operate on the data displayed in the human-designed page.
A human looks at this, says yes, that is acceptable, and becomes responsible for that action.
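One way to sketch such a gate (purely illustrative; the function and field names are made up): the list is rendered from system-of-record data, the approver attests to a hash of exactly what was displayed, and execution refuses to run if the data changed after sign-off.

    import hashlib, json

    def digest(transactions: list) -> str:
        """Hash of exactly what the human was shown (values and order included)."""
        return hashlib.sha256(json.dumps(transactions, sort_keys=True).encode()).hexdigest()

    def execute_batch(transactions, attestation, payment_client):
        # Refuse to move money unless a named human approved this exact data.
        if attestation["approved_digest"] != digest(transactions):
            raise RuntimeError("data changed after approval; re-attestation required")
        for tx in transactions:
            payment_client.transfer(tx["recipient"], tx["amount"])
        return {"approver": attestation["approver"], "digest": attestation["approved_digest"]}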
If AI were good enough to detect hallucinations wouldn't that be built into the AI already?
Exactly ... and that's why I'm skeptical of "AI verifies AI" as the primary safety mechanism. The verifier for moving money should be deterministic: constraints, allowlists, spend limits, invoice/PO matching, etc. The LLM can propose actions, but the execution should be gated by a human- or policy-issued scope that's mechanically enforced. That's the whole point: constrain the non-deterministic layer with a deterministic one. [0]
[0] https://tenuo.dev/constraints
> “The AI hallucinated. I never asked it to do that.”
> That’s the defense. And here’s the problem: it’s often hard to refute with confidence.
Why is it necessary to refute it at all? It shouldn't matter, because whoever is producing the work product is responsible for it, no matter whether genAI was involved or not.