TLDR: When an AI agent acts on injected instructions and causes real damage, nobody knows who's liable — not legally, not contractually, not yet. The platforms disclaim everything. The enterprise deployers aren't sure. The end users never understood the risk they accepted. The landmark case that answers this question hasn't been filed yet. The window to document the framework before it lands is open right now.

AI agent tools work so reliably most of the time that users rationally stop supervising them. This isn’t negligence — it’s the correct behavioral adaptation to a system with a 95%+ success rate. You stop watching because watching has never mattered.

The problem is that when agents fail, they don’t fail randomly. They fail adversarially — executing injected instructions from a document, email, or webpage they were asked to process. A system that succeeds 999 times and on the 1000th run exfiltrates your credentials has a near-zero error rate and a catastrophic blast radius. The user who granted folder access didn’t create a vulnerability in any way they understood. The tool had always been helpful.

When that breach happens, three parties share the blast radius and none of them own the liability. The platform says the terms of service disclaim autonomous actions — but those terms were written for a product that gave you text, not one that takes actions. The enterprise deployer provisioned access and accepted risk disclosures they didn’t fully model. The end user clicked “grant access” and understood it as a convenience setting, not a security boundary.

None of those parties are clearly wrong. None are clearly liable. There is no precedent.

The gap between where we are and where governance needs to be is a speed problem. In every prior domain where autonomous systems acted on behalf of humans — trading algorithms, medical devices, autonomous vehicles — liability frameworks developed before or alongside mass deployment. For AI agents, that sequence is inverted. The tools are embedded in enterprise workflows and consumer desktops before anyone has mapped who’s responsible when they go wrong.

The lesson: The first major litigation involving an AI agent acting on injected instructions will set precedent under whatever legal frameworks happen to apply — frameworks that weren’t designed for this. The analysis that exists in writing before that case lands becomes part of the record. The tolerance for vague capability disclosure and untested terms of service ends the moment a court has to rule on it.