The Liability Gap: When AI Agents Act, Who's Responsible?
There’s a liability gap forming in AI agent security, and the industry isn’t talking about it clearly enough. The gap isn’t primarily technical. The attack techniques are understood — prompt injection, tool misuse, over-permissioned agents acting on adversarial instructions. What isn’t understood is the legal and organizational question underneath: when an AI agent acts autonomously on injected instructions and causes real damage, who owns that outcome?

The honest answer right now is: nobody knows. There’s no legal precedent. The terms of service are written to disclaim everything. And the regulatory frameworks that might eventually clarify this haven’t arrived yet. ...