Physical AI Is Deploying Into a Liability Vacuum
Physical AI is being deployed into real environments under contracts that were never designed for it. The indemnification language is borrowed from SaaS agreements. The liability caps are set based on deal size, not operational risk. And somewhere in those contracts, the question of who owns the outcome when the system makes a wrong decision in the physical world got quietly skipped.
This is not a hypothetical problem. It is a right-now problem that most Physical AI companies are ignoring until they cannot.
The Nexus That Changes Everything
When a system from OpenAI or Anthropic causes harm, you are dealing with a software failure in a digital environment. The legal frameworks for that are imperfect, but they have a conceptual home: technology errors and omissions, product liability for software, negligence. Courts are beginning to work through those cases.
Physical AI is different in a way that matters. When a Physical AI system causes harm, you have a software decision executing through a physical body in a real environment. You cannot cleanly separate the two. Was it a model failure? A hardware defect? An environmental condition the system was never trained to handle? That question has no clean answer under current legal doctrine, and it is why standard product liability and standard tech E&O both fall short. You need both, and even together they leave gaps.
The Incidents Are Already Here
In December 2023, leaked injury reports revealed that a malfunctioning robot at Tesla's Austin factory had attacked a software engineer, digging its claws into his back and arm. The incident itself had occurred two years earlier and never surfaced publicly until the reports leaked. In November 2023, a South Korean worker was fatally crushed by an industrial robot that classified him as a box of vegetables. The hardware worked exactly as designed. The AI made a wrong call with a fatal outcome.
In both cases, the question of who was responsible was not settled by the contract. It was settled by whoever had the deepest pockets and the most exposure when lawyers arrived.
Neither of those incidents involved a three-party deployment. When the developer, the hardware manufacturer, and the operating customer are three separate companies, which is the typical Physical AI deal structure, the liability fog gets considerably thicker. All three parties have partial ownership of the conditions that led to an incident. None of them has a contract that says so clearly.
The Insurance Market Knows It Has a Problem
The insurance industry is not indifferent to Physical AI. The premium volume this space will eventually generate is enormous, and carriers are watching closely. The problem is underwriting. Actuaries need loss history to price risk accurately. Physical AI at scale is too new to have meaningful loss data. What you get instead is a Swiss cheese market: some carriers will write the policy, some will not, some will price it reasonably, and some will price it punitively with no logic you can follow. Two nearly identical deployments can produce wildly different insurance outcomes depending entirely on which underwriter you land on and how they have chosen to classify your system. It is not a closed market. It is a market that does not know what you are yet.
A few carriers are trying to get ahead of this. Armilla AI, backed by Lloyd's of London syndicates, launched coverage in 2025 that explicitly addresses AI-specific failures including model degradation and algorithmic malfunctions. AXIS Capital and Founder Shield have both introduced robotics-specific products because they recognized that traditional policies no longer map onto autonomous, cyber-physical systems. These are early movers in a market that is still finding its footing, and their willingness to write the policy does not mean the coverage is complete. It means the conversation has started.
Meanwhile, the Insurance Services Office (ISO), the advisory body that sets standard policy forms for the industry, introduced generative AI exclusions for commercial general liability policies. Some of the largest carriers are quietly seeking permission to exclude AI liabilities from standard corporate coverage entirely. The market is simultaneously opening and narrowing, and most Physical AI companies are sitting somewhere in the middle of that without realizing it.
Clarity Is a Sales Asset
Enterprise buyers, particularly in regulated industries, are doing their own liability analysis before signing deployment contracts. A Physical AI company that walks into that conversation with a clean, plain-English breakdown of how risk is allocated across parties does not just protect itself legally. It shortens the sales cycle. It signals operational maturity. And when the first serious public incident resets the industry's contracting norms, the companies that had already done the work will look like the adults in the room.
The companies that have not done it will be rewriting contracts at the worst possible moment.