AI Update: Artificial Intelligence and Products Liability

Global Aerospace Editorial Team, January 18, 2022

Author: Suzanne McNulty of Fitzpatrick & Hunt, Pagano, Aubert LLP

With a simple voice command, Siri provides you with traffic and weather updates; your “self-driving” car may take you to a doctor’s appointment; a medical algorithm may assist in interpreting your chest X-ray; and a financial algorithm may analyze your finances and make recommendations affecting them.


Artificial intelligence (AI) systems are capable of perceiving, learning and problem-solving with little to no human intervention.1 Unlike conventional computer algorithms, AI systems can synthesize, store and analyze data to inform their decisions. Though AI can offer profound benefits to society, it also presents new risks and legal challenges in the realm of products liability. This novel issue poses the question: How is liability assessed when accidents occur not from human error or inherent defects, but from AI decisions?

AI and Damages

AI applications not only perform given tasks, but they also learn how to perform those tasks over time. This ability to learn means that AI behavior can be unpredictable despite the absence of flaws in its design and implementation.2
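
A minimal sketch can make this concrete. The hypothetical Python example below (every name and dataset in it is illustrative, not drawn from any real case or product) shows how two identical, defect-free copies of an online-learning model can come to make different decisions based solely on the data each encounters after deployment:

    # Two flawless copies of the same product diverge because they
    # learn from different post-sale data, not because of any defect.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)

    def deploy_and_use(field_data, field_labels):
        """Ship an identical model, then let it learn from field data."""
        model = SGDClassifier(loss="log_loss", random_state=0)
        model.partial_fit(field_data, field_labels, classes=[0, 1])
        return model

    # Two customers expose identical copies to different environments.
    data_a = rng.normal(0.0, 1.0, size=(100, 2))
    labels_a = (data_a[:, 0] > 0).astype(int)
    data_b = rng.normal(1.0, 1.0, size=(100, 2))
    labels_b = (data_b[:, 1] > 1).astype(int)

    probe = np.array([[0.5, 0.5]])
    print("copy A decides:", deploy_and_use(data_a, labels_a).predict(probe))
    print("copy B decides:", deploy_and_use(data_b, labels_b).predict(probe))

Neither copy contains a flaw, yet their behavior can differ; it is this gap between the code that was shipped and the behavior that was learned that strains the traditional defect inquiry.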

Who, then, is liable for AI’s actions when its decisions surprise even its creator and its behavior was not necessarily foreseeable as part of its original programming? As of today, the law does not provide a clear answer, for it is difficult to draw a distinction between damages resulting from the AI’s “free will” and those resulting from a genuine product defect.3 Despite this lack of guidance, one can apply the existing products liability framework to AI systems to anticipate how this area of law may develop.

Tort Liability for AI Under Existing Legal Theories

Product liability claims are primarily based on negligence and strict products liability, the application of which to AI requires an adaptive approach.

Fundamentally, product liability claims depend on whether AI qualifies as a “product.” In Rodgers v. Christie, 795 F. App’x 878 (3d Cir. 2020), the plaintiff’s son was murdered by a man who, days before, had been granted pretrial release by a New Jersey state court. Rodgers brought product liability claims against the foundation responsible for the Public Safety Assessment (PSA), a multifactor risk estimation model that formed part of the state’s pretrial release system. The court held that the PSA did not qualify as a product, which it defined as “tangible personal property distributed commercially for use or consumption.”4 It noted that the AI in question was neither distributed commercially nor tangible personal property, reasoning that “’information, guidance, ideas, and recommendations’ are not ‘product[s].’”5 The court therefore dismissed the product liability claims.

Plaintiffs bringing negligence claims bear the additional burden of addressing whether AI itself can be held to a “reasonable person” standard.6 Applying an analogous “reasonable computer” standard to AI may be difficult given the limited insight into AI’s decision-making processes. Moreover, because AI is not considered a “legal person,” it arguably cannot be held independently liable for negligence.7 One way plaintiffs may navigate this issue is to argue for a vicarious liability scheme that holds an AI programmer liable for the actions of the AI.

Foreseeability of risk is another key element in product liability claims. In most U.S. jurisdictions, a designer, manufacturer or seller is considered negligent if they fail to use reasonable care to prevent a foreseeable risk of harm. This test may prove difficult to apply to AI, however, because a plaintiff would have to show that the defendant knew or should have known of a foreseeable risk of harm. That showing will depend on the relevant industry standards of care and on whether the AI programming was appropriate in light of those standards.8 Plaintiffs may struggle to establish this element because AI’s adaptive nature makes it unpredictable.

This unpredictability also presents a challenge for plaintiffs’ strict products liability claims. For instance, manufacturing defect claims require the plaintiff to show that the product was defective when it left the manufacturer’s possession. But given AI’s adaptive qualities and evolving independent decision-making capabilities, a plaintiff may have trouble proving that damage caused by AI stemmed from a defect present when the product left the producer’s hands.

Strict liability for failure-to-warn claims requires plaintiffs to prove that the producer failed to warn consumers of known or knowable risks. Because AI’s evolving, independent nature may give rise to risks that are neither known nor knowable, the manufacturer or seller may not have known to provide certain warnings at all, making it challenging for plaintiffs to prevail on such claims.

Proving strict products liability based on a design defect also presents challenges. Under the so-called “risk/benefit test” utilized in some jurisdictions, a plaintiff must show that the benefits provided by the disputed design were outweighed by the inherent risk of danger it posed.

Consideration of AI-related products liability claims requires carefully balancing the need to hold tortious actors liable against the need to encourage technological innovation. Some argue against the imposition of strict liability, given the chilling effect it would have on the advancement of technology.

Guidance for Producers

Though AI-related tort law is an emerging field still in its infancy, producers can take steps to mitigate their risks. First, they can allocate liability throughout their supply chains and to customers through proper indemnities, limitations of liability and warranties in their contracts. Second, they should consider documenting AI’s decision-making processes to show that their algorithms meet industry safety standards, as sketched below. Finally, producers should invest in AI testing campaigns, track real-world performance data and conduct risk analyses to promote quality and safety.
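
On the documentation point, here is one minimal sketch (assuming a JSON-lines audit file; the model and field names are hypothetical) of what recording an AI system’s decisions might look like:

    # Hypothetical decision audit trail: log inputs, model version and
    # output for each AI decision so it can be reconstructed later.
    import hashlib
    import json
    import time

    AUDIT_LOG = "decisions.jsonl"  # illustrative file name

    def log_decision(model_version, inputs, output, path=AUDIT_LOG):
        record = {
            "timestamp": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
        }
        # A content hash makes later tampering with the record detectable.
        record["digest"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

    # Example: record one decision from a hypothetical risk model.
    log_decision("risk-model-2.1", {"age": 54, "score": 0.72}, "flag_for_review")

Such a trail does not establish compliance by itself, but it gives a producer contemporaneous evidence of how its algorithm behaved and why.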

Though AI offers promising solutions for safety, efficiency and productivity, its proliferation will undoubtedly give rise to damages that may not be redressable through traditional tort law. As the law adapts to address AI-related products liability claims, a careful balance must be maintained between properly compensating victims and encouraging innovation.


Resources
1 https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp
2 https://link.springer.com/article/10.1007/s43545-020-00043-z#author-information
3 https://is.muni.cz/el/law/podzim2017/MV735K/um/ai/Cerka_Grigiene_Sirbikyte_Liability_for_Damages_caused_by_AI.pdf, p. 386
4 62 No. 9 DRI For Def. 48; Rodgers v. Christie, 795 F. App’x 878 (3d Cir. 2020)
5 Id.
6 62 No. 9 DRI For Def. 48, 50-52
7 https://is.muni.cz/el/law/podzim2017/MV735K/um/ai/Cerka_Grigiene_Sirbikyte_Liability_for_Damages_caused_by_AI.pdf, p. 383
8 https://cms.law/en/gbr/publication/artificial-intelligence-who-is-liable-when-ai-fails-to-perform