
Commentary

When medical robots fail: Malpractice principles for an era of automation

A robot helping medical teams treat patients suffering from the coronavirus disease (COVID-19) is pictured in a corridor of the Circolo hospital in Varese, Italy, April 1, 2020. REUTERS/Flavio Lo Scalzo

In recent years, researchers have developed medical robots and chatbots to monitor vulnerable elders and assist with some basic tasks. Artificial intelligence-driven therapy apps aid some mentally ill individuals; drug ordering systems help doctors avoid dangerous interactions between different prescriptions; and assistive devices make surgery more precise and safer—at least when the technology works as intended. And these are just a few examples of technological change in medicine.

The gradual embrace of AI in medicine also raises a critical liability question for the medical profession: Who should be responsible when these devices fail? Getting this liability question right will be critically important, not only to protect patient rights but also to provide proper incentives for the political economy of innovation and the medical labor market.

Medical innovation and preventable errors

Consider the case of robotically assisted surgical devices (RASDs), which surgeons use to control small cutting devices instead of wielding ordinary scalpels. If a surgeon’s hand slips with a scalpel and a vital tendon is cut, our intuitive sense is that the surgeon bears primary responsibility for the error. But what if the surgeon is using an RASD that is marketed as having a special “tendon avoidance subroutine,” similar to the alarms that automobiles now sound when their sensors indicate an imminent collision? If the tendon sensors fail and the warning does not sound before an errant cut is made, may the harmed patient sue the vendor of the RASD? Or only the physician who relied on it?

Similar problems arise in the context of some therapy apps. For example, a counselor may tell a patient with a substance use disorder to use an app to track cravings, states of mind, and other information helpful in treating addiction. The app may recommend certain therapeutic actions in case the counselor cannot be reached. Setting aside preemption issues raised by Food and Drug Administration regulation of these apps, important questions in tort law arise. If these therapeutic actions are contraindicated and result in harm to the patient or others, is the app’s vendor to blame? Or the doctor who prescribed the app?

In neither the surgical nor the mental health scenario is the answer necessarily binary. There may be shared liability, a sliding scale based on an apportionment of responsibility. But before courts can trigger such an apportionment, they must have a clear theory upon which to base the responsibility of vendors of technology.

As courts develop such a theory, one tool on offer is the distinction between substitutive and complementary automation. As law and political economy methods demonstrate, law cannot be neutral with respect to markets for new technology. It constructs these markets, making certain futures more or less likely. As explored extensively in my book New Laws of Robotics, distinguishing between technology that substitutes for human expertise and technology that complements professionals is fundamental to both labor policy and the political economy of automation. To promote more accountability for AI vendors, while supporting the domain expertise of physicians, tort doctrine should recognize a critical distinction: When AI and robotics substitute for physicians, replacing them entirely, strict liability is more appropriate than standard negligence doctrine.

Standards for liability

Under a strict liability standard, in the case of an adverse event, the manufacturer, distributor, and retailer of the product may be liable, even if they were not negligent. In other words, the maker of even a well-designed and well-implemented system may still bear responsibility for error. Think, for instance, of a manufacturing process that, while well designed, nevertheless produced, through some inadvertent mistake or happenstance, a defective product that harmed someone. In such a case, strict product liability can result in a judgment against the manufacturer, even without a finding of negligence. This may seem like an unduly harsh standard. However, the doctrine incentivizes ongoing improvements in technology, which could remain error-prone and based on outdated or unrepresentative data sets if tort law sets unduly high standards for recovery.

A strict liability standard would also deter the premature automation of fields where human expertise is still sorely needed. In the medical field, there has long been a standard of competent professional supervision and monitoring of the deployment of advanced technology. When substitutive automation short-circuits that review and a preventable adverse event occurs, compensation is due. The amount of compensation may be limited by state legislatures to avoid over-deterring innovation. But compensation is still due because a “person in the loop” might have avoided the harm.

Even when robotics and AI only complement a professional, there still need to be opportunities for plaintiffs and courts to discover whether the technology’s developers and vendors acted reasonably. Such inquiry is endangered by expansive interpretations of the learned intermediary doctrine, which holds that the manufacturer of a new technology “discharges their duty of care to consumers by providing adequate warnings” about its potential for harm to the professionals who use it. As the example of the tendon-cutting device showed, all responsibility for an error should not rest on a doctor when complementary robotics fails to accomplish what it promised to do. To hold otherwise would again be an open invitation to technologists to rest on their laurels.

Tort traditionalists might argue that surgeons should remain primarily responsible, since they can pressure AI vendors to improve their products if these products’ faults provoke lawsuits against those who use them. However, thanks to both lax U.S. antitrust policy and strict intellectual property laws, AI and robotics has become a concentrated field. As law and political economy teaches, it is unwise to seek recourse in “market dynamics” when concentration or other infirmities render a market less than optimally responsive to consumer demand or wise industrial policy. While older law and economics approaches focused almost entirely on microeconomic dynamics, new economic analysis of law includes macroeconomic dynamics as well, such as labor market effects, professional pipelines, and distributed expertise. All of these counsel in favor of a stricter standard for machines that promise to substitute for medical professionals rather than complement them.

To understand how legal liability might function in a case of complementary automation, consider the use of computerized physician order entry (CPOE) for prescriptions. This software manages prescription orders and provides alerts to doctors about how different drugs prescribed to a patient might interact with one another. These “drug-drug interaction” (DDI) alerts can warn a physician about the possible side effects of taking two pills at once. This is a case of complementary automation, and if the DDI alert were incorrect, a harmed patient ideally would be able to sue both the physician and the vendor of the CPOE system. To prove negligence, a patient would need to show that the maker of the CPOE system failed to follow the proper standard of care in updating data or improving algorithms in order to avoid the problem. Though the physician might still bear all or most of the responsibility, keeping the relevant technology “in the mix” of liability determinations will help ensure ongoing improvements in the software.

By contrast, a different CPOE system might simply “decide everything” with respect to the prescription of the two pills, preventing the doctor from prescribing them together. In such a scenario, the physician is no longer responsible—she or he cannot override the system. Given this extraordinary deviation from ordinary professional standards in medicine—which require a skilled person to mediate between technology and the patient—it is appropriate to impose strict liability on the vendor and developer of the substitutive AI if, for instance, the two pills were truly necessary, and the failure to prescribe them harmed the patient. Corporate liability for the hospital that purchased the system is also an option.
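To make the contrast concrete, here is a minimal, purely hypothetical sketch in Python of the two design choices described above: a complementary DDI check that surfaces an alert the physician may override, and a substitutive one that blocks the order outright. The function names, the toy interaction table, and the drug pair are invented for illustration and do not reflect any actual CPOE product.

```python
# Hypothetical sketch only; names and data are invented for illustration.
from typing import Optional

# A toy interaction table. Real CPOE systems rely on curated clinical databases
# that must be kept up to date (the "standard of care" issue discussed above).
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
}


def check_interaction(drug_a: str, drug_b: str) -> Optional[str]:
    """Return a warning string if the drug pair is a known interaction, else None."""
    return KNOWN_INTERACTIONS.get(frozenset({drug_a.lower(), drug_b.lower()}))


def complementary_order(drug_a: str, drug_b: str, physician_overrides: bool) -> bool:
    """Complementary mode: raise an alert, but leave the final decision to the physician."""
    warning = check_interaction(drug_a, drug_b)
    if warning and not physician_overrides:
        print(f"ALERT: {drug_a} + {drug_b}: {warning}. Order held for physician review.")
        return False
    return True  # Order placed; the physician remains the decision-maker.


def substitutive_order(drug_a: str, drug_b: str) -> bool:
    """Substitutive mode: the system refuses the combination outright, with no override."""
    warning = check_interaction(drug_a, drug_b)
    if warning:
        print(f"BLOCKED: {drug_a} + {drug_b}: {warning}. No override is available.")
        return False
    return True


if __name__ == "__main__":
    # A physician judges the combination clinically necessary and overrides the alert.
    complementary_order("warfarin", "aspirin", physician_overrides=True)  # order placed
    # The same clinically necessary combination is refused, with no human recourse.
    substitutive_order("warfarin", "aspirin")  # order blocked
```

In the first mode a person remains in the loop, which is why a negligence analysis of both physician and vendor makes sense; in the second, the system has displaced the physician’s judgment entirely, the situation in which the argument above favors strict liability.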

Incentivizing improvement in AI and medical practice

The less demanding the liability standards imposed on AI and robotics, the more the health care system will tend toward the diminution of the distributed expertise so critical to medical advances. So, policymakers would do well to ensure that AI’s developers and vendors take responsibility for the functions their technology is usurping.

Even if technologists develop fully autonomous robot doctors and surgeons, the ultimate “backup system” would be a skilled human surgeon with experience, flexibility, and creativity. Our aim should not be to replace such individuals, but to enhance their efficiency and effectiveness, both to improve safety now and to promote future innovation by ensuring a distributed workforce of skilled professionals that monitors and suggests improvements for medical technology.

The sequence and shape of automation in health care cannot simply be dictated from on high by engineers. Rather, domain experts (including physicians, nurses, patients’ groups, and other stakeholders) need to be consulted, and they need to buy into a larger vision of progress in their field. Perhaps more of medicine should indeed be automated, but we should ensure that the medical workforce is a lasting partner in that process. In most cases, they should be helped, not replaced, by machines—both for present aims (such as overriding errant machines), and for the future (to develop new and better ones).

As courts develop such evolving standards of care, they will also face predictable efforts by owners of AI to deflect liability. Policymakers are struggling to keep pace with the speed of technological development. Legislators are fearful of inhibiting growth and innovation in the space. However, there is increasing public demand for policy interventions and protections regarding critical technology. These demands do not necessarily impede economic or technological advance. Some innovation may never get traction if customers cannot be assured that someone will be held accountable if an AI or robot catastrophically fails. Developing appropriate standards of responsibility along the lines prescribed above should reassure patients while advancing the quality of medical AI and robotics.

Frank Pasquale is author of New Laws of Robotics: Defending Human Expertise in the Age of AI. He is a professor of law at Brooklyn Law School.
