Artificial intelligence (AI) is starting to gain a foothold in modern medicine for tasks such as identifying anomalies in CT scans and other diagnostic images. But who is liable for mistakes made by AI? The hospital? The physicians? The manufacturer of the system? Saurabh Jha, MD, an associate professor of radiology at the University of Pennsylvania and a scholar of artificial intelligence in radiology, explores some of these issues in an editorial for STAT.

Here’s a hypothetical situation to illustrate some of the legal uncertainties: Innovative Care Hospital, an early adopter of technology, decides to use AI instead of radiologists to interpret chest x-ray images as a way to reduce labor costs and increase efficiency. Its AI generally performs well, but for unknown reasons it misses an obvious pneumonia, and the patient dies of septic shock.

Who’ll get sued? The answer is, “It depends.”

If Innovative Care developed the algorithm in-house, it would be liable through what’s known as enterprise liability. Though the medical center isn’t legally obliged to have radiologists oversee AI’s interpretation of x-rays, by removing radiologists from the process it assumes the risk of letting AI fly solo.

In this scenario, a suit would likely be settled. The hospital will have factored the cost of lawsuits into its business model. If the case goes to trial, the fact that the hospital uses artificial intelligence to increase efficiency is unlikely to help it, even if the savings are passed on to its patients. Efficiency can be framed as “cost cutting,” which juries don’t find as enticing as MBA students do.

If Innovative Care had bought the algorithm from an AI vendor, the distribution of liability would be more complex.

To read more, visit STAT.