By Aine Cryts

Getting machine-learning scientists and radiologists to talk to each other is one of the key ways to advance artificial intelligence in radiology, Bennett Landman, PhD, professor of electrical engineering at Vanderbilt University, tells AXIS Imaging News.

These conversations are important because machine-learning scientists and radiologists need to work together to identify areas where artificial intelligence can help patients. It’s just as important to engage the community in understanding the societal ramifications for privacy and fairness, adds Landman, who has a secondary appointment in radiology and radiological sciences at Vanderbilt University.

Landman will discuss the ethics of artificial intelligence at the Society for Imaging Informatics in Medicine’s 2019 Conference on Machine Intelligence in Medical Imaging, which takes place in Austin, Texas, on September 23 and 24.

AXIS Imaging News recently discussed the ethics of artificial intelligence with Landman, as well as efforts to engage patients, caregivers, and physicians in these conversations.

AXIS Imaging News: What are two of the most challenging ethical issues facing artificial intelligence in radiology?

Bennett Landman: First, we need to consider the ethical implications of using data-driven approaches on thousands—or, perhaps soon, millions—of people. The field of artificial intelligence is rapidly growing and changing. With this innovation comes opportunity, but also uncertainty.

Our well-established rules for protecting human subjects and our criteria for research consent need to be interpreted in the context of new research designs. In addition, artificial intelligence methods are data-intensive and require contributions from many subjects to learn properly. Therefore, we need to find approaches to engage and respect potential data donors in a manner that’s clear and efficient.

Second, we need to consider how artificial intelligence methods are used in practice and how their outputs are interpreted for the target audience. As we’re increasingly becoming aware, artificial intelligence algorithms learn to capture the trends and biases in their training data. For example, if the training data all come from one demographic, the results might not generalize to another healthcare organization.

We celebrate artificial intelligence when it can reveal patterns of association that were difficult to appreciate from the data, such as the association of lesion appearance with prognostic risk. Still, we must recognize that artificial intelligence can’t distinguish between biological factors and biases in data acquisition. Thus, we need to better understand the limitations of artificial intelligence and find clear and efficient ways of communicating those limitations so that practitioners can make informed use of it.
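
To illustrate Landman’s point, consider a minimal, hypothetical Python sketch; the data, feature names, and “site” split below are synthetic stand-ins invented for this article, not anything described in the interview. One feature mimics an acquisition artifact that happens to track the diagnosis at the training site, and the model learns that shortcut:

```python
# Hypothetical sketch: a model trained at one site latches onto an
# acquisition artifact and fails to generalize to a second site.
# All data here are synthetic stand-ins for real imaging features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_site(n, artifact_strength):
    """Simulate one site's cases; feature 9 mimics an acquisition
    artifact that tracks the label only where artifact_strength > 0."""
    X = rng.normal(size=(n, 10))
    biology = X[:, 0] - 0.5 * X[:, 1]  # the true biological signal
    y = (biology + rng.normal(scale=0.5, size=n) > 0).astype(int)
    X[:, 9] = artifact_strength * (2 * y - 1) + rng.normal(scale=0.5, size=n)
    return X, y

X_a, y_a = make_site(3000, artifact_strength=1.0)  # training site
X_b, y_b = make_site(3000, artifact_strength=0.0)  # different site/scanner

model = LogisticRegression(max_iter=1000).fit(X_a[:2000], y_a[:2000])

# The model cannot tell biology from acquisition bias on its own;
# only a cross-site evaluation exposes the gap.
auc_a = roc_auc_score(y_a[2000:], model.predict_proba(X_a[2000:])[:, 1])
auc_b = roc_auc_score(y_b, model.predict_proba(X_b)[:, 1])
print(f"AUC at training site (held out): {auc_a:.2f}")
print(f"AUC at unseen site:              {auc_b:.2f}")
```

The specific numbers matter less than the audit pattern: stratify evaluation by site or demographic before trusting a single aggregate score.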

AXIS: What are specific ways that these two issues can be addressed?

Landman: To ensure that we have a proper level of consent, we need to have an open and transparent discussion about how these data are being used, or could be used, in the scientific specialties as well as in the public sphere.

Grand challenges, such as this year’s joint effort by SIIM, the American College of Radiology, and Kaggle, a Google-owned online community of data scientists and machine-learning practitioners, offer the public an opportunity to experiment with the types of problems that are often encountered in research labs.

In addition, we need to engage all stakeholders in structured ways. For example, at Vanderbilt, we use Community Engagement Studio, a consultative session for researchers to capture input from patients, caregivers, healthcare providers, community members, and other non-researcher stakeholders at all stages from planning to implementation. These discussions are key to recognizing the diverse perspectives in how scientific data are used and how artificial intelligence could impact the practice of radiology in the future.

To ensure clarity in how artificial intelligence methods are interpreted in practice, we need community standards and best practices. These begin with consistent peer review and rigorous characterization of artificial intelligence methods. Several of the artificial intelligence, imaging, and radiology societies are starting to come together to issue key policy statements.

These efforts should continue toward an open assessment of the algorithms’ performance. The existing platforms are impressive, and they’re advancing science. Nevertheless, artificial intelligence is notoriously poor at characterizing its own uncertainty and biases. That means we need to find ways to communicate both in a way that practitioners who aren’t artificial intelligence experts can understand and use in practice.
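
One concrete way to make a model’s uncertainty communicable is a calibration check: does a reported 80% confidence actually correspond to being right about 80% of the time? The sketch below, using synthetic probabilities and labels rather than any system discussed in the interview, computes expected calibration error, a standard single-number summary of that gap:

```python
# Hypothetical sketch: expected calibration error (ECE), one simple,
# reportable measure of whether a model's probabilities mean what
# they say. Labels and probabilities here are synthetic placeholders.
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Bin cases by predicted probability, then average the gap
    between mean predicted probability and observed positive rate,
    weighted by how many cases land in each bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (probs > lo) & (probs <= hi)
        if in_bin.any():
            gap = abs(probs[in_bin].mean() - labels[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

rng = np.random.default_rng(0)
true_p = rng.uniform(size=5000)                      # true event rates
labels = (rng.uniform(size=5000) < true_p).astype(float)
overconfident = np.clip(1.4 * true_p - 0.2, 0.01, 0.99)

print(f"ECE, well-calibrated model: {expected_calibration_error(true_p, labels):.3f}")
print(f"ECE, overconfident model:   {expected_calibration_error(overconfident, labels):.3f}")
```

A calibration number like this, or the reliability diagram behind it, is the kind of summary a radiologist can act on without needing to inspect the model’s internals.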