A new approach to evaluating tumor images reduced the kinds of errors that can distort results of cancer clinical trials, according to a study¹ from the University of Michigan (UM). The approach, which was first implemented at UM in 2016, could serve as a model for improving imaging assessments at other cancer centers, the researchers say.

In clinical cancer trials, the effectiveness of new drugs or other treatments is often evaluated based on measurements of changes seen in patients’ tumors using a variety of imaging techniques—including MRI, CT, and PET scans. However, individual variability and unintentional bias on the part of the physicians interpreting the scans—usually medical oncologists—can skew the results. This has the potential to make new therapies seem either better or worse than they actually are. It can also affect patient care, leading to patients being taken off trials, or kept on them, when they shouldn’t be.

To limit these distortions in image interpretation, researchers at the Rogel Cancer Center at UM developed a tumor response assessment core (TRAC), a web-based application through which trained image analysts, teamed with radiologists who have specialized experience (for example, with lung cancers), review the images.

“At any given moment, we have hundreds of people enrolled in clinical trials at our cancer center,” says Vaibhav Sahai, MBBS, a medical oncologist at Michigan who led the development of the new approach. “Before TRAC, the majority of the imaging analyses were done by medical oncologists, and this is very common across the country.”

But, Sahai notes, there are two main drawbacks to this approach. The first is that doctors who specialize in diagnosing and treating cancer patients usually don’t receive the same degree of specialized training in quantitative imaging analysis as their colleagues in radiology and nuclear medicine. And it’s difficult, on top of a busy caseload of patients, to stay deeply versed in the many evaluation methods employed across different trials.

“Different trials use different measurements of response, depending on the cancer type and the drug type,” Sahai says. “We have an investigator-initiated trial open with a drug that may cause tumor swelling as a result of damage to the cancer. This treatment response might be interpreted as progression, and one could end up thinking the drug has no value unless you check for tumor density or functional activity. Tumor swelling or ‘pseudo-progression’ is also possible in patients receiving immunotherapy medications, and correct use and application of response assessment criteria is crucial for accurate assessment of our patients receiving these novel drugs on clinical trials.”

The second drawback is that medical oncologists’ familiarity with their patients may lead them, unintentionally, to be biased in their evaluations. “We care about our patients. We want them to do well. We want to keep them on trials. We want to believe our care is helping them,” Sahai says. “So, it can sometimes be hard to do an unbiased assessment—which is what the clinical trial and the patient both deserve.”

To improve clinical trial response assessments, researchers at the Rogel Cancer Center launched TRAC in 2016. Through a web-based application, medical images are first assessed by image analysts, a new role staffed by highly trained professionals who perform preassessments of the scans along with other duties. The images are then reviewed by a radiologist with specific expertise in that particular type of cancer. The process also includes a method for enlisting outside input to help resolve disagreements or ambiguities.

For the current study, which evaluated the effectiveness of TRAC, researchers used records from 49 lung cancer patients treated at UM between 2005 and 2015, before the new system was in place. The patients’ imaging scans were sent through the TRAC process, where they were reviewed by an image analyst and two board-certified radiologists; another radiologist also performed a separate, independent review. These results were then compared with the medical oncologists’ original assessments.

The study showed that using TRAC did indeed lead to more consistent measurements. “We found substantial agreement between the TRAC analysis and the radiologists’ evaluations,” says Sahai. “We found only moderate agreement between the assessments by medical oncologists and TRAC. These differences have the potential to affect patient treatment and trial outcomes.”

As an added benefit, the new approach greatly improved the efficiency of imaging analysis for cancer clinical trials at UM. The turnaround time for tumor measurements decreased from 33 days to 3 days, the study team reported.

At the time the paper documenting the study was published, TRAC had been used in more than 175 clinical trials across many types of cancer, assisting with assessments of more than 1,500 scans.

“The mission of TRAC was to create independent, unbiased, and verifiable measurements of our patients’ response during clinical trials, and the results of our study show that this approach lives up to that goal,” Sahai says. “We published a detailed explanation of the workflow and the software we created in hopes of being a model for other cancer centers, and thus to help improve the accuracy of clinical trial results for patients everywhere.”

Reference:

  1. Hersberger KE, Mendiratta-Lala M, Fischer R, et al. Quantitative imaging assessment for clinical trials in oncology. J Natl Compr Canc Netw. 2019;17(12):1505-1511. doi:10.6004/jnccn.2019.7331.

Featured image:

Images from CT and MRI scans. Photo © Oliver Sved, courtesy Dreamstime.com (ID 18853388).