By Shannon Werb

In my May article, we discussed how Natural Language Processing (NLP) can help unlock qualitative data from unstructured clinical documentation, such as physician notes or radiology reports. I continue to be excited about NLP and how it can improve analytics offerings through analysis of the physician's final report. I'm equally excited about the potential for leveraging "deep learning" technology, a form of artificial intelligence, within radiology to help speed the diagnosis of life-threatening conditions.

Deep learning is part of a broader family of machine-learning methods in which large, complex data sets are used to "teach" algorithms and build models that deliver highly accurate results in seconds. Combined with automated telemedicine workflows, such technology can further unlock the potential of imaging data to speed access to clinical specialists for patients suffering from life-threatening conditions.
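To make that idea concrete, here is a minimal, self-contained sketch in Python with PyTorch of how such a model might be "taught" from labeled CT slices. The tiny network, the random stand-in data, and every parameter choice are illustrative assumptions, not a description of any production system:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical training data: grayscale CT slices (1 channel, 128x128)
# with binary labels (1 = hemorrhage present, 0 = absent).
images = torch.randn(64, 1, 128, 128)            # stand-in for real scans
labels = torch.randint(0, 2, (64,)).float()
loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)

# A deliberately tiny convolutional network; a real model would be far
# deeper and trained on thousands of annotated studies.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 1),                  # one logit: "hemorrhage?"
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):                           # toy number of epochs
    for batch_images, batch_labels in loader:
        optimizer.zero_grad()
        logits = model(batch_images).squeeze(1)
        loss = loss_fn(logits, batch_labels)
        loss.backward()
        optimizer.step()
```

Once trained this way, inference is cheap: scoring a new study is a single forward pass, which is what makes results "in seconds" plausible.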

Radiologists are often called upon to identify such life-threatening conditions quickly and accurately, at a moment's notice and with limited patient history available at the time of diagnosis. Even with today's highly efficient, automated radiology workflows, many minutes can pass between the images being generated (often CT scans used in emergency medicine to identify life-threatening conditions) and a specialized radiologist reviewing them.

Now combine this with the use of imaging in the emergency room and outside normal hospital "business hours," when subspecialists may be absent, remote, or a limited resource backed up with a worklist of other emergent cases. How can a radiologist know that one of those cases is actually positive for a life-threatening condition, when minutes can make the difference in patient outcomes?

For example, intracranial hemorrhage (IH) has the highest mortality rate of all stroke subtypes (Counsell et al 1995; Qureshi et al 2005). Hematoma growth is a principal cause of early neurological deterioration: prospective and retrospective studies indicate that hematoma expansion is noted in up to 38 percent of patients scanned within three hours of IH onset, and hematoma volume is an important predictor of 30-day mortality (Brott et al 1997; Qureshi et al 2005).

Although IH accounts for only 15 percent of all strokes, it is one of the most disabling forms of stroke (Counsell et al 1995; Qureshi et al 2005). More than one-third of patients with IH will not survive, and only 20 percent will regain functional independence (Counsell et al 1995). This high rate of morbidity and mortality has prioritized research into new methods of identifying and managing IH, including deep learning technology to help "red flag" images that may indicate its presence.

So, what if technology could indicate to the radiologist that it suspects a specific study contains an acute finding, based on deep learning? Imagine applying deep learning technology to CT scans of the head in the first minutes between the patient being imaged and the radiologist putting eyeballs on the images, attempting to identify whether IH is potentially present.
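As a hedged sketch of what such a first-pass screen might look like, the Python function below reuses the hypothetical model from the earlier sketch; the threshold, the slice-level scoring, and the returned fields are assumptions for illustration, not the actual product:

```python
import torch

# Assumed operating point; raising it trades sensitivity for fewer
# false alarms. A real threshold would be set from validation data.
SUSPICION_THRESHOLD = 0.9

def screen_head_ct(model: torch.nn.Module, slices: torch.Tensor) -> dict:
    """Score each slice of one head CT study with a trained model and
    return a study-level flag for suspected intracranial hemorrhage."""
    model.eval()
    with torch.no_grad():
        logits = model(slices).squeeze(1)   # one logit per slice
        probs = torch.sigmoid(logits)       # per-slice probabilities
    max_prob, worst_slice = probs.max(dim=0)
    return {
        "suspected_ih": bool(max_prob >= SUSPICION_THRESHOLD),
        "confidence": float(max_prob),
        "slice_index": int(worst_slice),    # where to focus attention
    }
```

A flagged result would feed the workflow step described next; it directs the radiologist's attention rather than replacing the radiologist's read.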

Now imagine combining that output with telemedicine workflow capabilities, so that a radiologist is made aware of a suspected IH in one of the cases on his or her worklist. Imagine that the system advances such a case on the worklist, perhaps even opens the case, and highlights the area of the image where it suspects an IH to be present.
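A similarly hedged sketch of the worklist side, in plain Python; the Case record, the priority scheme, and the escalate step are illustrative assumptions, not vRad's patented workflow:

```python
import heapq
from dataclasses import dataclass, field
from typing import Optional

@dataclass(order=True)
class Case:
    priority: int                                    # lower reads sooner
    study_id: str = field(compare=False)
    flagged_region: Optional[tuple] = field(default=None, compare=False)

class Worklist:
    """A priority queue of cases awaiting a radiologist's read."""

    def __init__(self) -> None:
        self._heap: list[Case] = []

    def add(self, case: Case) -> None:
        heapq.heappush(self._heap, case)

    def escalate(self, study_id: str, region: tuple) -> None:
        """Promote a case the screening model flagged as suspected IH,
        recording the image region to highlight for the radiologist."""
        for case in self._heap:
            if case.study_id == study_id:
                case.priority = 0                    # jump to the front
                case.flagged_region = region
        heapq.heapify(self._heap)                    # restore heap order

    def next_case(self) -> Case:
        return heapq.heappop(self._heap)
```

In use, a suspected IH from the screening step would trigger something like escalate("CT-123", region=(slice_index, x, y)), so the next radiologist to pull a case receives that study first, with the suspect region already attached.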

While this scenario may seem futuristic, vRad has filed a patent and is working to implement such a process for our clients and patients by the end of 2015. This kind of innovation can allow radiologists to spend more time being doctors and work more closely—and efficiently—with referring physicians to help improve patient outcomes.

With IH, time is the enemy; if radiologists can provide accurate diagnostic information minutes faster, imagine how this would benefit referring physicians, as well as patients and their families.


###

Shannon Werb is chief information officer for Virtual Radiologic (vRad).