By Aine Cryts
Image display is one of the main technological considerations when it comes to providing remote-reading options to radiologists, according to Don K. Dennison, FSIIM, a medical-imaging consultant based in Waterloo, Ontario. That is, of course, provided that the IT team can ensure the fast, secure display of data in the reading worklist, report creator, and electronic health record (EHR) patient history applications, he adds.
While most of the images interpreted by radiologists are compressed to minimize the time it takes to transmit them, radiology practices and departments have options when designing high-speed image display, according to Dennison. He outlines four common designs for image-display architecture:
1. Series and image optimization
This is a common technique that uses application rules—for example, display protocols—and exam metadata to determine the appropriate series and images required for initial display. This technique then prioritizes the download of images to the image-viewing client.
In addition, if the user accesses an image somewhere in a series—for example, the middle view in a stack of axial CT images—application logic will trigger the immediate download of images above and below the target. The reasonable assumption is that the user may scroll up or down from that point, he explains.
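As a rough sketch, the download-ordering logic described above might look like the following. This is illustrative Python, not any vendor's actual implementation; the function name and the alternating above/below policy are assumptions.

```python
def prefetch_order(target: int, series_length: int):
    """Yield image indices in the order they should be downloaded:
    the slice the user opened first, then slices alternating above
    and below it, since the radiologist may scroll either direction."""
    yield target
    offset = 1
    while target - offset >= 0 or target + offset < series_length:
        if target + offset < series_length:
            yield target + offset
        if target - offset >= 0:
            yield target - offset
        offset += 1

# Example: opening the middle slice (index 2) of a 6-image axial CT stack.
# list(prefetch_order(2, 6)) -> [2, 3, 1, 4, 0, 5]
```

The client can consume this ordering with its download queue, so images nearest the user's current position arrive first.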
2. Progressive image display
This option generally relies on wavelet image compression, a method of encoding radiology images. The image-viewer client uses a special communication protocol to request the appropriate image data from the server to render the image on the screen. Since the image painted on the screen is often smaller than the stored image, less data needs to be transferred; this translates to faster image-display times.
IT specialists on the team should realize that subsequent requests from the image-viewer client to the server retrieve additional image data until the complete image resolution is available. Application logic can be tuned to optimize for speed or for bandwidth minimization, depending on the system design and usage scenario, he explains. In addition, encoding and decoding image data in wavelet format is more computationally intensive than other formats, so the hardware plan must take this into consideration, advises Dennison.
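A small illustration of why progressive display saves bandwidth: each wavelet decomposition level halves the image dimensions, so the server can initially skip the detail levels the viewport can't show. This sketch is a simplification under that assumption; real protocols negotiate this per request.

```python
def levels_to_skip(stored_px: int, viewport_px: int) -> int:
    """How many wavelet decomposition levels the server can withhold
    on the first request: each level halves the image dimensions, so
    detail finer than the on-screen size can stream in later."""
    skip = 0
    size = stored_px
    while size // 2 >= viewport_px:
        size //= 2
        skip += 1
    return skip

# A 2048-pixel-wide image shown in a 512-pixel viewport only needs
# a 512-pixel rendition at first (two levels withheld).
# levels_to_skip(2048, 512) -> 2
```

Later requests then pull the withheld detail levels until the full resolution is on the client, matching the behavior described above.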
3. Precaching image data on the image-viewer client
This technique uses information, such as an exam's addition to a reading worklist, to trigger the transfer of image data from the servers to one or more workstations. In this scenario, the image data is cached on the workstation's hard drive so that, when the radiologist opens the case, it loads quickly. This design works well only when specific conditions are met, such as:
- The workstation where the radiologist reads the images is known; this enables the images to be sent to the appropriate workstation. Otherwise, the image data could be moved to many workstations—and that’s a waste of IT resources and an additional burden on servers, Dennison says.
- The worklist and the image-display application are integrated. In reading environments where the worklist is a separate application from the image-display application (as when the worklist is provided by the EHR or radiology information system and the images are displayed by the picture archiving and communication system), the worklist typically can't relay the upcoming exams to the image-display application; this means there's no trigger to initiate the image data precache, adds Dennison.
In addition, information security staff need to ensure that any imaging data—all of which is protected health information—is encrypted on the workstation and purged after the examination is interpreted. This step ensures a secure environment, which is important if the workstation hardware is compromised or stolen, according to Dennison.
4. Server-side rendering
Using concepts similar to virtual desktop infrastructure, this design loads and processes data on the server. This means all transforms, such as window width/level adjustments, and image processing, such as multiplanar reconstruction or 3D rendering, are done by the server; the rendered images are then sent to the image-viewer client. This design typically requires high-end and sometimes specialized hardware, such as graphics processing units, to do the processing. One benefit of this architecture? It reduces the hardware requirements at the workstation, says Dennison.
Historically, these types of systems couldn’t run all application processes on virtual machines, due to the dependency on the rendering hardware. The good news? This functionality is improving in some systems, Dennison says.
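To make the server-side transform concrete, here is a sketch of the window width/level step mentioned above: the server maps full-bit-depth pixel values to 8-bit display values, and only that small rendered output crosses the network. The function name and the example window values are illustrative, not taken from any particular product.

```python
def render_window_level(pixels, window_width, window_center):
    """Server-side window/level transform: map raw (e.g., 12-bit CT)
    values into 0-255 display values. Values below the window clamp
    to black, values above it clamp to white."""
    lo = window_center - window_width / 2
    rendered = []
    for p in pixels:
        v = (p - lo) / window_width * 255
        rendered.append(int(max(0, min(255, v))))
    return rendered

# A soft-tissue window (width 400, center 40) applied on the server;
# the client receives ready-to-paint 8-bit values.
# render_window_level([-1000, 40, 3000], 400, 40) -> [0, 127, 255]
```

Because the heavy lifting happens server-side, the client only needs enough hardware to paint the finished pixels, which is the workstation savings Dennison points to.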
Aine Cryts is a contributing writer for AXIS Imaging News.