The Radiological Society of North America (RSNA) has launched its third annual artificial intelligence (AI) challenge: the RSNA Intracranial Hemorrhage Detection and Classification Challenge. An AI Challenge is a competition among researchers to create applications that perform a defined task according to specified performance measures. Last year’s pneumonia detection challenge drew more than 1,400 participating teams.

“The goal of an AI challenge is to explore and demonstrate the ways AI can benefit radiology and improve clinical diagnostics,” says Luciano Prevedello, MD, MPH, chair of the Machine Learning Steering Subcommittee of the RSNA Radiology Informatics Committee. “By organizing these data challenges, RSNA plays a critical role in demonstrating the capabilities of machine learning and fostering the development of AI in improving patient care.”

This year, researchers are working to develop algorithms that can identify and classify subtypes of hemorrhage on head CT scans. The dataset, which comprises more than 25,000 head CT scans contributed by several research institutions, is the first multiplanar dataset used in an RSNA AI Challenge.

The Machine Learning Steering Subcommittee worked with volunteer specialists from the American Society of Neuroradiology (ASNR) to label these exams for the presence of five subtypes of intracranial hemorrhage—an effort of unprecedented scope in the radiology community, RSNA officials say.
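
In machine-learning terms, this labeling scheme makes the task a multi-label classification problem, since a single exam can show more than one subtype at once. As a minimal sketch, assuming the five subtype names defined by the challenge organizers (epidural, intraparenchymal, intraventricular, subarachnoid and subdural) and a simple multi-hot encoding, the annotations for one exam might be represented like this:

```python
# Hypothetical sketch: one multi-hot label vector per exam.
SUBTYPES = ["epidural", "intraparenchymal", "intraventricular",
            "subarachnoid", "subdural"]

def encode_labels(present):
    """Encode the set of subtypes found in an exam as a multi-hot vector."""
    return [1 if s in present else 0 for s in SUBTYPES]

# Example: an exam annotated with subarachnoid and subdural hemorrhage.
print(encode_labels({"subarachnoid", "subdural"}))  # [0, 0, 0, 1, 1]
```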

The challenge is being run on a platform provided by Kaggle, Inc. (a subsidiary of Alphabet, Inc., the parent company of Google). Kaggle has recognized the RSNA Intracranial Hemorrhage Detection and Classification Challenge as a public good and will award $25,000 to the winning entries.

On September 3, the first wave of data was released to participating researchers, opening the training phase, which runs through November 4. During this phase, participants use a training dataset that includes the radiologists’ labels to develop and “train” algorithms that replicate those annotations.
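
To illustrate what this phase involves, the sketch below shows one way a team might set up multi-label training. It assumes PyTorch, a trivial placeholder in place of a real CNN backbone, and binary cross-entropy as the loss; none of these choices is mandated by the challenge:

```python
# Minimal training-step sketch, assuming PyTorch; the stand-in model
# and loss are illustrative, not the challenge's prescribed method.
import torch
import torch.nn as nn

NUM_CLASSES = 5  # one output per hemorrhage subtype

model = nn.Sequential(          # stand-in for a real CNN backbone
    nn.Flatten(),
    nn.Linear(512 * 512, NUM_CLASSES),
)
criterion = nn.BCEWithLogitsLoss()  # independent per-subtype probabilities
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One gradient step: predict subtype logits, compare to labels."""
    optimizer.zero_grad()
    logits = model(images)            # shape: (batch, NUM_CLASSES)
    loss = criterion(logits, labels)  # labels are multi-hot floats
    loss.backward()
    optimizer.step()
    return loss.item()

# e.g. a batch of four 512x512 single-channel CT slices, all negative
images = torch.randn(4, 1, 512, 512)
labels = torch.zeros(4, NUM_CLASSES)
print(train_step(images, labels))
```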

During the evaluation phase, from November 4 to November 11, participants will apply their algorithms to the testing portion of the dataset, which is provided to them with the annotations withheld. Their results will then be compared to the annotations on the testing dataset, and an evaluation metric will be applied to rate their accuracy and determine the winners.
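
The article does not specify which metric will be used. A multi-label logarithmic loss is a common scoring choice for challenges of this kind, and the sketch below shows how such a metric works: confident correct predictions are penalized lightly, confident wrong ones heavily. This is illustrative only; the challenge’s official metric may differ, for example by weighting some labels more than others:

```python
# Illustrative multi-label log loss; not the challenge's official formula.
import math

def multilabel_log_loss(y_true, y_pred, eps=1e-15):
    """Mean negative log-likelihood over every (exam, label) pair."""
    total, count = 0.0, 0
    for truths, preds in zip(y_true, y_pred):
        for t, p in zip(truths, preds):
            p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
            total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
            count += 1
    return total / count

# Two exams, two labels each: sharper correct predictions score lower.
print(multilabel_log_loss([[1, 0], [0, 1]],
                          [[0.9, 0.1], [0.2, 0.8]]))  # ~0.164
```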

Results will be announced in November, and the top submissions will be recognized in the AI Showcase Theater during the RSNA annual meeting, which takes place December 1-6 in Chicago.