Deep Learning and Machine Intelligence

March 12, 2017


By Whitney L. Jackson

CHICAGO—The overarching theme of RSNA 2016 has been deep learning and machine intelligence. Both are designed to help you with your workflow and your ability to provide optimal patient care. But questions still exist about what these tools are and how you can implement them.

To answer these questions, Vlado Menkovski, a former research scientist with Philips, discussed the differences between the two tools, highlighting how each can be used.

“This technology has provided breakthroughs,” he said. “It’s been exciting to see the potential impacts it’s had on imaging analysis.”

Machine Intelligence

Simply put, conventional programming addresses a known and well-understood problem. For example, scientists understand the process needed to launch a satellite into space, he said, and they can write a program to make it happen. Machine intelligence comes into play when a problem can't be fully specified in advance.

Via machine intelligence, you can pick apart your data, learn from it, and use it to make predictions about your findings. For example, he said, you can use machine intelligence to create algorithms that predict cancer prognoses. You can program an algorithm to consider tumor size and other characteristics seen in an image to determine whether the patient has a poor prognosis.
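That prediction idea can be sketched in a few lines of Python. Everything below is invented for illustration: the features, the toy data points, and the simple logistic model are assumptions, not clinical values or anything Menkovski presented.

```python
# Toy sketch: predict a "poor prognosis" label from two tumor features.
# Features and labels are made up for illustration only.
import math

# (tumor size in mm, involved lymph nodes) -> 1 = poor prognosis
data = [
    ((12.0, 0), 0), ((15.0, 1), 0), ((18.0, 0), 0),
    ((45.0, 3), 1), ((52.0, 4), 1), ((60.0, 5), 1),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression trained by stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.01
for _ in range(5000):
    for (size, nodes), label in data:
        x = (size / 60.0, nodes / 5.0)  # crude feature scaling
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - label                 # gradient of the logistic loss
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

def predict(size, nodes):
    """Return a probability-like score for poor prognosis."""
    x = (size / 60.0, nodes / 5.0)
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

print(predict(55.0, 4))  # large tumor, many nodes: high score
print(predict(10.0, 0))  # small tumor, no nodes: low score
```

Real systems use richer features and far more data, but the shape of the task is the same: learn a mapping from measured characteristics to an outcome.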

Deep Learning

Overall, deep learning is a method for implementing machine intelligence. Its main component is the artificial neural network, modeled loosely on the human brain. But while the neurons of the human brain can fire and connect to each other in any way, the units of an artificial neural network are connected in specific patterns and discrete layers.
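Those discrete layers can be made concrete with a tiny forward pass. This is a minimal sketch in plain Python; the network shape and weights are arbitrary placeholders, not a trained model.

```python
# Each neuron in one layer connects only to neurons in the next layer:
# that is the "discrete layers" structure of an artificial neural network.

def relu(v):
    """Common activation: zero out negative values."""
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    """Fully connected layer: every input feeds every output neuron."""
    return [
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ]

# A 3 -> 2 -> 1 network: two layers applied in sequence.
x = [2.0, 4.0, 1.0]
h = relu(dense(x, [[0.5, 0.25, -1.0], [-0.5, 0.5, 0.25]], [0.5, 0.0]))
y = dense(h, [[1.0, -2.0]], [0.25])
print(y)  # -> [-0.75]
```

Training consists of adjusting those weights and biases from data; the layered wiring itself stays fixed.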

Deep learning models can analyze images layer by layer, identifying features such as edges, and as the use of Big Data increases, you'll be able to train computer models to do even more. Already, these networks can be trained to localize findings with width and height coordinates or to label individual pixels to segment different organs.
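The edge-identification step can be illustrated with a single convolution. A minimal sketch, assuming a hand-written Sobel-style kernel as a stand-in for the weights a network would actually learn:

```python
# A tiny image: dark left half, bright right half, so it contains
# one vertical edge down the middle.
image = [
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
]

# Hand-written vertical-edge kernel (Sobel-style); in a trained
# network, early layers learn filters like this from data.
kernel = [
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1],
]

def convolve(img, k):
    """Valid 3x3 sliding-window filter (cross-correlation, as in deep learning)."""
    out = []
    for i in range(len(img) - 2):
        row = []
        for j in range(len(img[0]) - 2):
            row.append(sum(
                k[di][dj] * img[i + di][j + dj]
                for di in range(3) for dj in range(3)
            ))
        out.append(row)
    return out

# Strong responses line up with the vertical edge; flat regions give zero.
print(convolve(image, kernel))  # -> [[0, 4, 4, 0], [0, 4, 4, 0]]
```

Stacking many such filtered maps, layer after layer, is how deep networks build up from edges to organ-level structure.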

What One Company Offers

Some companies are already jumping in to find the best ways to make deep learning and machine intelligence applicable in radiology. One company, Enlitic, has developed a lung nodule detector designed to reach positive predictive values 50 percent higher than those achievable by a radiologist. As the model analyzes images, it learns and, over time, can offer a probability score for malignancy.

The company is also investigating whether these tools can be used to identify wrist fractures.

According to company chief medical officer Igor Barani, MD, up to 40 percent of fractures are missed, leading to improper healing and pain. The model is being trained to find fractures on X-ray images and to overlay a heat map that highlights their locations in a conventional PACS viewer. Radiologists are checking for accuracy, and results, so far, are positive.

Eventually, Barani said, Enlitic wants to expand its deep learning and machine intelligence capabilities to CT and MRI scans and to a wider variety of medical conditions, incorporating the ACR guidelines along the way. The end goal, he said, is to build a neural network that uses genomic, clinical, and imaging data to evaluate the entire human body and detect pathological states and deviations from normal anatomy.

Much work still needs to be done, and the industry needs to determine how best these tools can be used to augment the services you and your colleagues provide. Deep learning and machine intelligence will be best used, Barani said, when radiologists better understand what these technologies can and cannot do.

“Half the battle has to do with expectation management,” he said. “You have to avoid the hype about deep learning and machine intelligence. It’s very important to help people understand the problems it can help solve and those it can’t.”
