Medical image acquisition has improved substantially in recent years, with devices acquiring data at faster rates and higher resolution. The image interpretation process, however, has only recently begun to benefit from computer technology. Most medical images are interpreted by radiologists, but human interpretation is limited by large variation across interpreters and by fatigue. A radiologist's main tasks include an initial search to detect abnormalities, segmentation to quantify measurements, and characterization of findings into categories such as benign vs. malignant.
In this talk I will give an overview of the deep learning computer-aided detection and diagnosis tools we are developing, which can support the detection, segmentation, and characterization tasks. Examples will be presented in chest X-ray, liver CT, and brain MRI analysis. Obtaining large-scale annotated datasets is a key challenge in the medical domain, and I will present novel methods we are developing to address it. I will conclude with an overview of possible translations of these tools toward augmented radiology reports and more efficient radiologist workflows.