By Mathieu Boudreau
This month’s MRM Editor’s Pick is from Fang Liu and Richard Kijowski, researchers at the University of Wisconsin-Madison. Their paper presents a novel approach to automatic segmentation of knee joint structures, combining the power of recently developed deep learning techniques with a 3D deformable model approach. We recently spoke with Fang and Rick about their current and upcoming projects.
MRMH: Please tell us a little bit about yourselves.
Fang: I did my undergrad in China in biomedical engineering. I then moved to London, Ontario (Canada), where I did my master’s at Western University, studying dynamic contrast-enhanced MRI techniques for breast imaging. After that, I moved to Wisconsin for my PhD in the medical physics department. My research was primarily focused on musculoskeletal (MSK) imaging, using rapid quantitative and morphological MR methods for assessing cartilage, meniscus, and bone in MSK diseases like osteoarthritis. Recently, I started quite a few deep learning projects for MR image acquisition, reconstruction, and analysis. So I also call myself a deep learning and artificial intelligence researcher nowadays.
Rick: I am an MSK radiologist here at the University of Wisconsin-Madison. I have been here since completing my fellowship in 2003. My main area of research is the use of MRI to investigate all types of MSK diseases, with a specific focus on speeding up quantitative and morphologic MRI of cartilage and other joint structures involved in osteoarthritis.
MRMH: Could you give us a brief overview of the work you did in this paper?
Fang: The idea here is that we adapted a very interesting deep learning technique, called a convolutional encoder-decoder (CED) network, a type of highly efficient semantic-segmentation convolutional neural network (CNN), to extract 3D knee joint segmentations in an extremely efficient manner. One challenge we faced was how to refine the CNN results in 3D MR image space, because the CNN processes 2D images slice by slice. We proposed using a 3D surface-mesh-based modelling technique, the 3D simplex deformable model, to regularize the spatial 3D information in the output of the CED. A nice thing about combining these two techniques is that both methods are highly efficient, providing us with an accurate and time-efficient knee joint segmentation tool.
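To illustrate the two-stage pipeline Fang describes, here is a minimal NumPy sketch: a 2D per-slice segmenter whose outputs are stacked into a volume and then regularized in 3D. Note that this is only a toy illustration of the data flow; the placeholder thresholding stands in for the trained CED network, and the through-plane majority vote stands in for the 3D simplex deformable model (all function names are our own, not from the paper).

```python
import numpy as np

def segment_slice_2d(slice_2d, threshold=0.5):
    """Placeholder for the 2D CED network: labels each pixel 0/1.

    In the paper this is a trained convolutional encoder-decoder;
    here a simple intensity threshold stands in for its prediction.
    """
    return (slice_2d > threshold).astype(np.uint8)

def regularize_3d(label_volume):
    """Placeholder for the 3D simplex deformable model: enforces
    through-plane consistency with a majority vote over each voxel
    and its two neighbouring slices (a crude 3D regularizer).
    """
    padded = np.pad(label_volume, ((1, 1), (0, 0), (0, 0)), mode="edge")
    votes = padded[:-2] + padded[1:-1] + padded[2:]
    return (votes >= 2).astype(np.uint8)

def segment_knee_volume(volume):
    """The overall data flow: 2D slice-by-slice inference,
    then 3D regularization of the stacked output."""
    stacked = np.stack([segment_slice_2d(s) for s in volume])
    return regularize_3d(stacked)

# Toy "MR volume": 4 slices of 8x8 intensities in [0, 1].
rng = np.random.default_rng(0)
volume = rng.random((4, 8, 8))
mask = segment_knee_volume(volume)
print(mask.shape)  # (4, 8, 8)
```

The key design point the sketch captures is that 2D inference is cheap and parallel across slices, while the 3D step restores spatial coherence that slice-wise processing cannot see.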
Rick: I think it is also really important to give credit to what has been done before and to give previous researchers their due. Researchers have developed various semi-automated and fully automated methods for knee joint segmentation over the past 10 years, since the demand has been so high. There are some really good semi-automated and fully automated methods out there, which are commercially available and have been used in large clinical studies. They do have a few drawbacks, however: they work well, but they can be time-consuming and computationally costly. So, the work we presented here is a new alternative deep learning method, which can perform rapid segmentation of not only cartilage and bone but all MSK tissues within the knee joint, as we demonstrated in another recently published MRM paper from our group.
MRMH: How does this work fit in your broader research goals moving forward?
Rick: This work basically serves as step one in our broad design to use deep learning as a diagnostic and predictive tool for medical image analysis. It seems reasonable that the first step in detecting a disease is to segment out the tissues where the disease is located. For example, to detect cartilage lesions, we feel that it is best to first segment the cartilage, and then use a second classification system in order to predict the likelihood that the segmented tissue is normal or abnormal. Our overall goal is to apply deep learning technology in clinical practice to improve diagnosis of MSK diseases.
MRMH: Do you have any advice for researchers who may want to do similar work?
Fang: I think there are at least three essential components to consider before starting a deep learning project. You need to understand the fundamental concepts of deep learning, including elements such as convolutional layers, pooling operations, and normalization layers; choose the tools you feel comfortable starting with (for deep learning there are various tools and libraries, such as TensorFlow, Theano, or PyTorch); and, most importantly, identify a research problem that has strong clinical value and that you think fits well within the scope of deep learning. Those are all really important to consider when starting a deep learning project for medical imaging applications. It might also be helpful to have a group of researchers, from both the clinical and technical sides, to brainstorm ideas and thoughts.
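To make the first of those components concrete, here is a toy NumPy sketch of the three building blocks Fang mentions, namely a convolutional layer, a pooling operation, and a normalization layer. These are simplified single-channel versions for illustration only (the function names and the example filter are ours, not from the paper):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Core of a convolutional layer: slide a small filter over the
    image and sum the elementwise products (valid padding)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(x, size=2):
    """Pooling operation: downsample by keeping each local maximum."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def normalize(x, eps=1e-5):
    """Normalization layer: rescale activations to zero mean,
    unit variance, which stabilizes training."""
    return (x - x.mean()) / (x.std() + eps)

image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])  # simple difference filter
features = normalize(max_pool2d(conv2d_valid(image, kernel)))
print(features.shape)  # (2, 2)
```

In practice a library such as PyTorch or TensorFlow provides optimized, multi-channel versions of all three operations; the sketch only shows what each one computes.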
MRMH: Are you considering making your code publicly available?
Fang: We are actually working on a code package. We’re hoping to upload our code somewhere online very soon, maybe on GitHub or Sourceforge, to help the MR research community.
MRMH: Is there anything in particular you enjoy doing when you’re not in the lab in Wisconsin?
Fang: There’s lots of fun stuff to do here. During the summer you can enjoy beautiful lakes and mountains, go hiking, or go camping with your family. It’s a fun place to live.
Rick: In the fall, the biggest things in Wisconsin are definitely the football team, the beer, and the bratwursts.