By Jessica McKay-Nault
If you don’t work in the machine learning space, you might be surprised to discover that most convolutional neural networks (CNNs) split data into real and imaginary channels, ignoring the underlying structure of complex-valued data altogether. Recently, researchers, including Elizabeth Cole, Shreyas Vasanawala, and colleagues at Stanford University, have started asking whether complex-valued CNNs would perform better. In their paper “Analysis of deep complex-valued convolutional neural networks for MRI reconstruction and phase-focused applications”, which is this month’s Highlights pick, they go a step further, seeking to evaluate the effect of complex-valued CNNs on clinical MRI applications that rely heavily on phase. They were surprised to find that using a complex-valued CNN made a big difference in the measurement of peak velocity of blood flow in the heart. Their paper was chosen as this month’s pick because of its exemplary reproducible research practices; for example, the authors shared their code on GitHub, included a requirements file listing their dependencies, and provided details on how to train and test their deep learning models.
To discuss this Q&A, please visit our Discourse forum.
MRMH: First, why don’t you tell me a little bit about your background and how you ended up in MRI research?
Elizabeth: Sure! My background is in electrical engineering and signal processing. I joined the lab not knowing anything at all about MRI – so that was quite a learning curve! I got into MRI both because my background was in signal processing, and because I wanted to go into machine learning. So, it was kind of the perfect combination. Currently, I am going into the fifth year of my PhD, and I’ve been working a lot on machine learning, primarily in reconstruction.
Shreyas: I studied mathematics as an undergrad and then pursued an MD/PhD at Stanford. I had the good fortune of taking a class from Dwight Nishimura on MRI, and wound up joining his group. While training in radiology and then specializing in pediatric radiology, I kept up with some MRI research. Since 2007, I’ve been practicing pediatric radiology and working on developing new approaches to pediatric MRI.
MRMH: Before we get into the details, how would you describe a convolutional neural network (CNN) to the average person? Or to a scientist who isn’t familiar with deep learning?
Elizabeth: I would say that a CNN is an algorithm that can take a data set, typically images, and assign importance to certain aspects of it, and also differentiate one image from another. To a scientist, I would say that CNNs are a way to model complex mappings and functions for a given task without having to explicitly define those complex functions yourself. And by complex, I mean complicated, not complex valued; we should probably differentiate!
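As a minimal illustration of the "assigning importance to certain aspects" that Elizabeth describes (this sketch is ours, not from the paper), the basic building block of a CNN is a small kernel slid over an image; a hand-crafted edge-detecting kernel shows the idea, with the key difference that a CNN learns its kernel values from data:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Minimal 2-D 'valid' convolution (really cross-correlation, as CNNs use)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # Weighted sum of the local neighborhood under the kernel.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy image with a vertical edge between its dark and bright halves:
image = np.array([[0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.]])

# A hypothetical vertical-edge kernel; a trained CNN would learn such weights.
edge_kernel = np.array([[-1., 1.],
                        [-1., 1.]])

response = conv2d_valid(image, edge_kernel)
# The response is large exactly where the intensity changes left-to-right,
# i.e., the kernel "assigns importance" to vertical edges.
```

In a real network, many such learned kernels are stacked in layers, which is what lets a CNN model a complicated mapping without anyone writing that mapping down explicitly.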
MRMH: Why do most deep learning applications in MRI only consider the real values?
Elizabeth: I also wondered this at the start of the project. And the answer is actually simple: it is because deep learning platforms only provide real-valued capability in the most common building blocks of CNNs. If you try to run complex-valued data through a typical CNN in something like TensorFlow, you’re going to get an error because it hasn’t been built into those platforms. I think that’s a big obstacle for people wanting to use complex-valued networks.
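To make the obstacle concrete, here is a small numpy sketch (ours, not from the paper) of the common workaround: since standard convolution layers expect real dtypes, complex-valued MRI data is typically split into two real-valued channels, which discards the explicit coupling between magnitude and phase:

```python
import numpy as np

def split_complex_to_channels(x):
    """Stack real and imaginary parts as two real-valued channels.

    This is the usual workaround when a framework's conv layers reject
    complex dtypes: an (H, W) complex image becomes an (H, W, 2) real
    array that a standard real-valued CNN can ingest.
    """
    return np.stack([x.real, x.imag], axis=-1)

# A toy complex-valued "image", e.g. one slice of reconstructed MRI data.
img = np.array([[1 + 2j, 3 - 1j],
                [0 + 1j, 2 + 0j]])

channels = split_complex_to_channels(img)
# channels has shape (2, 2, 2); the network now sees two ordinary real
# channels, and the phase structure of the data is no longer explicit.
```

A complex-valued network instead keeps each pixel as a single complex number and defines its convolutions, activations, and losses directly on that representation.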
MRMH: How could we solve that problem as a research community?
Elizabeth: One avenue is where my work comes in; indeed, I published the code repository not just for reproducibility purposes, but also to surmount this issue. If you’re a scientist who has some complex-valued data, you’ll find that, unfortunately, TensorFlow and PyTorch don’t support your data. So, you can use my function!
In terms of getting TensorFlow or PyTorch to implement complex-valued CNN building blocks, I think you would need to have enough people coming forward, and enough papers showing them that “Hey, this is needed!” As for why this hasn’t been provided already, I would say that the deep learning community was built around typical RGB images, like photographs of people or landscapes, that are not complex valued. Using complex-valued medical images is a cultural shift and an application shift that will simply need to happen.
MRMH: You know, it’s funny, but even though I’m acutely aware that MRI uses complex-valued data, it hadn’t occurred to me that deep learning in MRI is done on real-valued data. I was so surprised on reading your paper; I had never even thought about it.
Elizabeth: I think I was the opposite. Coming into the PhD I didn’t know anything about medical imaging, so I thought “Wait, this is complex valued!?” It really depends on which side you’re coming from.
MRMH: [Laughs] You look at your MRI data and say, “What do I do with these i’s?!”
Are there any other challenges in using complex values?
Elizabeth: No, I wouldn’t say so. It’s mostly just a practical problem that the platform simply doesn’t support.
MRMH: In research, there are so many practical obstacles that you have to overcome that aren’t scientifically interesting, or even aspects you would write about in a grant application, and yet you still have to get over those hurdles.
Elizabeth: Exactly. I’ve learned that that’s pretty much what research is — trial and error of the silliest things that can make or break a project. You have to persevere, though. Especially with DICOMs… don’t even get me started on writing DICOMs! I’m at the point that if Shreyas mentions DICOMs around me, I’m just gonna leave! [laughs]
Shreyas: We certainly don’t want that!
MRMH: [laughs] Yes, I feel your pain!
Elizabeth: Another practical question is whether you have access to the complex-valued data. It can be hard for some researchers to find MRI datasets, or often they only have magnitude image data sets where you don’t have k-space or the complex-valued data. In that case, you have to use a real-valued network.
MRMH: It sounds like we may need to rethink data sharing and put more effort into saving and sharing the complex-valued data.
Elizabeth: It would be ideal to have easy access to the raw k-space data; you would then be able to do so much more with that data. We actually have a site from our lab called MRIdata.org, where we upload different raw k-space data sets to try to make them more public and accessible to people whose universities aren’t linked with a hospital.
Shreyas: It has helped other researchers across the world who don’t have access to clinical patients or even scanners.
MRMH: That’s cool! Okay, going back to deep learning…What is an activation function and why is it important?
Elizabeth: An activation function is a common part of your standard CNN. It’s essentially a function that you define and apply to the data in the middle of your neural network. Activation functions are important because they can introduce nonlinearities into the network, which is how CNNs are able to model more complicated functions. Without introducing nonlinearities, you might end up with a linear function as your network, which is not going to be able to model everything that you want it to.
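Elizabeth's point about nonlinearity can be checked in a few lines (our sketch, not from the paper): two stacked linear layers with no activation collapse into a single linear map, while inserting a ReLU between them breaks that collapse:

```python
import numpy as np

def relu(x):
    """Rectified linear unit: pass positives through, zero out negatives."""
    return np.maximum(x, 0.0)

# Two linear "layers" as 1x1 weight matrices.
W1 = np.array([[2.0]])
W2 = np.array([[3.0]])

# Without an activation, composing the layers is just one linear map W2 @ W1:
x = np.array([[1.5]])
assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

# With a ReLU in between, the two are no longer equivalent for negative inputs:
x_neg = np.array([[-1.5]])
with_act = W2 @ relu(W1 @ x_neg)   # ReLU zeroes the negative intermediate
without = (W2 @ W1) @ x_neg        # the purely linear composition
```

However deep the stack, a network with no activations can only represent one linear function; the pointwise nonlinearity is what gives depth its expressive power.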
MRMH: How do you make it complex-valued?
Elizabeth: That’s a good question. There are many ways. With ReLU, for example, you could apply ReLU to the magnitude but keep the phase the same. Or vice versa. I tested several different complex-valued activation functions and found that the best was CReLU, which is a complex-valued version of ReLU. Since ReLU is pretty much the gold standard, it makes sense that something similar would work for complex-valued networks, but I don’t have a more intuitive understanding than that.
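The two flavors Elizabeth mentions can be sketched in a few lines of numpy (our illustration; the `bias` threshold in the magnitude variant is a hypothetical parameter, and the paper should be consulted for the exact definitions used):

```python
import numpy as np

def crelu(z):
    """CReLU: apply ReLU independently to the real and imaginary parts."""
    return np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)

def mag_relu(z, bias=0.5):
    """Magnitude-style activation: threshold |z| but preserve the phase.

    The learnable offset `bias` is a common ingredient in such
    magnitude-based activations (e.g. modReLU-like variants).
    """
    mag = np.abs(z)
    phase = np.exp(1j * np.angle(z))
    return np.maximum(mag - bias, 0.0) * phase

z = np.array([1 - 2j, -0.3 + 0.4j])
# crelu zeroes the negative real/imaginary components independently,
# so it can change the phase of its input; mag_relu keeps the phase
# of every sample whose thresholded magnitude survives.
out_crelu = crelu(z)
out_mag = mag_relu(z)
```

For phase-sensitive applications like flow velocity mapping, the design question is exactly this trade-off: whether the activation is allowed to distort phase or must preserve it.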
MRMH: This is one of the things that I always struggle with in deep learning. As you said, ReLU is the gold standard, but why? What about it makes it the gold standard?
Elizabeth: I think that’s also an issue in the deep learning community. Because of the nature of what we’re doing, it is like a black box sometimes. It can be hard for people to explain their results.
MRMH: On the bright side, though, I’ve learned a lot here today! Well, it was great to get to chat with you. And I guess I will be seeing you around Stanford!