Q&A with Jaime Mata and Nick Tustison

By Maria Eugenia Caligiuri

Nick Tustison and Jaime Mata at the awards ceremony of the 2020 Zion Half-marathon, where Nick finished in first place and Jaime finished second in the overall Masters category.

Jaime Mata and Nick Tustison met 22 years ago, working side by side as master’s degree students, and they have been very good friends ever since. After taking different career paths for a few years, they were reunited as associate professors of radiology and medical imaging at the University of Virginia. In case you are wondering, yes, their research work is as inspiring as their lifelong friendship!

Here, we discuss their work entitled “Image- versus histogram-based considerations in semantic segmentation of pulmonary hyperpolarized gas images”, in which they demonstrate how deep learning can outperform traditional histogram-based quantification strategies in lung imaging. We also talk about how they made the entire processing and evaluation framework available as open source.

To discuss this Q&A, please visit our Discourse forum.

MRMH: Could you each tell us a little about yourself and your background?

Jaime: I grew up in Portugal, where I graduated in applied physics at the University of Lisbon. After that, I continued with my master’s degree and PhD studies at the University of Virginia, where from the outset I was involved with MRI, developing pulse sequences in the university’s pioneering program on polarized gas MRI. Currently, I am a principal investigator on many clinical trials, including work towards obtaining FDA approval for these techniques so that physicians will be able to prescribe MRI of the lungs.

Nick: Like Jaime, I have an undergrad background in applied physics but with a computer science emphasis.  While he focuses on the acquisition side, my area is post-processing and image analysis. After my master’s degree, I went to Washington University in St. Louis, where I worked on applications of cardiac MRI. After that I did a post-doc with James Gee at the University of Pennsylvania, where I met my good friend and colleague Brian Avants, with whom I co-founded the Advanced Normalization Tools (ANTs) software. The algorithm proposed in this paper, El Bicho (“The Bug”), is now part of the ANTs deep learning library, called ANTsXNet.

MRMH: Can you explain the potential of hyperpolarized gas imaging of the lungs?

Jaime Mata is currently a Team USA Triathlon athlete and will be representing the USA at the World Championship over the Half-Ironman (70.3) distance.

Jaime: Traditional imaging of the lungs mainly means CT or nuclear medicine imaging. Both of these modalities deliver radiation, and, even at low doses, one is not supposed to have more than one exam every six months unless it’s medically necessary. Moreover, these clinical modalities do not provide good information about ventilation or lung physiology. Traditional proton MRI, while it has high resolution, can’t capture any ventilation or physiology information from the lungs either, because…well, they’re mostly air! With hyperpolarized imaging, we can overcome these limitations and develop multiple applications, depending on the pulse sequences and the gases used. Twenty years ago, we started with Helium-3, but now we are using hyperpolarized Xenon-129, which dissolves into lung tissue and is then carried away attached to the red blood cells, so we can get MR images and detailed regional information from multiple lung compartments (ventilation; diffusion into lung tissue; transfer to red blood cells/perfusion).

MRMH: How does it work?

Jaime: First, there’s no radiation involved in the process. The imaged subject inhales a gas that we hyperpolarize, greatly increasing the alignment of its spin-½ nuclei, so that the MR scanner can detect the signal using appropriate coils tuned to the specific frequency of the gas. Depending on the pulse sequence used, we can then acquire ventilation images and see if the airways are blocked or narrowed by mucus or inflammation, stopping the gas from getting through. Since we can get multiple high-resolution slice images in a single short breath-hold of less than 10 seconds, we can clearly see which areas of the lung are working well and which show some deficiency. Another very popular application in this field is dissolved-phase imaging: in this case, we exploit the way Xenon-129 dissolves into lung tissue and reaches the red blood cells in order to get physiological images and maps of the lungs. These techniques allow us to measure how much Xenon-129 is dissolved in the tissue and how much is bound to the red blood cells, as well as the T2* of the gas in those compartments and the respective chemical shifts, among other physiological parameters. All these parameters help us in diagnosing, characterizing, and evaluating different pulmonary diseases.

MRMH: That’s really impressive! How do you think all this might be translated into clinical practice?

Jaime: As researchers, we must remember that the goal of using new techniques in clinical research is really to help patients further down the line, in other words, to achieve earlier and better diagnoses and therefore improve their clinical outcomes. Since hyperpolarized gas MRI involves no radiation and no significant side effects, the technique is particularly well suited to monitoring longitudinal treatments in pediatric and adult populations. We are currently running multiple longitudinal clinical trials, in which the baseline MRI can be followed by treatment every two or three weeks for some time. In this way, it is possible to see, within a relatively small timeframe, whether or not patients are responding to the treatment. In cases where a treatment is not working, we can switch to a different one early on. I believe diagnostic tools like these will expand as more and more individualized medicine becomes available. There is a research community of about 30 centers worldwide now developing hyperpolarized MRI techniques, and once these are approved by the FDA for clinical use, hopefully in the next four to five months, I think the entire field will explode! This is one more reason why we developed this algorithm, which is capable of processing and analyzing large numbers of lung ventilation MR images in a more precise, reproducible, and autonomous way.

MRMH: Speaking about the development of reproducible methodologies, can you tell us a bit more about your decision to make El Bicho open source?

Although Nick is with the Radiology department at the University of Virginia, he resides in Southern California where some of his favorite research spots are located.

Nick: When Brian and I started working on ANTs, we felt the need to make it easier for other researchers to use. At first, given the popularity of open-source languages such as R and Python, we developed two interfaces, ANTsR and ANTsPy, respectively. Then, after the deep learning phenomenon hit, we started working on add-on packages called ANTsPyNet and ANTsRNet. In this setting, it was straightforward to continue following the open-source philosophy with El Bicho, too.
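
For readers who want to try El Bicho, the snippet below sketches what a call through ANTsPyNet might look like. The el_bicho utility ships with ANTsPyNet, but the argument names, the return structure, and the file paths shown here are illustrative assumptions; consult the ANTsXNet documentation for the current interface.

```python
# A rough sketch of calling El Bicho through ANTsPyNet. The argument names,
# return structure, and file paths are illustrative assumptions; check
# help(antspynet.el_bicho) in your installed version for the actual interface.
import ants
import antspynet

# Placeholder inputs: a hyperpolarized-gas ventilation image and a lung mask.
ventilation = ants.image_read("ventilation.nii.gz")
lung_mask = ants.image_read("lung_mask.nii.gz")

# Run the functional lung segmentation network.
output = antspynet.el_bicho(ventilation, lung_mask)

# Many ANTsPyNet segmentation utilities return a dictionary holding a discrete
# segmentation plus per-cluster probability maps; we assume the same here.
segmentation = output["segmentation_image"]
ants.image_write(segmentation, "el_bicho_segmentation.nii.gz")
```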

MRMH: El Bicho uses convolutional neural networks to segment hyperpolarized MR images of the lungs, outperforming the more conventional histogram-based techniques. How did you address the whole training and validation process? 

Nick: We have been working on deep learning applications for quite some time now, and as part of the development process we gained considerable experience with all the training and validation aspects. Last year, we demonstrated how our well-known cortical thickness pipeline works really well when adapted to a deep learning/convolutional neural network context. Thus, when El Bicho came along, we already had a pretty good understanding not only of the base network and the add-ons that we wanted to use, but also of how to get the most out of limited training data. 
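
Nick doesn’t spell out the augmentation recipe here, so the sketch below is purely a generic illustration of the kind of simple spatial and intensity perturbations often used to stretch a small segmentation training set; the specific operations and parameters are assumptions, not the approach actually used for El Bicho.

```python
import numpy as np

def augment_pair(image, labels, rng):
    """Return a randomly perturbed copy of a ventilation volume and its labels.

    Flips are applied to both arrays so they stay aligned; intensity jitter
    and noise touch the image only. This is a generic illustration, not the
    augmentation used for El Bicho.
    """
    image, labels = image.copy(), labels.copy()

    # Random flip along the first axis (e.g., left-right), image and labels together.
    if rng.random() < 0.5:
        image, labels = image[::-1, ...], labels[::-1, ...]

    # Random global intensity scaling to mimic scan-to-scan signal differences.
    image *= rng.uniform(0.9, 1.1)

    # Additive Gaussian noise at a small fraction of the dynamic range.
    image += rng.normal(0.0, 0.02 * float(image.max()), size=image.shape)

    return image, labels

# Example with placeholder data: one training pair expanded into ten variants.
rng = np.random.default_rng(0)
image = rng.random((128, 128, 14), dtype=np.float32)   # fake ventilation volume
labels = rng.integers(0, 4, size=image.shape)          # fake 4-cluster segmentation
augmented = [augment_pair(image, labels, rng) for _ in range(10)]
```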

MRMH: What was your dataset like? Is it publicly available as well?

Nick: In this particular instance, we only had 60 subjects. But based on the existing literature and on our own experience, we had ways of expanding that core set of 60 subjects to get a really nice training set. Unfortunately, that particular training set is private. But luckily, Mu He, one of our co-authors who started out at Duke University, had made a dataset public, and we were therefore able to show that our algorithm generalizes well.

MRMH: How could you possibly expand your training data? 

In his spare time, Jaime Mata likes to pilot airplanes, in particular gliders.

Nick: Obviously, we’d like to have more training data from other sites, but the main message of our paper is “Hey, even though everyone has been using histogram-based algorithms for a long time, it’s really worth exploring the possibilities of deep learning.” It’s not that we necessarily want people to use our algorithm; rather, we want them to look at deep learning as a development platform for pushing this field forward.

Jaime: The kind of analysis that El Bicho does on ventilation imaging and segmentation has been a focus of study in our field for quite a while, 15 years or so. Most research groups are very fond of histogram-based image segmentation, even though it is manually oriented and time consuming. Thus, each group ends up with a very small set of normal subjects that conforms to the binomial distribution but, in the end, does not really represent real-world data. Using deep learning might help to move away from that manual input, so that each group can grow its own normative dataset. It’s time to move away from those old algorithms and try something new, something reproducible that will allow every single site to analyze its data using identical algorithms, so that we can finally improve segmentation and quantification of lung MRI data.
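
For context, the sketch below shows one common flavor of the histogram-based analysis Jaime describes: normalize the ventilation signal within the lung and assign voxels to a handful of clusters with fixed cut points. The number of bins and the threshold values here are placeholders, not the scheme used in any particular published pipeline.

```python
import numpy as np

def bin_ventilation(ventilation, lung_mask, thresholds=(0.2, 0.4, 0.6, 0.8)):
    """Cluster lung voxels by simple thresholding of normalized ventilation.

    The signal is rescaled by a high percentile of the in-lung intensities
    (to dampen outliers) and then binned with fixed cut points. The number
    of bins and the thresholds are placeholders, not values from the paper.
    """
    inside = lung_mask > 0

    # Normalize to the 99th percentile of the signal inside the lung.
    scale = np.percentile(ventilation[inside], 99)
    normalized = np.clip(ventilation / scale, 0.0, 1.0)

    # digitize maps each voxel to a bin; +1 so that 1 = ventilation defect
    # and higher labels mean better-ventilated tissue, with 0 reserved for
    # background outside the lung mask.
    bins = np.digitize(normalized, thresholds) + 1
    return np.where(inside, bins, 0)

# Example with placeholder data.
ventilation = np.random.default_rng(0).random((128, 128, 14))
lung_mask = np.ones(ventilation.shape, dtype=bool)
clusters = bin_ventilation(ventilation, lung_mask)
print(np.bincount(clusters.ravel()))
```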

MRMH: What are your next steps to promote this change?

Jaime: We have a consortium that acts as a forum for meetings and discussions with other investigators in our field, and that also allows us to define standard protocols for image acquisition. Our goal is to gather data from different institutions, acquired on different scanners and from patients with different diseases, identify the sources of variation, and train our algorithm to model them. And of course, at the same time, we would also like other groups to use deep learning algorithms at their own sites; it doesn’t really matter whether those algorithms are based on our software or on something completely different. Because the only way to really progress in a field is to leave your comfort zone, innovate, create, and move forward!