Q&A with Moritz Blumenthal and Martin Uecker

By Mathieu Boudreau

Joint research group photo of the labs in Graz and Göttingen at this year’s ISMRM annual meeting in Toronto. Moritz is fourth from the right in the back row, and Martin is third from the left in the front row.

Our latest MRM Highlights Pick interview is with Moritz Blumenthal and Martin Uecker, researchers at the University Medical Center Göttingen in Göttingen, Germany, and at Graz University of Technology in Graz, Austria. Their paper, entitled “Deep, deep learning with BART”, was chosen as this month’s Highlights pick because they shared code capable of reproducing the figures from the publication, and also integrated it into an existing open-source toolbox (BART).

MRMH: Please tell us about yourselves and your backgrounds.

Moritz: My background is in physics. I have an undergraduate degree in this field and then did a Master’s degree as well, researching a topic that lies at the border between cosmology and particle physics. Afterwards I felt that if I were to do a PhD, I wanted to work on a more applied topic. Having a good background in maths, I looked into some deep learning projects, and just by coincidence one of my friends visited Martin’s lab and found that he was about to start a big data project. One thing led to another and I decided to do my PhD with Martin, working with deep learning and MRI.

Martin: My story is very similar. I also studied physics and mathematics at the University of Göttingen, and my diploma thesis focused on theoretical physics. I, too, was first introduced to MRI by a friend, who in my case happened to be Tobias Block. He told me about MRI, which I thought sounded very interesting. I applied to do a PhD in the lab of Jens Frahm, where I worked mostly on parallel imaging and image reconstruction, and that’s how I first got into the field.

MRMH: Before jumping into the paper, could you explain what BART is and tell us a little bit of the history behind it?

Martin: BART stands for Berkeley Advanced Reconstruction Toolbox, and it was a project I started in Miki Lustig’s lab, where I did a postdoc. When I moved to Miki’s lab, I said I’ll come, but I want to make all my source code public, because I wasn’t allowed to do that with my previous work. Miki agreed, and that’s how BART started as an open-source project. It took a while to get to the first release, which came at the Sedona meeting in 2013, and at that time it was just a zip file or tarball. Back then, sharing code was still a new thing for the community. We presented in a software demo session there, and then people started to actually use it. This was important, as it motivated us to develop faster and better reconstruction techniques for compressed sensing. And the rest is history: it became ever more popular as more people used it, and we extended it repeatedly, every time we published new things.

MRMH: So, congrats on BART’s 10th birthday! Could you now give us an overview of your article?

Moritz: Basically, the article stemmed from the idea behind my PhD project. Nearly everybody does deep learning nowadays, and although BART offered some initial support for it (e.g., automatic differentiation), many users depend on other deep-learning packages not specialized for MRI. Our motivation was to implement everything we needed for deep learning in BART ourselves, partly to ensure the reproducibility of image reconstruction pipelines designed with deep learning. We knew from our own experience that reproducing other people’s open-source deep learning reconstruction pipelines is challenging when they use tools such as TensorFlow, which are updated often and whose versions may only be compatible with specific versions of other software dependencies. All this means you have to recompile software, and it can get fairly messy. We wanted to have this under our control. The paper describes the numerical methods we added to BART for deep learning and how we optimized them. In particular, we describe how we implemented automatic differentiation for complex numbers. The rest of the paper shows some example implementations and reproductions of other published work.
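To give a flavor of what automatic differentiation for complex numbers involves, here is a minimal NumPy sketch of the underlying Wirtinger calculus (an illustration only, not BART’s actual implementation): for a real-valued loss of a complex variable, the descent direction used by gradient-based training is the conjugate Wirtinger derivative, which for a least-squares loss has a familiar closed form.

```python
import numpy as np

# Illustrative sketch of Wirtinger calculus (not BART's implementation):
# for the real-valued loss L(z) = ||A z - b||^2 of a complex vector z,
# gradient-based training descends along the conjugate Wirtinger
# derivative dL/d(conj(z)), which here is simply A^H (A z - b).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
b = rng.standard_normal(4) + 1j * rng.standard_normal(4)
z = rng.standard_normal(3) + 1j * rng.standard_normal(3)

grad = A.conj().T @ (A @ z - b)  # dL/d(conj(z)): the descent direction

# Sanity check via finite differences of the real-valued loss:
# perturbing the real and imaginary parts of z_0 should recover
# 2*Re(grad[0]) and 2*Im(grad[0]), respectively.
loss = lambda z: np.linalg.norm(A @ z - b) ** 2
eps, e0 = 1e-6, np.eye(3)[0]
d_re = (loss(z + eps * e0) - loss(z - eps * e0)) / (2 * eps)
d_im = (loss(z + 1j * eps * e0) - loss(z - 1j * eps * e0)) / (2 * eps)
print(np.allclose([d_re, d_im], [2 * grad[0].real, 2 * grad[0].imag]))  # True
```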

Martin: I think Moritz explained our rationale very well. Personally, I found that, ultimately, if you maintain software, every dependency you have can become a problem. So having everything under control ensures that the tools needed will be compatible in the long run. 

MRMH: Did you encounter any unexpected challenges during the development of this framework?

Moritz: Not so much unexpected, but quite a lot of work was needed to make it fast. That was one of the major problems we had to solve, and it was a challenge.

MRMH: Martin, were you surprised by any of the results of Moritz’s work?

Martin: Yes – I was very impressed in the end, because I’d only expected him to implement a little bit of deep learning, but he actually went far beyond that, implementing a complete framework. It was so much more than I’d expected originally, so I was surprised and very pleased.

MRMH: Do you have any practical advice for people who might want to start using this framework?

Moritz: We made a webinar for one of the recent ISMRM conferences, and made both the recording and an interactive notebook for Google Colab available to everyone (here). I think this would be a great place to start, because it’s a complete end-to-end example of deep learning in BART, and includes the necessary data in the MRD format. If I had a new student fresh from a Bachelor’s or Master’s degree, I would start by giving them this notebook. We also shared the scripts from our paper on GitHub, but in a less beginner-friendly format, so the webinar repository is definitely the best place for beginners to start.

MRMH: Could you share your thoughts on how to make research more reproducible, in particular on the computational side with software like BART?

Martin: In general, I think there are easy ways to make things reproducible, and also harder ways. Somehow, we always pick the harder ways, I think [laughs]. One way people do it is by basically “freezing” the software environment at the time of publication, and often that allows others to reproduce your results. That’s a good strategy for some people, but it’s not what we aim for with BART, which we keep fully backward compatible. The goal is that anybody starting from one of our papers can continue working on the topic directly with the most recent BART version. So, we need to integrate the innovations from new papers into the BART framework itself, and then keep it reproducible so that we can still reproduce the results of all our previous papers as well, which can be very challenging even with tools like unit testing.
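To illustrate the kind of regression test this implies, here is a minimal sketch (an assumed general pattern, not BART’s actual test suite; the file names and tolerance are hypothetical): the output archived at publication time becomes a reference, and the current version of the toolbox must reproduce it within a numerical tolerance rather than bit-exactly. BART itself has an `nrmse` tool that its integration tests use in a similar way.

```python
import numpy as np

def nrmse(ref: np.ndarray, out: np.ndarray) -> float:
    """Normalized root-mean-square error between a result and its reference."""
    return float(np.linalg.norm(out - ref) / np.linalg.norm(ref))

def check_regression(ref_file: str, out_file: str, tol: float = 1e-5) -> None:
    """Fail if a freshly computed result drifts from the stored reference.

    ref_file would hold the output archived when the paper was published;
    out_file is the same experiment recomputed with the current toolbox.
    (File names and tolerance are hypothetical.)
    """
    ref, out = np.load(ref_file), np.load(out_file)
    err = nrmse(ref, out)
    assert err <= tol, f"regression: NRMSE {err:.2e} exceeds tolerance {tol:.2e}"
```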

Moritz: Another problem with research involving computation, and deep learning in particular, is that it involves a very long chain of floating-point operations. Because of this, it can be sensitive to small things, even which numbers we sum together first. Although this may initially cause only a tiny numerical difference, it can propagate through the remaining operations and grow. Therefore, although your end results should be of good quality, you won’t get *exactly* the same number as before. Switching computers and running the same code can also result in this type of small but non-zero numerical difference. It can be hard to be 100% reproducible when dealing with computations.
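A concrete way to see this (an illustrative snippet, not code from the paper): floating-point addition is not associative, so merely reordering a sum, as happens when work is split differently across threads or GPUs, can change the result in the last bits.

```python
import numpy as np

# Floating-point addition is not associative: summing the same numbers
# in a different order generally gives a slightly different result.
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000).astype(np.float32)

s_fwd = np.sum(x)        # one summation order
s_rev = np.sum(x[::-1])  # same values, reversed order

print(s_fwd == s_rev)      # typically False
print(abs(s_fwd - s_rev))  # tiny, but non-zero
```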

Martin: I think that, in the end, provided you did everything right, you should get similar results even when switching computers. If not, your method was probably not very stable to begin with.

MRMH: And to finish off, could you tell us what you enjoy doing when you’re not in the lab?

Moritz: One of my hobbies is badminton, which I play in my spare time. I was a badminton trainer for four years, and have organized tournaments and other events. I’m new to Graz, and recently found a club here. It’s a great sport to play if you want to meet people after moving to a new city. 

Martin: As for me, unfortunately I don’t have much spare time since becoming a professor. What little free time I do have I like to spend with my family.