Q&A with Frank Ong and Miki Lustig


By Nikola Stikov

Revolving doors: Frank (now at Stanford) wearing Berkeley regalia, and Miki (now at Berkeley) wearing Stanford regalia.

This MRM Highlights Pick interview is with Frank Ong and Miki Lustig, researchers at Stanford and Berkeley, respectively. Their paper is entitled “Extreme MRI: Large‐scale volumetric dynamic imaging from continuous non‐gated acquisitions” and it was chosen as this month’s Reproducible Research pick because, in addition to sharing their code, the authors also shared a demo of their work in a Google Colab notebook.


MRMH: So, Frank, how did you get involved with MRI, and how did you end up working with Miki?

Frank: I started doing work in MRI when I was an undergrad at Berkeley. Miki was teaching a digital signal processing (DSP) class, where he would introduce a lot of examples using MRI, and I found the close relationship between MRI machines and the Fourier transform really amazing. I reached out to Miki in 2011 and asked if I could do some research with him, and that decision basically led to my career. Currently, I’m at Stanford working with Shreyas Vasanawala and John Pauly. 

Miki: I actually remember things a little differently. I well remember teaching that undergraduate DSP class, and also that there was this one person in that class who was just nailing every exam and every assignment, and that was Frank. At some point I decided to write him an email to ask if he was interested in doing research. He expressed an interest, and so we set up a meeting with Martin Uecker, who was working with me at the time. We discussed possibly doing combined flow and Bloch simulations. At the next meeting, Frank actually showed up with the simulations for the entire project! One thing led to another, and he ended up having a paper by the end of his undergraduate studies. So obviously, I was pretty keen to have him as a graduate student. Frank has been an amazing graduate student, with his PhD work resulting in this paper. Frank is now getting ready to go on the job market — he would make an incredible faculty member. So, if any readers are on the recruiting committee at their institution… they should be paying careful attention!

MRMH: Miki, how did you get into MRI?

Miki: Well, when I came to Stanford, I didn’t know what I wanted to do, since electrical engineering is such a vast area. I ended up taking a class with Dwight Nishimura and learned about MRI, and my reaction was: this technology is really unbelievable! Around that time, I also started working as a TA for John Pauly, and that led to me working with him in MRI. Every time I think about MRI, or teach it, I find the topic incredible. I cannot believe nature has given us this! It’s such an exciting and broad topic. And within this field you can do so many things, such as signal processing, computation, hardware, biology, and physics, all the while staying within a single community. I think that’s what is really remarkable about MRI.

To do extreme research, you need extreme(ly good) collaborators!

MRMH: Moving on to “Extreme MRI”, what’s extreme about it, Frank?

Frank: It’s extreme in the sense that we’re trying to reconstruct a large-scale dynamic image dataset while pushing reconstruction resolution to the limit. Just to give you a sense of the problem at hand: the image datasets I’m talking about would be on the order of 100 gigabytes if we didn’t do anything special with them. So, this work allows us to achieve a compression factor on the order of 100, albeit using a lossy compression. What’s extreme is both the reconstruction challenge and also the memory and computation requirements. We wanted to see whether we could actually reconstruct an ungated dynamic acquisition at these high resolutions, because the underlying dynamics are not periodic or repeatable.
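The paper achieves this compression with a multi-scale low-rank model. As a rough intuition only (not the paper's actual algorithm), the single-scale idea can be sketched with a truncated SVD in NumPy: when a dynamic series is highly redundant across time, storing a few spatial and temporal factors instead of every frame yields compression factors near 100x. All sizes below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dynamic series: n_voxels per frame, n_frames over time.
# We synthesize it to be (nearly) rank-2, mimicking temporal redundancy.
n_voxels, n_frames, rank = 5000, 200, 2
spatial = rng.standard_normal((n_voxels, rank))
temporal = rng.standard_normal((rank, n_frames))
series = spatial @ temporal + 1e-3 * rng.standard_normal((n_voxels, n_frames))

# Lossy compression: keep only the top `rank` SVD components.
u, s, vt = np.linalg.svd(series, full_matrices=False)
u_r, s_r, vt_r = u[:, :rank], s[:rank], vt[:rank]

raw_size = series.size                               # 1,000,000 numbers
compressed_size = u_r.size + s_r.size + vt_r.size    # ~10,400 numbers
factor = raw_size / compressed_size
print(f"compression factor: {factor:.1f}x")

# Reconstruction from the factors stays close to the original.
recon = (u_r * s_r) @ vt_r
rel_err = np.linalg.norm(recon - series) / np.linalg.norm(series)
print(f"relative error: {rel_err:.2e}")
```

The actual method represents the image series at multiple spatial scales with block-wise low-rank factors, which captures both smooth global dynamics and localized motion far better than this single global factorization.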

MRMH: Miki, care to add to that?

Miki: I think Frank is understating what’s extreme about this work. I’ve always been interested in the whole idea of compressive sensing to speed up acquisitions. But the issue here is with dynamic imaging. While you can do 2D dynamic MR imaging with no problem, 3D has always been extremely difficult. And when you start doing constrained reconstruction with 3D dynamics, then you’re limited not just by the algorithms or models, but also by the ability to actually compute and store the data on computers. So, there’s a lot of thought that needs to go into doing that. And Frank came up with this amazing model that makes it possible not only to represent the data compactly, but also to store it compactly. This model lets you reconstruct really large datasets with amazing temporal dynamics. When he showed the whole body of a baby imaged using DCE-MRI, I was blown away.

MRMH: So, Miki has a long history of sharing code and making it easy to run, starting with his compressed sensing work. What did you decide to share for this work? And why did you decide to share it in the way you did?

Frank: Yeah, there’s definitely a culture in his lab to share code and make sure almost everything is reproducible, which is partly a result of Martin Uecker’s presence in the lab. Therefore, it was perfectly natural for me to decide to upload all my code and almost everything that I could share. I used Google Colab to host these demo Jupyter notebooks, and the reason I chose this service is because they provide GPUs for free (up to an allocated computation time limit). For the dataset, I used Zenodo, which now allows you to upload unlimited amounts of data.  

Former and current group members meeting up at the Montreal ISMRM.

MRMH: Miki, why is open science an important part of your lab culture?

Miki: Because, in my view, it’s a key component to producing reproducible research. You can claim something in your work and show it in a paper, but if nobody can reproduce what you’re doing, then what’s the point? I think that ensuring the proliferation of the method you’re developing is really important. Even if you have a good method, if it’s too complex to do or you’re expecting other people to reimplement it from scratch, that will discourage others from using your work. In short, giving them the ability to do exactly what you did in your paper is critical for the proliferation of your work. And so, in the process of doing this paper, Frank also created this Python package called SigPy, and I am very happy to see it being used by a lot of people and institutions. 

MRMH: Are there any clinics doing “Extreme MRI” at the moment?

Frank: A less extreme version is actually currently in operation at the Stanford Children’s Hospital, where I know it is being used for some of the DCE scans. It’s at a lower resolution than what we’ve shown in our paper, and the reconstruction time is therefore on the order of 15 minutes with four GPUs. But there’s still a long way to go before we can deploy in a clinical setting exactly what I’ve described in the paper, because the reconstruction would take hours.

MRMH: To finish off, I wanted to touch on one aspect of Miki’s PhD, which was on compressed sensing. At some point during our PhDs at Stanford, we were at a cafe with some other folks and we were talking about how many papers we’d written so far. At the time you were 3 or 4 years into your PhD and you still didn’t have a single paper out. Then you got the compressed sensing paper published, and it’s now the most cited paper in MRM. Why did you wait so long to publish? What made you wait, and how did the whole compressed sensing story come together?

Miki: Well, I started the work around 2003 or 2004, working with Dave Donoho, Jin-Hyung Lee and John Pauly. And after a short while, we actually had some pretty good results. Yet, even though we’d been presenting this work at conferences, I found it pretty hard to narrow down the narrative of the paper. The compressed sensing theory was developed first by mathematicians, and so I think it simply took us some time to work out how to explain it in a way that would allow any MRI layperson to read the paper and grasp what’s actually going on. And I think that has been our main contribution: explaining this work really well and also providing the code to reproduce it. All that took a considerable amount of time, and a lot of iterations.
