By Mathieu Boudreau
The August 2021 MRM Highlights Reproducible Research Insights interview is with Elizabeth Cole from Stanford University. Her paper is entitled “Analysis of deep complex-valued convolutional neural networks for MRI reconstruction and phase-focused applications”. The paper was selected because of the exemplary reproducible research practices implemented by the authors, such as sharing their code on GitHub, including a requirements file listing the code dependencies, and providing details on how to train and test their deep learning models. To learn more about Elizabeth and her lab’s research, check out our recent interview with her and Shreyas Vasanawala.
To discuss this blog post, please visit our Discourse forum.
General questions
1. Why did you choose to share your code/data?
So that other people might implement and experiment with complex-valued networks for MRI reconstruction and even fields outside of MRI.
2. What is your lab or institutional policy on code/data sharing?
Our lab’s policy is totally open to code/data sharing; we even have a website called mridata.org where we upload publicly available MRI datasets. However, we have to be careful to fully anonymize the datasets. Also, if a project is being done in collaboration with a company, we need to make sure the company approves the sharing of code/data related to that project.
3. At what stage did you decide to share your code/data? Is there anything you wish you had known or done sooner?
I decided to share our code publicly once the first draft of the manuscript was finished, for the benefit of both reviewers and readers.
4. How do you think we might encourage researchers in the MRI community to contribute more open-source code along with their research papers?
I think that MRI challenges where code is a submission requirement would encourage researchers in MRI to contribute more open-source code.
Questions about the specific reproducible research habit
1. What advice do you have for people who would like to start using requirements files for their Python projects?
Make sure you know which versions of the software you are using, e.g., TensorFlow 1.14 versus TensorFlow 2.1, so that you can generate an accurate requirements file. Additionally, make sure everything in your requirements file really is necessary for a user to run your Python project, so that no time is wasted installing and debugging unneeded dependencies.
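For readers new to requirements files, the basic workflow is short: `pip freeze` records the exact versions installed in your environment, and you then trim the output to the packages your project actually imports. A minimal sketch (the package names and versions shown are hypothetical examples, not this paper’s actual dependencies):

```shell
# Capture every installed package with its exact version
pip freeze > requirements.txt

# Then edit requirements.txt down to true dependencies only, e.g.:
#   tensorflow==1.14.0
#   numpy==1.16.4

# A user can later recreate the environment with:
pip install -r requirements.txt
```

Pinning exact versions (`==`) makes the environment reproducible; the trade-off is that the file should be regenerated whenever the project’s dependencies are upgraded.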
2. What questions did you ask yourselves while you were developing the deep learning code that would eventually be shared?
Some questions I asked myself while developing this code were:
- What’s a readable and concise way of commenting our code?
- How can I make things modular?
- What should I put in the README file?
- How can I make sure things make sense to the average person instead of just to me?
3. What considerations went into ensuring that this software can be widely used and maintained in the long term?
A couple of considerations were making sure the website hosting the data is kept up to date and hosted on our own servers, and allowing users to open GitHub issues and submit pull requests.
4. Are there any other reproducible research habits that you haven’t used for this paper but might be interested in trying in the future?
I hear that using a platform called Weights & Biases (wandb.ai) is really helpful for viewing results and for dataset versioning, which could be beneficial in terms of making reproducible research a regular habit. I’m interested in trying this platform in the future.