President’s Corner #12: AI in Asilomar: Notes from the ISMRM Workshop on Machine Learning

A brisk mid-March morning, not long ago: A figure, modest of stature, nametag and white lanyard fluttering in a coastal breeze, stands beside a façade of weathered wood and stone. Through tall, many-paned windows, we see the interior of a meeting hall, until recently buzzing with lively conversation, now empty and prone to echoes. Below, meandering boardwalks traverse windswept dunes, leading to a gently curving shoreline.

Greetings from your man on the street at the ISMRM Workshop on Machine Learning, just concluded at the Asilomar Conference Grounds in Pacific Grove, California. Not long ago, capacious Merrill Hall here was packed to the rafters with colleagues from around the world, taking stock of a new era of machine learning that is poised to transform the field of magnetic resonance. Are you curious about what went on? Was the timing not quite right or the flight just a bit too far for you to join? Never fear. Here is a subjective selection of highlights from the dense two-and-a-half-day program:

Day 1

A guest arriving in Asilomar doesn’t know whether to gasp or to sigh: striking ocean vistas vie for attention with calming arboreal scenes around the historic conference grounds. It is the perfect setting in which to explore intersections between hard-won, time-tested knowledge and bold new ideas. The last time I was in Asilomar was 1997, for the ISMRM Workshop on Rapid Imaging. How times, and fields, change…

Organizing committee chair Greg Zaharchuk @Stanford opens Session 1 – an overview of machine learning methods – with a warm welcome.

The #MachineLearningISMRM workshop by the numbers: 268 attendees from 22 countries (with only ~3 months to prepare!)

The history of machine learning, as presented by Kyunghyun Cho @NYU, is a history of fits, starts, and cycles – a “backpropagation to backpropagation.” No monolithic discovery, but a series of incremental improvements and reinventions, eventually reaching critical mass.

Taesung Park @Berkeley describes a recent addition to the machine learning toolkit, CycleGANs, which use the power of self-consistency for unsupervised image-to-image translation. Apple ⇒ Orange ⇒ Apple. Horse ⇒ Zebra. MR ⇒ CT. Horseback Putin ⇒ Zebra-Centaur Putin? OK, there is always room for improvement…
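For the curious, the self-consistency trick boils down to a simple penalty: a round trip through both generators should return the original image. Here is a minimal sketch in plain NumPy (the mappings `G` and `F` are stand-ins for trained generator networks, and the full CycleGAN objective also includes adversarial terms not shown here):

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L1 cycle-consistency penalty in the style of CycleGAN.

    G maps domain X -> Y (e.g., MR -> CT); F maps Y -> X.
    Training rewards mappings whose round trips return the input:
    F(G(x)) ~ x and G(F(y)) ~ y.
    """
    forward_cycle = np.mean(np.abs(F(G(x)) - x))   # x -> y-hat -> x-hat
    backward_cycle = np.mean(np.abs(G(F(y)) - y))  # y -> x-hat -> y-hat
    return forward_cycle + backward_cycle

# Toy check: identity mappings make the round trip exact, so the loss is zero.
identity = lambda a: a
x = np.random.rand(4, 64, 64)
y = np.random.rand(4, 64, 64)
print(cycle_consistency_loss(x, y, identity, identity))  # -> 0.0
```

The key point is that no paired training examples are needed: the loss only compares each image with its own round trip, which is what makes the translation unsupervised.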

Session 2 on image reconstruction, introduced by Florian Knoll @NYU: Can all the niceties of modern image reconstruction be learned by a neural net? Absolutely. Let me count the ways…

Why not learn brand new MR pulse sequences as well? Bo Zhu @MGH shows we can, with an approach called AUTOSEQ. Next step: imaging in highly inhomogeneous environments?

In Session 3 on post-processing methods, David Zeng @Stanford presents Off-ResNet, which enables accelerated imaging by correcting for non-stationary off-resonance artifacts in long readouts. Love the name (AI nerd pun alert!), and the results.

Power Pitches for over eighty posters, previewed in one minute each in Sessions 3 and 4, reveal a multitude of applications of machine learning covering the entire MR data lifecycle, from acquisition to reconstruction to analysis to interpretation. Some presentations even combine multiple stages, e.g. reconstruction + segmentation all in one.

Day 2

Curtis Langlotz @Stanford delivers the Keynote of Session 5 on clinical application of machine learning. Curtis on the future of radiology: AI won’t replace radiologists. Radiologists who use AI will replace radiologists who don’t.

Michael Muelly @Google, speaking on the proper preparation and labeling of data for deep learning, quotes John von Neumann on the dangers of overfitting: “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”
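Von Neumann’s quip is easy to reproduce at home: give a model enough parameters and it will chase the noise rather than the signal. A hypothetical toy example (my own, not from the talk) fitting ten noisy samples of a sine wave with polynomials:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.size)

for degree in (3, 9):
    coeffs = np.polyfit(x, y, degree)                    # least-squares fit
    train_err = np.mean((np.polyval(coeffs, x) - y) ** 2)
    # The degree-9 polynomial can interpolate all 10 points, driving the
    # training error toward zero while oscillating wildly between them --
    # the numerical version of the wiggling trunk.
    print(f"degree {degree}: training MSE = {train_err:.2e}")
```

Near-zero training error from the high-degree fit tells you nothing about performance on new data, which is exactly why careful data preparation, labeling, and held-out validation matter.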

Paul Chang @University of Chicago speaks from the heart to MR scientists interested in machine learning. “I hate you all!” says Paul. (As a beleaguered informatics/IT person, that is.) He says, “Hitch your wagon to problems we care about in hospital IT.” We hate you back, Paul, and look forward to working together.

Session 6 on neuro applications: brain tumor, stroke, Parkinson’s disease, fetal alcohol syndrome, dementia, and brain hemorrhage. Quite a litany of debilitating diseases. Tell me – can a machine cry? Perhaps not, but it can certainly predict…

Session 7: Laptops out, for the hands-on machine learning tutorial at #MachineLearningISMRM. Look, Ma, ISMRM is on AWS (ably guided by Peter Chang @UCSF).

Have you ever secretly wished someone could just explain why and how Deep Learning works? Naftali Tishby @Hebrew University does just that, in the evening keynote, combining information theory with statistical physics. He shows that there are two distinct phases of neural net learning: a drift phase of learning relevant features, then a slower/harder diffusion phase of FORGETTING irrelevant features. Yes, it would seem that the hardest part of learning is forgetting!
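For readers who want the formalism behind the talk: Tishby’s information-bottleneck principle (as stated in his published work; the notation below is the standard one, not taken from the slides) seeks a compressed representation $T$ of the input $X$ that keeps only what is relevant for predicting the label $Y$:

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

In this picture, the drift phase corresponds to increasing $I(T;Y)$ (fitting the labels), while the slower diffusion phase corresponds to decreasing $I(X;T)$ (compressing away, i.e. forgetting, input details that don’t help prediction), with $\beta$ setting the trade-off between the two.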

Day 3

Session 8: Artificial intelligence is not just a brain thing. Non-neuro areas surveyed include prostate, breast, heart, and blood vessels.

Lectures on Team Science (Rod Pettigrew @Texas A&M and Houston Methodist) and Regulatory Strategies (Berkman Sahiner @FDA) round out Session 9 on broader issues in machine learning for MRI.

A few headlines and collective conclusions from the closing discussion:

  • Wanted: More data! (What would it take to create MR-Net in the image of ImageNet?)
  • Wanted: More labels (and tools to make labeling easy)!
  • Wanted: Better metrics (of image quality, diagnostic efficacy, etc.) and confidence intervals!
  • An admonition: When selecting applications of AI/ML, pay attention to clinical pain points!
  • A challenge to challenge ourselves: Look for the MR Reconstruction Challenge Redux, or Grand Challenges for AI in MR, in times and conferences to come…

Leading the way: #MachineLearningISMRM workshop poster and presentation award winners. Best presentation: First place Okai Addy @HeartVista (taking home a brand new GPU), second place Bo Zhu, honorable mention David Zeng and Anne Nielsen. Best poster: First place Jonathan Tamir @Berkeley, honorable mention Ken Chang and Chris Sandino.

Sincere thanks and enthusiastic kudos to the workshop organizing committee, and its indefatigable chair, Greg, for a rich, engaging, and thought-provoking workshop – the first-ever late-breaking ISMRM workshop (organized with minimal lead time on a topic of driving interest and importance). Watch ISMRM be nimble in a changing world!

Farewell, #MachineLearningISMRM and Asilomar. To be continued in Washington, DC this fall: look for announcements at

And don’t miss the latest advances at our Annual Meeting in June. #AI-in-Paris. See you there.

Next time: The Mentor Effect