President’s Corner #9: Let’s Talk a Little More About AI


I concluded last week’s post by enumerating two distinguishing features of our Society’s relationship with images, in the context of the onrushing tsunami of reinvention that modern AI promises to be:

  1. We understand, control, and create the means by which image data are generated, and
  2. We are curators of the underlying biophysical information content of those data.

How might this expertise give us some creative leverage in an area that otherwise risks spinning rapidly out of our control? Here is one way. Everyone and her brother nowadays has been trying to modify learning algorithms to get smarter and smarter about images. Medical images in particular are taken as a given – as immutable patterns in a mosaic of clinical information available to be mined. But surely we know that images are anything but given. Isn’t changing the content of images the very core of our discipline?

Instead of asking what machines might be able to make of our images, why aren’t we asking how we can change our images in order for machines to make more of them? How might we change our image acquisition strategies, or even our scanners themselves, if we knew that the data would be fed into a modern neural network? Would we change our k-space trajectories? Our calibrations? Our image reconstructions? Would we even bother with images at all?

Let me trace out, for a moment, one speculative path that I think would represent a truly innovative approach to AI in MR. Our community has already led the way in embracing deep learning for image reconstruction. Rather than treating the complex nonlinear reconstructions so common in our field today as traditional optimization problems to be solved iteratively for each new dataset, some in our Society have noted that the transformation from raw data to images may be learned, then executed parsimoniously in future instances using a comparatively small number of neural network layers. Such transformations are a particularly good fit for AI, since, unlike for classification problems, training may be accomplished with comparatively few training data sets, with each image voxel serving as a distinct element of ground truth. Moreover, it has been shown that learned reconstructions can work still more robustly, and with still fewer training data sets, when the networks are constructed with knowledge of the MR forward problem, i.e., when the transformation is informed by known MR physics rather than operating purely blind to the details of data acquisition. (Score one for MR physicists.)
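
To make the flavor of such an approach concrete, here is a minimal sketch, written for this post rather than drawn from any particular published method. The class name `UnrolledRecon`, the single-coil Cartesian setting, the tiny CNN prior, and the per-iteration data-consistency weights are all simplifying assumptions; the point is only the structure, in which the known MR forward model (FFT plus sampling mask) is built into the network and only the image prior is learned.

```python
import torch
import torch.nn as nn


class UnrolledRecon(nn.Module):
    """Tiny unrolled reconstruction: alternate a learned prior with data consistency."""

    def __init__(self, n_iters: int = 5):
        super().__init__()
        # One small CNN "denoiser" per unrolled iteration: this is the learned prior.
        self.denoisers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 2, 3, padding=1),
            )
            for _ in range(n_iters)
        )
        # Learned data-consistency weights stand in for hand-tuned regularization thresholds.
        self.lam = nn.Parameter(0.5 * torch.ones(n_iters))

    def forward(self, kspace: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # kspace: complex tensor (batch, H, W); mask: real tensor (batch, H, W), 1 where sampled.
        x = torch.fft.ifft2(kspace)  # zero-filled starting estimate
        for denoiser, lam in zip(self.denoisers, self.lam):
            # Learned prior step, applied to the real/imaginary channels.
            xr = torch.stack([x.real, x.imag], dim=1)
            xr = xr - denoiser(xr)
            x = torch.complex(xr[:, 0], xr[:, 1])
            # Data-consistency step using the known MR forward model (FFT + sampling mask).
            k = torch.fft.fft2(x)
            k = k * (1 - lam * mask) + lam * mask * kspace
            x = torch.fft.ifft2(k)
        return x
```

In practice one would train such a network end to end on pairs of undersampled k-space data and fully sampled reference images; what matters for the argument above is that the physics lives in the fixed FFT and mask operations, so the network has comparatively little left to learn.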

The advantages of using AI for image reconstruction, on the other hand, are legion: regularization parameters may all be learned, rather than being set to arbitrary thresholds; we can learn from learning systems, in the sense that inspection of learned network parameters can give insight into new classes of useful operations that we might not otherwise have discovered; more complex transformations can result in more natural image appearance, which is both familiar and pleasing to radiologists; robust reconstructions of undersampled data can enable higher imaging speed; and, speaking of speed, deep learning approaches can enable truly dramatic improvements in reconstruction speed, with all the hard work occurring in the offline training stages, rather than in real-time execution of the learned transformations. I see the advent of learned image reconstructions as a transition from mimicking our eyes to emulating our brains. Once we’ve learned, in infancy, what to make of the flood of sensory data washing over us, our brains process new streams of sensation on the fly, without iterating, and without looking back. To date, for reasons of computational tractability, learning approaches have been applied only to relatively simple image data streams, but one can easily imagine moving on to the multidimensional, multiparametric data that are the hallmark of modern MR.

If you are with me so far, here is the next logical question: if neural networks can robustly reconstruct undersampled image data, what sampling patterns should we use to yield the best downstream results? As a field, we have already gone through this conceptual exercise with previous algorithms for reconstructing undersampled data, such as parallel imaging or compressed sensing.  The advent of MR Fingerprinting has opened up the design space for acquisition still further. In an era of learned fingerprints, how much time should we spend on encoding each bit of information, whether macrostructural or microstructural or functional? Not only can AI change the balance here, but machine learning could also provide valuable tools for solving the underlying optimization problem.
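
Purely as an illustration of how that question could be handed to a machine, here is a hypothetical sketch in which the sampling mask itself becomes a set of learnable parameters, trained jointly with whatever reconstruction network follows it. The class name `LearnableLineMask`, the per-line logits, and the `budget` and `temperature` knobs are invented for this example, and the sigmoid relaxation is only one of several ways to handle the underlying discrete choice.

```python
import torch
import torch.nn as nn


class LearnableLineMask(nn.Module):
    """Treat which phase-encode lines to acquire as a learnable part of the model."""

    def __init__(self, n_lines: int, budget: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_lines))  # one logit per k-space line
        self.budget = budget  # number of lines the acquisition can afford

    def forward(self, kspace: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
        # kspace: (batch, n_lines, readout). Soft, differentiable mask during training.
        probs = torch.sigmoid(self.logits / temperature)
        if not self.training:
            # At scan time, keep only the `budget` most valuable lines (hard mask).
            keep = probs.topk(self.budget).indices
            probs = torch.zeros_like(probs).scatter(0, keep, 1.0)
        # Broadcast the per-line mask across batch and readout dimensions.
        return kspace * probs[None, :, None]
```

The awkwardness, of course, is that the real decision is discrete (a line is either acquired or it is not), which is why the sketch relaxes it during training and thresholds at scan time. Whatever the relaxation, the larger point stands: the acquisition itself becomes part of the optimization.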

But why stop there? If we are already committed to messing around with our acquisitions on this deep a level, we cannot escape the question of whether we might be able to do away with images altogether. If a machine of our designing acquires image data, another machine of our designing reconstructs those data, and yet another machine of our designing will soon be set loose to interpret the results, then do we really need the middle-man? Now, I do not mean to imply that images will have no more role in medical imaging in the future. Part of the essential value of images lies in the fact that they represent spatially localized information. Spatial maps are sensible means of representing the spatial organization of information, and our human brains also happen to be finely tuned to respond to such maps. Having images available for radiologists to check may never go out of style. However, spatial information will remain, even if we remove images themselves as an intermediary. Indeed, according to one admittedly cynical perspective, the process of image reconstruction may be viewed as an efficient generator of image artifacts. In reconstruction algorithms, after all, raw data, not output images, are treated as truth.

In a new world order with images as a byproduct rather than a goal, raw data could serve as the new archival format. With new parameter mapping approaches on the rise, moreover, those data could be made reproducibly quantitative. In such a setting, one could imagine performing clinical studies over millions of subjects, rather than the tens of subjects that typically appear in our imaging journals today. One could imagine setting AI loose to look for patterns in large populations – patterns that a single human brain, however expert at contemplating individual subjects, would be hard pressed to identify across vast cohorts of subjects. Meanwhile, those of us designing data acquisition strategies might begin to confront the interesting question of how much effort we should devote to spatial localization as opposed to tissue characterization.  And those of us interested in tissue biophysics would continue to be faced with the arguably even more tantalizing question of what uniquely new information about tissue we can generate to be mined.

These scenarios, of course, represent only one possible future of AI in imaging. Let us return now to the question of what unique skills and perspectives we in the ISMRM bring to AI. For all my enthusiasm about unconventional approaches, I am not suggesting that we cease all work in image classification and automated diagnosis. We are creative, and we can learn. We still should be driving the bus of computer-mediated interpretation, rather than letting it roll over us. And there are so many other areas of opportunity for the application of AI-related tools: management of imaging workflow, selection of appropriate imaging modalities and protocols for particular patients, automated image (or perhaps data) quality assessment, etc. Nevertheless, I believe we should give careful thought to the areas in which we are already equipped to lead.

Evidence from our Annual Meeting programs (see the table in my previous post) suggests that we, as a field, are responding rapidly to the AI revolution, quickly gathering in new tools and new insights as they arrive. Of course we are. That is what we do. In fact, rather than worrying whether the world is changing even faster than we can adapt, perhaps we should focus on a better measure than sheer speed. Perhaps we should consider not our pace, but our impact.

Modern imaging science IS information science. So let’s be true information scientists. Let us be the scientists who create new forms of information. In a world increasingly dominated by information, what more valuable contribution can we make?

Next time: The MR Value Proposition

3 Comments
Phil
7 years ago

This has been a great set of pieces, Dan. In my own experience, working on large clinical trial data, we’re constantly faced with the great chasm between digital representation and clinical interpretation. For instance, our automated tools may generate a binary lesion mask, but one almost needs a radiologist to interpret this mask. Simple volumetric properties have some value, but they are a poor substitute for a conventional clinical assessment. Whilst ISMRM is unlikely to make fundamental advances in computer vision (that can be left to Google), I hope the creativity of the community will push AI research towards richer…

Dan
7 years ago
Reply to  Phil

Thanks for your thoughtful comments, Phil. I love the idea of pushing “beyond the voxel” and “beyond the matrix”!

Zahra
7 years ago

Very interesting couple of pieces, and thank you! I especially appreciate you ending on the “impact” note. While the number of abstracts submitted to recent meetings has increased, perhaps the quality and impact can receive more attention? We should think about advancing our knowledge of AI in this field. To take a page from Google’s or Amazon’s book, we must be application-driven; being scientists, we like to have the fun of exploring the unexplored, but we need to find translational topics with immediate application (in other words, be strategic), and the way to do that is to formulate an interdisciplinary…