
Interpretability: The missing link between machine learning, healthcare, and the FDA?


By H2O.ai Team | August 23, 2018


Recent advances enable practitioners to break open machine learning’s “black box”.

From machine learning algorithms guiding analytical tests in drug manufacture, to predictive models recommending courses of treatment, to sophisticated software that can read images better than doctors, machine learning has promised a new world of healthcare where algorithms can assist, or even outperform, professionals in consistency and accuracy, saving money and avoiding potentially life-threatening mistakes. But what if your doctor told you that you were sick but could not tell you why? Imagine a hospital that admitted and discharged patients but was unable to provide specific justification for those decisions. For decades, this was a roadblock to the adoption of machine learning algorithms in healthcare: they could make data-driven decisions that helped practitioners, payers, and patients, but they couldn’t tell users why those decisions were made.

Today, recent advances in machine learning research and implementation may have cracked open the black box of algorithmic decision making. A flurry of research into interpretation, or “the ability to explain or to present in understandable terms to a human,” has resulted in a growing body of credible literature and tools for accurate models with interpretable inner workings, accountability and fairness of algorithmic decision-making, and post-hoc explanation of complex model predictions. Can this research really be applied to healthcare, and if so, where would it be most immediately impactful? Three suggestions and an example use case are put forward below.
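To make the idea concrete, the short sketch below (not drawn from the research cited above) shows two common post-hoc views into an otherwise opaque model: permutation importance for a global ranking of inputs, and partial dependence for the average effect of a single input. The data, model, and feature indices are synthetic placeholders.

```python
# Sketch: two global, post-hoc views into an otherwise opaque model.
# Data and features are synthetic placeholders, not a healthcare dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence, permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Global view 1: which inputs hurt held-out performance most when shuffled?
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for rank, i in enumerate(imp.importances_mean.argsort()[::-1], start=1):
    print(f"{rank}. feature_{i}: mean importance {imp.importances_mean[i]:.3f}")

# Global view 2: on average, how does the most important input move the output?
top = int(imp.importances_mean.argmax())
pd_result = partial_dependence(model, X_test, features=[top])
print("partial dependence of feature", top, ":", pd_result["average"].round(3))
```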

Three Hurdles to Black Box Algorithms

FDA and drug development

The FDA has notoriously stringent requirements for the approval of new drugs. This could pose a challenge to drug companies experimenting with machine learning to enforce quality control and even to analyze test results to better detect the presence and proper concentrations of drug compounds. The FDA requires full transparency and replicability for all analytical tests involved in the manufacture of new drugs. In the past, this has involved providing lists of formulas and methods for analyzing test results (e.g., chromatography tests). But questions remain about how the FDA would treat a new drug application (NDA) that relied on a complex black box machine learning model to maintain quality in the manufacturing process. Interpretable machine learning techniques could help address some of these questions.

Medical devices

This year, for the first time, the FDA approved an artificial intelligence device. This marks a major milestone for medical devices that use proprietary black box algorithms to diagnose diseases from images. The device was approved through the FDA’s De Novo premarket review pathway, which provides a review process for novel devices that represent a low to moderate risk; that risk classification is key to a successful De Novo review. But the FDA has yet to approve a device determined to pose a high potential risk to patient outcomes, for example, a diagnostic algorithm where a false positive could lead to an invasive and risky procedure. Such an algorithm would likely require extra controls, and with the latest model interpretability techniques, it may be possible to provide those additional checks.

Another possibility for bringing machine learning into medical devices came about in late 2016, when Congress passed the 21st Century Cures Act. The act excludes what is commonly referred to as clinical decision support (CDS) software from FDA purview under certain conditions; namely, that the healthcare provider using the software can independently review the basis for the software’s recommendation. In December 2017, the FDA published guidance stating that “the sources supporting the recommendation or underlying the rationale for the recommendation should be identified and easily accessible to the intended user, understandable by the intended user (e.g., data points whose meaning is well understood by the intended user) …” Traditional machine learning software would not meet this criterion because of the black box nature of most machine learning models. However, with recent advances in interpretability, it is possible to display explanations for every decision made by a machine learning model, potentially enabling a user to verify the soundness of the rationale behind the automated recommendation.
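As a rough illustration (not the CDS software or guidance text itself), the sketch below turns per-prediction SHAP contributions into ranked “reason codes” that an intended user could review before accepting a recommendation. The model, data, and feature names are hypothetical placeholders.

```python
# Sketch: per-decision "reason codes" built from SHAP contributions.
# Model, data, and feature names are illustrative placeholders only.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["age", "prior_admissions", "a1c", "med_count", "length_of_stay"]
X, y = make_classification(n_samples=1500, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)

def reason_codes(row, top_k=3):
    """Rank the features pushing this single prediction up or down."""
    contribs = explainer.shap_values(row.reshape(1, -1))[0]
    ranked = sorted(zip(feature_names, contribs), key=lambda t: abs(t[1]), reverse=True)
    return [f"{name}: {value:+.3f}" for name, value in ranked[:top_k]]

# A reviewer could inspect the basis for one recommendation like this:
print("predicted risk:", round(float(model.predict_proba(X[:1])[0, 1]), 3))
print("top reasons:", reason_codes(X[0]))
```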

Risk-based guidance

Much attention has been given to hospital readmissions since the passage of the Affordable Care Act and the beginning of the Hospital Readmissions Reduction Program. Predictive models developed with machine learning have been shown to be successful at predicting avoidable hospital readmissions, and some health systems have already adopted machine learning based models successfully. At the same time, interest in using machine learning for automated fraud and waste detection on incoming medical claims has been growing among government entities and private insurance companies. Now it should be possible for these models to explain their decisions to practitioners, payers, and patients, allowing users to investigate the actual reasons behind automated medical decision making and to determine whether an individual decision was reasonable or could be improved.

Toward the Application of Interpretable Machine Learning in Healthcare

Because further deliberation about the ethical, medical, and economic implications of interpretable machine learning in healthcare is certainly necessary, an example risk-based guidance use case is provided here for the sake of furthering such discussions. The example use case is similar to the methods that organizations already use for predicting 30-day readmissions, but instead of an older linear modeling approach, it uses a nonlinear, “white box” machine learning approach to achieve about a 1% increase in readmission prediction accuracy. Explanatory techniques are then used to describe both the internal mechanisms of the model and every prediction the model makes.
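For flavor, here is a minimal sketch of the kind of approach just described, assuming synthetic readmission-like data: a gradient boosting model constrained to be monotonic in each input is compared against a linear baseline. This is not the actual example notebook linked below; the feature names and any accuracy gap are illustrative only.

```python
# Sketch: linear baseline vs. a monotonically constrained ("white box") GBM
# on synthetic, readmission-like data. Features and effects are hypothetical.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
prior_admissions = rng.poisson(1.0, n)   # hypothetical inputs
med_count = rng.poisson(5.0, n)
age = rng.normal(65, 10, n)
X = np.column_stack([prior_admissions, med_count, age])
true_logit = 0.8 * prior_admissions + 0.1 * med_count + 0.02 * (age - 65) - 1.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Older, linear approach mentioned above.
lin = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Nonlinear but constrained: predicted risk may only rise with each input,
# which keeps the learned relationships directionally interpretable.
gbm = HistGradientBoostingClassifier(monotonic_cst=[1, 1, 1], random_state=0).fit(X_tr, y_tr)

for name, m in [("logistic regression", lin), ("monotonic GBM", gbm)]:
    print(f"{name}: test AUC = {roc_auc_score(y_te, m.predict_proba(X_te)[:, 1]):.3f}")
```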

It is left to practitioners and domain experts to determine whether the example techniques truly surpass more established methods by any number of criteria, e.g. ability to handle heterogeneous data, accuracy, or interpretability. The only explicit argument made here is: when people’s lives are being affected by mathematical models, it does seem prudent to investigate and evaluate potentially impactful new modeling and analysis techniques.

The open source example use case is freely available here:

About the Authors

Andrew Langsner is a Co-founder and a Managing Partner at Sphaeric.ai. He is an experienced problem solver with a passion for data-driven decision making. Andrew is always exploring ways to make advanced analytics valuable to businesses and organizations. He holds an MPP from Georgetown University. Continue the conversation online with Andrew on LinkedIn.

Patrick Hall is a senior director for data science products at H2O.ai, where he focuses mainly on model interpretability and model management. Patrick is also currently an adjunct professor in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning. Patrick studied math and computational chemistry before graduating from the Institute for Advanced Analytics at NCSU. Continue the conversation online with him on LinkedIn, Twitter, or Quora.


H2O.ai Team

At H2O.ai, democratizing AI isn’t just an idea. It’s a movement. And that means it requires action. We started out as a group of like-minded individuals in the open source community, collectively driven by the idea that there should be freedom around the creation and use of AI.

Today we have evolved into a global company built by people from a variety of different backgrounds and skill sets, all driven to be part of something greater than ourselves. Our partnerships now extend beyond the open-source community to include business customers, academia, and non-profit organizations.