January 24th, 2019

What is Your AI Thinking? Part 2

Category: Data Science, Driverless AI, Explainable AI, Financial Services, Machine Learning Interpretability

Explaining AI to the Business Person

Welcome to Part 2 of our blog series, What is Your AI Thinking? In this post, we explore some of the most promising testing methods for enhancing trust in AI and machine learning models and systems. We also cover model documentation, a best practice from both a business and a regulatory standpoint.

More Techniques

Model Debugging and Sensitivity Analysis

Understanding and trust are intrinsically linked, and ideally we want to both trust and understand any deployed AI system. While explanatory techniques are mostly about increasing understanding of AI and machine learning models and systems, model debugging is about enhancing trust in those same systems by testing them in real-life and simulated scenarios. Sensitivity analysis, also known as scenario analysis or “what-if” analysis, is probably the best-known method for testing the behavior of machine learning models.

Sensitivity Analysis – How will your model behave in the next market boom or bust? What if it encounters data it never learned about during its training process? Is it easy to hack or game the AI system you’ve created? Sensitivity analysis helps provide answers to all of these questions the old-fashioned way: by testing these scenarios explicitly. In sensitivity analysis, you generate data that replicates a scenario of interest (a recession, unseen data, or a hacking attempt) and then analyze how the model behaves on that data. If your model does not pass these tests in a way you are comfortable with, send yourself or your team back to the drawing board!
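To make this concrete, here is a minimal what-if sketch in Python using scikit-learn. The credit features, the size of the simulated downturn, and the comparison of average predicted default rates are illustrative assumptions for this sketch, not part of any specific H2O workflow.

```python
# A minimal "what-if" sensitivity test: score the same model on baseline data
# and on a simulated downturn scenario, then compare average predictions.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)

# Hypothetical credit data: income, debt-to-income ratio, and a default label.
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, 5_000),
    "debt_to_income": rng.uniform(0.05, 0.6, 5_000),
})
y = (X["debt_to_income"] + rng.normal(0, 0.1, 5_000) > 0.45).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Scenario: a downturn cuts incomes by 20% and pushes debt ratios up 15%.
scenario = X.copy()
scenario["income"] *= 0.80
scenario["debt_to_income"] *= 1.15

baseline_rate = model.predict_proba(X)[:, 1].mean()
scenario_rate = model.predict_proba(scenario)[:, 1].mean()

print(f"Mean predicted default rate, baseline: {baseline_rate:.3f}")
print(f"Mean predicted default rate, downturn: {scenario_rate:.3f}")

# A shift that is implausibly large (or suspiciously absent) is a signal
# to revisit the model before deployment.
```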

Fairness and Disparate Impact Analysis

Sociological fairness in machine learning is an incredibly important but highly complex subject. In a real-world machine learning project, the hard-to-define phenomenon of unfairness can materialize in many ways and from many different sources. However, there is a practical way to discuss and handle observational fairness, or how your model predictions affect different groups of people. This is known as disparate impact analysis.

Disparate Impact Analysis – Disparate impact analysis is a fairly straightforward method that quantifies how your model’s predictions differ across sensitive demographic segments like ethnicity, gender, and disability status, or across other potentially interesting groups of observations. Disparate impact analysis is also an accepted, regulation-compliant tool for fair-lending purposes in the U.S. financial services industry. If it’s good enough for multibillion-dollar credit portfolios, it’s probably good enough for your project! Also, why risk being called out in the media for training an unfair model? And why not do the right thing and investigate how your model treats people?
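Below is a minimal sketch of one common disparate impact check in Python. The column names, the toy decisions, and the use of the four-fifths rule of thumb as a flagging threshold are illustrative assumptions, not a statement of any particular regulator's requirements.

```python
# Compare the rate of favorable model outcomes across demographic groups and
# flag any group whose rate falls well below the most favored group's rate.
import pandas as pd

scored = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "F", "M", "F"],
    "approved": [1,    0,   1,   1,   1,   0,   1,   1,   1,   0],  # model decisions
})

# Favorable-outcome (approval) rate per group.
rates = scored.groupby("gender")["approved"].mean()

# Disparate impact ratio: each group's rate relative to the most favored group.
ratios = rates / rates.max()
print(ratios)

# Flag any group below 0.8, the common "four-fifths" rule of thumb.
flagged = ratios[ratios < 0.8]
if not flagged.empty:
    print("Potential disparate impact for:", list(flagged.index))
```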

Tie It all Together with Model Documentation

Along with a strong global and local understanding of your model and data, trust in its future behavior, and assurances of fairness, model interpretability is also about minimizing financial risk. Large financial services companies have been calculating and documenting information similar to that described above with this goal in mind for years.

Model Documentation – Model documentation is required in some industries but represents a best practice for all. Documentation should include essential information about machine learning models including:

  • The creation date and creator of the model
  • The model’s intended business purpose
  • A description of the input dataset
  • A description of the algorithm(s) used for data preparation and model training
  • Final model tuning parameters
  • Model validation steps
  • Results from explanatory techniques
  • Results from disparate impact analysis
  • Results from sensitivity analysis
  • Who to contact when a model causes problems
  • Ideas about how to fix any potential problems

All of this information can be given to data scientists, internal validators, or external regulators so they can understand precisely how the model was generated and what to do if it ever causes problems.
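As one hedged illustration, here is how such a record could be captured programmatically in Python. The schema, field names, and example values are assumptions made for this sketch; real documentation standards and regulatory templates vary by organization.

```python
# A simple, machine-readable model documentation record covering the fields
# listed above, saved alongside the model artifact for validators to review.
import json
from datetime import date

model_doc = {
    "created_on": str(date.today()),
    "created_by": "jane.doe@example.com",                     # illustrative contact
    "business_purpose": "Prioritize credit applications for manual review",
    "input_data": "2018 retail credit applications, 42 features, 1.2M rows",
    "preprocessing": ["median imputation", "target encoding of categoricals"],
    "algorithm": "gradient boosting (binary classification)",
    "tuning_parameters": {"n_estimators": 300, "max_depth": 6, "learning_rate": 0.05},
    "validation": "5-fold cross-validation plus out-of-time holdout (Q4 2018)",
    "explanations": "global feature importance, partial dependence, Shapley values",
    "disparate_impact": "approval-rate ratios above 0.8 for all monitored groups",
    "sensitivity_analysis": "stable predictions under simulated 20% income shock",
    "incident_contact": "model-risk@example.com",              # illustrative contact
    "remediation_notes": "retrain on post-shock data; fall back to prior scorecard",
}

with open("model_documentation.json", "w") as f:
    json.dump(model_doc, f, indent=2)
```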

Interpretable models, explanations, model debugging, fairness techniques, and model documentation are being pursued by researchers and software vendors…today! In Part 3 of this blog series, learn how to use H2O Driverless AI to get a jump on your competition by automatically building low-risk, high-accuracy, and high-interpretability machine learning models.

This blog is the second in a 3-part series. You can catch the first part here and the third part here.
