

What is Your AI Thinking? Part 2


By H2O.ai Team | January 24, 2019


Explaining AI to the Business Person

Welcome to Part 2 of our blog series, What is Your AI Thinking? In this installment we explore some of the most promising testing methods for enhancing trust in AI and machine learning models and systems, and we cover model documentation as a best practice from both a business and a regulatory standpoint.

More Techniques

Model Debugging and Sensitivity Analysis

Understanding and trust are intrinsically linked, and ideally we want to both trust and understand any deployed AI system. While explanatory techniques are mostly about increasing understanding of AI and machine learning models and systems, model debugging is about enhancing trust in those same systems by testing them in real-life and simulated scenarios. Sensitivity analysis, also known as scenario analysis or “what-if” analysis, is probably the best-known method for testing the behavior of machine learning models.

Sensitivity Analysis – How will your model behave in the next market boom or bust? What if it encounters data it never saw during training? Is it easy to hack or game the AI system you’ve created? Sensitivity analysis helps answer all of these questions the old-fashioned way: by testing the scenarios explicitly. In sensitivity analysis, you generate data that replicates a scenario of interest (a recession, unseen data, or a hacking attempt) and then analyze how the model behaves on that data. If your model is not passing these tests in a way you are comfortable with, send yourself or your team back to the drawing board!
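
As a concrete illustration, below is a minimal sensitivity-analysis sketch in Python. It assumes a fitted scikit-learn-style model object, a held-out pandas DataFrame, and a hypothetical numeric feature named "income"; the names are illustrative, not part of any specific product. The idea is simply to shock the feature of interest and measure how far the predictions drift from the baseline.

```python
# A minimal "what-if" sensitivity analysis sketch.
# Assumed, illustrative names: a fitted model with .predict(), a DataFrame X,
# and a numeric feature such as "income".
import numpy as np
import pandas as pd

def sensitivity_report(model, X, feature, shocks=(-0.3, -0.1, 0.1, 0.3)):
    """Shift one numeric feature by relative amounts and report prediction drift."""
    baseline = model.predict(X)
    rows = []
    for shock in shocks:
        X_shocked = X.copy()
        X_shocked[feature] = X_shocked[feature] * (1.0 + shock)
        preds = model.predict(X_shocked)
        rows.append({
            "shock": shock,
            "mean_change": float(np.mean(preds - baseline)),
            "max_abs_change": float(np.max(np.abs(preds - baseline))),
        })
    return pd.DataFrame(rows)

# Example usage (hypothetical recession scenario on an "income" feature):
# print(sensitivity_report(trained_model, X_test, "income"))
```

Large or asymmetric swings in a report like this are a cue to dig deeper before deployment.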

Fairness and Disparate Impact Analysis

Sociological fairness in machine learning is an incredibly important but highly complex subject. In a real-world machine learning project, the hard-to-define phenomenon of unfairness can materialize in many ways and from many different sources. However, there is a practical way to discuss and handle observational fairness, or how your model’s predictions affect different groups of people. This is known as disparate impact analysis.

Disparate Impact Analysis – Disparate impact analysis is a fairly straightforward method that compares your model’s predictions across sensitive demographic segments such as ethnicity, gender, or disability status, or across other potentially interesting groups of observations. Disparate impact analysis is also an accepted, regulation-compliant tool for fair-lending purposes in the U.S. financial services industry. If it’s good enough for multibillion-dollar credit portfolios, it’s probably good enough for your project! Also, why risk being called out in the media for training an unfair model? And why not do the right thing and investigate how your model treats people?
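
As an illustration, here is a minimal disparate impact sketch in Python. It assumes a scored pandas DataFrame with a hypothetical protected-attribute column ("gender") and a binary favorable-outcome column ("approved"); these names are illustrative only. It computes each group's favorable-outcome rate relative to a reference group and flags ratios below 0.8, a common rule of thumb known as the "four-fifths rule."

```python
# A minimal disparate impact analysis sketch.
# Assumed, illustrative columns: "gender" (protected attribute) and
# "approved" (1 = favorable outcome, 0 = unfavorable).
import pandas as pd

def adverse_impact_ratios(df, group_col, outcome_col, reference_group):
    """Compare each group's favorable-outcome rate to a reference group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    report = pd.DataFrame({
        "favorable_rate": rates,
        "adverse_impact_ratio": rates / rates[reference_group],
    })
    # The "four-fifths rule" of thumb flags ratios below 0.8 for review.
    report["flagged"] = report["adverse_impact_ratio"] < 0.8
    return report

# Example usage on hypothetical scored lending data:
# print(adverse_impact_ratios(scored_df, "gender", "approved", reference_group="male"))
```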

Tie It all Together with Model Documentation

Along with a strong global and local understanding of your model and data, trust in its future behavior, and assurances of fairness, model interpretability is also about minimizing financial risk. Large financial services companies have been calculating and documenting information similar to that described above with this goal in mind for years.

Model Documentation – Model documentation is required in some industries and represents a best practice for all. Documentation should capture essential information about a machine learning model (a minimal template sketch follows the list), including:

  • The creation date and creator of the model
  • The model’s intended business purpose
  • A description of the input dataset
  • A description of the algorithm(s) used for data preparation and model training
  • Final model tuning parameters
  • Model validation steps
  • Results from explanatory techniques
  • Results from disparate impact analysis
  • Results from sensitivity analysis
  • Who to contact when a model causes problems
  • Ideas about how to fix any potential problems
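
To make the checklist concrete, here is a minimal sketch of a machine-readable documentation record in Python. The field names and values are hypothetical, not a prescribed standard; the point is that each checklist item above maps to a field that can be serialized and stored alongside the model artifact.

```python
# A minimal model documentation sketch; all field names and values are illustrative.
import json
from datetime import date

model_doc = {
    "created": str(date.today()),
    "creator": "jane.doe@example.com",          # hypothetical model owner
    "business_purpose": "Estimate probability of default for consumer loans",
    "input_data": {"source": "loans_2018.csv", "rows": 250000, "columns": 42},
    "preprocessing": ["median imputation", "one-hot encoding"],
    "training_algorithm": "gradient boosting machine",
    "tuning_parameters": {"max_depth": 6, "learning_rate": 0.05, "n_trees": 500},
    "validation": "5-fold cross-validation; metrics stored in reports/validation.csv",
    "explanation_results": "reports/global_and_local_explanations.pdf",
    "disparate_impact_results": "reports/disparate_impact.csv",
    "sensitivity_results": "reports/sensitivity_scenarios.csv",
    "escalation_contact": "model-risk@example.com",
    "remediation_ideas": "retrain quarterly; fall back to prior scorecard if degraded",
}

# Persist next to the model artifact so validators or regulators can review it.
with open("model_documentation.json", "w") as f:
    json.dump(model_doc, f, indent=2)
```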

All of this information can be given to a data scientist, internal validators, or external regulators so they can understand precisely how the model was generated and what to do if it ever causes problems.

Interpretable models, explanations, model debugging, fairness techniques, and model documentation are being pursued by researchers and software vendors…today! In Part 3 of this blog series, learn how to use H2O Driverless AI to get a jump on your competition by automatically building low-risk, high-accuracy, and highly interpretable machine learning models.

This blog is the second in a three-part series. You can catch the first part here and the third part here.


H2O.ai Team

At H2O.ai, democratizing AI isn’t just an idea. It’s a movement. And that means that it requires action. We started out as a group of like-minded individuals in the open-source community, collectively driven by the idea that there should be freedom around the creation and use of AI.

Today we have evolved into a global company built by people from a variety of different backgrounds and skill sets, all driven to be part of something greater than ourselves. Our partnerships now extend beyond the open-source community to include business customers, academia, and non-profit organizations.