Background
Responsible AI is the natural next step beyond Explainable AI, incorporating best practices for people and processes as well as technology. Beyond regulatory compliance, companies deploying machine learning in production have a real business interest in understanding their models and in trusting that those models will perform as expected and will not discriminate. H2O Driverless AI, an industry-leading automatic machine learning platform, provides multiple visualizations and mathematical methods for explaining machine learning models and testing them for discrimination and instability.
To understand Responsible AI, it is important to delineate some key terms. Explainable AI (XAI) is the ability to explain a model after it has been developed. Machine learning interpretability refers to transparent model architectures and to making ML models more intuitive and understandable. Disparate impact analysis is an Ethical AI technique that enables users to test whether a model treats demographic groups differently, and sensitivity analysis is a model governance approach that allows users to test a model under interesting conditions, such as a simulated recession or natural disaster. Each piece of this functionality is a critical pillar of Responsible AI.
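To make the disparate impact idea concrete, the sketch below computes the adverse impact ratio, a common test based on the "four-fifths" (80%) rule: a group's favorable-outcome rate below 80% of the reference group's rate is typically flagged. The function, data, and group labels are hypothetical illustrations, not part of any specific product API.

```python
# Minimal sketch of a disparate impact check using the four-fifths
# (80%) rule. All names and data here are hypothetical examples.

def adverse_impact_ratio(outcomes, groups, reference_group):
    """Ratio of each group's favorable-outcome rate to the
    reference group's rate. Ratios below 0.8 are commonly
    flagged as potential disparate impact."""
    rates = {}
    for g in set(groups):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    ref_rate = rates[reference_group]
    return {g: rate / ref_rate for g, rate in rates.items()}

# Hypothetical model decisions (1 = approved) for two groups
outcomes = [1, 1, 1, 0, 1, 0, 1, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratios = adverse_impact_ratio(outcomes, groups, reference_group="A")
# Group B's approval rate (0.4) vs. group A's (0.8) gives a ratio
# of 0.5, below the 0.8 threshold, so this toy model is flagged.
```

In practice this check would be run per protected attribute and per outcome threshold, alongside other fairness measures.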
Responsible AI
The H2O.ai mission is deeply rooted in deploying AI responsibly. This whitepaper covers critical capabilities in our platform, such as Machine Learning Interpretability, Explainable AI, Disparate Impact Testing, and Sensitivity Analysis, that allow users to make responsible decisions with respect to their AI.

Why Does it Matter?
H2O Driverless AI provides market-leading Responsible AI methodologies to address questions of trust and understanding in machine learning. The interpretable models and XAI components of Driverless AI enable data practitioners to get clear, concise explanations of model results through multiple dynamic metrics and visualizations gathered into a dashboard. These interactive charts can be used to visualize and debug a model by comparing the displayed global and local decision processes, important variables, and important interactions against known standards, domain knowledge, and reasonable expectations. Disparate impact and sensitivity analysis enable users to test models for discrimination and instability problems, respectively.
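The instability test above rests on a simple idea: perturb one input while holding the others fixed and watch how the prediction moves. The sketch below illustrates this with a hypothetical stand-in scoring function; the model, feature names, and values are assumptions for illustration only, not a real product API.

```python
# Minimal sketch of sensitivity analysis: perturb one input of a
# fitted model and measure how the prediction shifts. The scoring
# function is a hypothetical toy model, not a real library call.

def score(income, debt):
    """Toy credit-score model: higher income helps, higher debt hurts."""
    return max(0.0, min(1.0, 0.5 + 0.004 * income - 0.02 * debt))

def sensitivity(base_inputs, feature, deltas):
    """Re-score the model with one feature shifted by each delta,
    keeping all other inputs fixed; return the prediction change."""
    base = score(**base_inputs)
    results = {}
    for d in deltas:
        perturbed = dict(base_inputs)
        perturbed[feature] += d
        results[d] = score(**perturbed) - base
    return results

# Simulate a downturn: income drops, everything else held constant
shifts = sensitivity({"income": 60, "debt": 10}, "income", [-20, -10, 10])
```

Large or erratic prediction swings under small, plausible perturbations are a signal of instability worth debugging before deployment.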
Bringing AI To Enterprise
Business Needs:
Responsible AI facilitates the business adoption of machine learning by:
- Enhancing understanding of complex model mechanisms
- Mitigating risk of deploying black-box models at scale
- Providing deeper insights into patterns and behavior in your data
- Increasing trust that models are stable
- Facilitating fairness by automatically testing for discrimination
- Assisting with regulatory compliance
Benefits
- Automatic generation of numerous charts and measurements for model debugging
- Near-complete transparency and accountability for model mechanisms, predictions, and decisions
- Smart data visualizations that portray high-dimensional feature interactions in just two dimensions
- Plain-English reason codes for easy understanding and potential regulatory compliance
- Global and local views of feature importance
- Increased insight into how sensitive your models are to data drift and isolated data changes
- Decision trees that visually display the highest- and lowest-probability prediction paths
- Clear alerts when discrimination is detected across multiple measures
- Simulation and explanation through what-if analysis
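The reason codes mentioned above can be illustrated with the simplest case, a linear model: each feature's local contribution is its coefficient times the input's deviation from the training mean, and the largest negative contributions become the plain-English reasons. The coefficients, means, and applicant values below are hypothetical illustrations, not output from any specific product.

```python
# Minimal sketch of plain-English reason codes for a linear model.
# Coefficients and training means are hypothetical example values.

coefs = {"income": 0.004, "debt": -0.02, "years_employed": 0.03}
means = {"income": 50, "debt": 15, "years_employed": 4}

def reason_codes(applicant, top_n=2):
    """Rank features by how much they pushed this one prediction
    below (or above) the average applicant's score."""
    contribs = {f: coefs[f] * (applicant[f] - means[f]) for f in coefs}
    ranked = sorted(contribs.items(), key=lambda kv: kv[1])
    return [f"{f} lowered the score by {abs(c):.3f}" if c < 0
            else f"{f} raised the score by {c:.3f}"
            for f, c in ranked[:top_n]]

codes = reason_codes({"income": 40, "debt": 25, "years_employed": 1})
# High debt is the top local reason for this applicant's low score.
```

For nonlinear models, per-prediction contribution methods such as Shapley values play the same role, while averaging contributions over many rows recovers the global view of feature importance.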