April 2nd, 2020

Brief Perspective on Key Terms and Ideas in Responsible AI

Category: Data Science, Explainable AI, Machine Learning, Responsible AI

INTRODUCTION

As fields like explainable AI and ethical AI have continued to develop in academia and industry, a wave of new methodologies has emerged for improving our ability to trust and understand machine learning and deep learning models, and with it, several buzzwords. In this short post, we define these emerging terms as H2O.ai sees them, in hopes of fostering discussion between machine learning practitioners and researchers and all the diverse types of professionals (e.g., social scientists, lawyers, risk specialists) it takes to make machine learning projects successful. We'll close by discussing responsible machine learning as an umbrella term and by asking for your feedback. You can also watch this webinar for a deeper dive.

VOCABULARY QUIZ

Explainable AI (XAI):  The ability to explain a model after it has been developed

One of the early criticisms of machine learning was the difficulty of performing a robust post-hoc analysis of a model and how it reached its conclusions. Explainability refers to our ability to quantify, after the fact, how the model weighs its inputs in reaching a decision.

Example: SHAP
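To make the idea concrete, here is a minimal sketch of the additive attribution that SHAP produces, using the one case with a closed form: a linear model with independent features, where each feature's Shapley value is its weight times its deviation from a reference mean. The function name and toy numbers are our own illustration, not the shap library's API.

```python
# Sketch: for a linear model f(x) = sum(w_i * x_i) + b with independent
# features, the exact SHAP value of feature i is w_i * (x_i - E[x_i]).

def linear_shap(weights, x, background_means):
    """Per-feature attributions relative to a reference (background) point."""
    return [w * (xi - mu) for w, xi, mu in zip(weights, x, background_means)]

weights = [2.0, -1.0, 0.5]
bias = 0.3
background = [1.0, 0.0, 4.0]   # feature means over a reference dataset
x = [3.0, 2.0, 4.0]            # the row we want to explain

phi = linear_shap(weights, x, background)

f_x = sum(w * xi for w, xi in zip(weights, x)) + bias
f_ref = sum(w * mu for w, mu in zip(weights, background)) + bias

# "Local accuracy": the attributions sum to f(x) minus the reference output.
assert abs(sum(phi) - (f_x - f_ref)) < 1e-9
```

For tree ensembles and neural networks there is no closed form, which is exactly the gap the shap library's estimators fill.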

Interpretable Machine Learning:  Transparent model architectures and techniques for making ML models more intuitive and understandable

By nature, some models are more intuitive to understand than others. A simple decision tree is significantly more interpretable for a person than a complex ensemble model or deep neural network. When we discuss interpretability, we are referring to how easy it is to understand and describe the inner workings of the model architecture.

Example: Explainable Boosting Machines
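Explainable Boosting Machines are additive models, and additivity is what makes them readable: the prediction is just a sum of per-feature contributions you can inspect directly. The sketch below uses hypothetical hand-written lookup tables as stand-ins for learned shape functions; it is not the interpretml API.

```python
# A toy additive model: each feature contributes one term, and the score
# is their sum, so every prediction carries its own complete explanation.

shape_age = {"18-30": -0.4, "31-50": 0.1, "51+": 0.3}      # hypothetical
shape_income = {"low": -0.2, "mid": 0.0, "high": 0.5}       # hypothetical
intercept = 1.0

def predict(age_bucket, income_bucket):
    terms = {
        "intercept": intercept,
        "age": shape_age[age_bucket],
        "income": shape_income[income_bucket],
    }
    return sum(terms.values()), terms   # score plus its full breakdown

score, explanation = predict("51+", "mid")
print(explanation)   # every term of the score is visible by construction
```

An EBM learns those per-feature shape functions from data with boosting, but the inspection step is the same: read the terms.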

Ethical AI:  Sociological fairness in machine learning predictions (i.e., whether one category of person is being treated unequally)

In US financial services, companies have long been required to prove that their algorithm-driven decisions do not treat one demographic more unfairly than another, and to explain how they know. When we talk about Ethical or Fair AI, this is largely what we are describing. Whether the attribute is ethnicity, gender, age, income, geographic location, or something else, we aim to increase organizations' understanding and confirmation that they are not perpetuating discrimination with their algorithms.

Example: AIF360
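One of the simplest fairness diagnostics of the kind AIF360 provides is the adverse impact ratio: the favorable-outcome rate for a protected group divided by that of a reference group, with the common "four-fifths rule" flagging ratios below 0.8. The function names and toy data below are our own sketch, not the AIF360 API.

```python
# Sketch of an adverse impact (disparate impact) ratio check.

def selection_rate(outcomes):
    """Fraction of favorable outcomes (1 = approved, 0 = denied)."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(protected_outcomes, reference_outcomes):
    return selection_rate(protected_outcomes) / selection_rate(reference_outcomes)

# Toy approval decisions for two demographic groups.
group_a = [1, 0, 1, 0, 0, 1, 0, 0]   # protected group: 3/8 approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1]   # reference group: 6/8 approved

air = adverse_impact_ratio(group_a, group_b)
print(round(air, 3))   # 0.5 -> well below the 0.8 rule of thumb
```

A ratio this far below 0.8 does not prove discrimination on its own, but it is exactly the kind of signal that should trigger a deeper review.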

Secure AI:  Debugging and deploying ML models with countermeasures against insider and cyber threats similar to those used for traditional software

Machine learning models and algorithms face cybersecurity threats just as traditional software does. When we discuss AI security, we are looking to understand how at risk your model is to data poisoning, model hacking, or other emerging threats to machine learning ecosystems.

Example: cleverhans
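To show what an attack on a model looks like, here is a pure-Python sketch of the fast gradient sign method (FGSM), one of the classic evasion attacks implemented in cleverhans, applied to a hand-rolled logistic model. The weights and data point are invented for illustration; this is not the cleverhans API.

```python
import math

# FGSM: nudge each input feature by eps in the direction that
# increases the model's loss on the true label.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability of the positive class under a logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]   # gradient of cross-entropy w.r.t. x
    return [xi + eps * math.copysign(1.0, g) for xi, g in zip(x, grad)]

w, b = [2.0, -3.0], 0.0    # hypothetical model
x, y = [1.0, 1.0], 1.0     # hypothetical input with true label 1

x_adv = fgsm(w, b, x, y, eps=0.5)

# A small, targeted perturbation drives confidence in the true label down.
assert predict(w, b, x_adv) < predict(w, b, x)
```

Real attacks work the same way against deep networks, which is why libraries like cleverhans exist: to let you test your own models against these perturbations before an adversary does.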

Human-Centered AI:  User interactions with AI and ML systems

AI is often designed and pitched as an opportunity to replicate and replace human tasks. Removing people from the process completely, however, is not a responsible approach to deploying AI at scale. We define Human-Centered AI as the degree of human interaction and oversight built into your AI program. This is essentially the UI and UX of AI.

Example: What-if Tool
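The core interaction the What-If Tool supports can be sketched in a few lines: hold an input fixed, vary one feature, and watch the model's output respond. The stand-in scoring function and feature values below are hypothetical; the tool wraps this probing workflow in an interactive UI.

```python
# A "what-if" probe against a toy credit-scoring function.

def score(applicant):
    """Stand-in model: a made-up linear score from income and debt."""
    return 0.5 + 0.004 * applicant["income"] - 0.02 * applicant["debt"]

applicant = {"income": 50.0, "debt": 30.0}   # units are arbitrary

# Counterfactual sweep: what would the score be at other income
# levels, holding everything else equal?
results = {}
for income in (40.0, 50.0, 60.0):
    probe = dict(applicant, income=income)
    results[income] = round(score(probe), 3)
    print(income, results[income])
```

Putting this loop behind a UI is what turns a black-box score into something a loan officer or auditor can actually interrogate.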

RESPONSIBLE AI

Responsible AI is perhaps an even newer phrase that we, along with others, are starting to use as an umbrella term for all the different sub-disciplines mentioned above. We also see compliance, whether that’s with GDPR, CCPA, FCRA, ECOA or other regulations, as an additional and crucial aspect of responsible AI. 

Figure: A Venn diagram for Responsible AI.

To summarize, this list is not meant to be perfect, complete, or a single source of truth. We offer it to help define critical industry terminology as we view it at H2O.ai with respect to our research and products. If you have ideas, critiques, or other feedback, we welcome conversations on the subject. The field is evolving quickly, and we aim to evolve with it.

About the Author

Benjamin Cox

Ben Cox is a Director of Product Marketing at H2O.ai, where he helps lead Responsible AI market research and thought leadership. Prior to H2O.ai, Ben held data science roles on high-profile teams at Ernst & Young, Nike, and NTT Data. Ben holds an MBA from the University of Chicago Booth School of Business with multiple analytics concentrations and a BS in Economics from the College of Charleston.
