April 2nd, 2020

Brief Perspective on Key Terms and Ideas in Responsible AI

Category: Data Science, Explainable AI, Machine Learning, Responsible AI

INTRODUCTION

As fields like explainable AI and ethical AI have continued to develop in academia and industry, we have seen a wave of new methodologies that improve our ability to trust and understand our machine learning and deep learning models. Alongside them, several buzzwords have emerged. In this short post, we define these newish terms as H2O.ai sees them, in hopes of fostering discussion between machine learning practitioners and researchers and all the diverse professionals (e.g., social scientists, lawyers, risk specialists) it takes to make machine learning projects successful. We'll close by discussing responsible machine learning as an umbrella term and by asking for your feedback. You can also watch this webinar for a deeper dive.

VOCABULARY QUIZ

Explainable AI (XAI):  The ability to explain a model after it has been developed

One of the early criticisms of machine learning was the inability to perform a robust post-hoc analysis of a model and how it reached its conclusions. Explainability refers to our ability to quantify, after training, how much each input contributed to the decisions the model ultimately makes.

Example: SHAP
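To make this concrete, here is a minimal sketch of post-hoc explanation with the SHAP package. It assumes shap, xgboost, and scikit-learn are installed; the breast-cancer dataset is just a stand-in for your own data.

```python
# Post-hoc explanation of a trained model with SHAP (a sketch).
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

# Train any model; here, a gradient-boosted classifier on sample data.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier().fit(X, y)

# TreeExplainer attributes each prediction to per-feature contributions
# (SHAP values) after the fact, without changing the model itself.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Summarize which features drove the model's decisions overall.
shap.summary_plot(shap_values, X)
```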

Interpretable Machine Learning:  Transparent model architectures that are intuitive and understandable by design

By nature, some models are more intuitive to understand than others. A simple decision tree is significantly more interpretable to a person than a complex ensemble model or deep neural network. When we discuss interpretability, we are referring to how easy the inner workings of a model architecture are to understand and describe.

Example: Explainable Boosting Machines
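As a sketch of what an interpretable-by-design model looks like in code, here is an Explainable Boosting Machine from the interpret package (assumed installed, along with scikit-learn; the dataset is again a stand-in):

```python
# A glass-box model: each feature's learned effect can be inspected directly.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# EBMs are additive models, so the contribution of every feature
# is a curve you can plot, rather than an opaque tangle of weights.
ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Render the per-feature shape functions the model learned.
show(ebm.explain_global())
```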

Ethical AI:  Sociological fairness in machine learning predictions (i.e., whether one group of people is being treated or weighted unequally)

In US financial services, companies have long been required to prove that their algorithm-driven decisions do not treat one demographic group more unfairly than another, and to explain how they know. For the most part, this is what we are describing when we consider ethical or fair AI. Whether the attribute is ethnicity, gender, age, income, geographic location, or something else, we aim to increase organizations' understanding and confirmation that they are not perpetuating discrimination with their algorithms.

Example: AIF360
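For a flavor of how such a check looks in practice, here is a minimal sketch using IBM's AIF360. The tiny DataFrame and its age_group/approved columns are hypothetical stand-ins for real scoring data with a protected attribute:

```python
# Measuring group fairness of binary decisions with AIF360 (a sketch).
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical decisions: 'age_group' is the protected attribute,
# 'approved' is the model's binary decision.
df = pd.DataFrame({
    "age_group": [0, 0, 0, 1, 1, 1],
    "approved":  [0, 1, 0, 1, 1, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["age_group"],
)

# Compare approval rates between the two groups.
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"age_group": 1}],
    unprivileged_groups=[{"age_group": 0}],
)

# Disparate impact far below 1.0 (or a parity difference far from 0)
# suggests one group is approved much less often than the other.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```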

Secure AI:  Debugging and deploying ML models with the same kinds of countermeasures against insider and cyber threats that protect traditional software

Machine learning models and algorithms face cybersecurity threats just as software and traditional technology do. When we discuss AI security, we are looking to understand how vulnerable your model is to data poisoning, model hacking, and other emerging threats to machine learning ecosystems.

Example: cleverhans
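As an illustration, here is a minimal sketch of probing a model with adversarial examples using CleverHans (v4+ with TensorFlow assumed; the untrained toy network and random inputs are stand-ins for your real model and data):

```python
# Stress-testing a classifier with the Fast Gradient Method (a sketch).
import numpy as np
import tensorflow as tf
from cleverhans.tf2.attacks.fast_gradient_method import fast_gradient_method

# Stand-ins: a toy classifier and a batch of inputs.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2),
])
x = tf.random.normal((8, 4))

# Craft small perturbations designed to change the model's predictions.
x_adv = fast_gradient_method(model, x, eps=0.1, norm=np.inf)

# A high flip rate under tiny perturbations signals a fragile model.
clean_preds = tf.argmax(model(x), axis=1)
adv_preds = tf.argmax(model(x_adv), axis=1)
flips = int(tf.reduce_sum(tf.cast(clean_preds != adv_preds, tf.int32)))
print(f"Predictions flipped on {flips} of {x.shape[0]} inputs")
```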

Human-Centered AI:  User interactions with AI and ML systems

AI is often designed and described as an opportunity to replicate and replace human tasks. Removing people from the process completely, however, is not a responsible approach to deploying AI at scale. We define human-centered AI as the level of human interaction and involvement built into your AI program. This is essentially the UI and UX of AI.

Example: What-if Tool
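As a sketch of what this interaction layer can look like, here is how the What-If Tool is typically launched inside a Jupyter notebook (witwidget and TensorFlow assumed installed; the records and the stubbed predict_fn are hypothetical placeholders for your data and model):

```python
# Launching the What-If Tool in a notebook (a sketch with stand-in data).
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

# Build a few tf.Example records as stand-in data.
def make_example(age, income, approved):
    ex = tf.train.Example()
    ex.features.feature["age"].float_list.value.append(age)
    ex.features.feature["income"].float_list.value.append(income)
    ex.features.feature["approved"].int64_list.value.append(approved)
    return ex

examples = [make_example(25.0, 40000.0, 0), make_example(52.0, 90000.0, 1)]

# Hypothetical stub in place of a real model: returns
# [P(class 0), P(class 1)] for each example.
def predict_fn(examples):
    return [[0.3, 0.7] for _ in examples]

# Opens an interactive UI for probing predictions, editing datapoints,
# and exploring counterfactuals, keeping a person in the loop.
config = WitConfigBuilder(examples).set_custom_predict_fn(predict_fn)
WitWidget(config, height=600)
```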

RESPONSIBLE AI

Responsible AI is perhaps an even newer phrase that we, along with others, are starting to use as an umbrella term for all the different sub-disciplines mentioned above. We also see compliance, whether that’s with GDPR, CCPA, FCRA, ECOA or other regulations, as an additional and crucial aspect of responsible AI. 

Figure: A Venn diagram for Responsible AI.

To summarize, we have not developed this list to be perfect, complete, or a single source of truth. We offer it to help define critical industry terminology as we view it at H2O.ai with respect to our research and products. If you have ideas, critiques, or otherwise, we welcome conversations on the subject. The field is evolving quickly, and we aim to evolve with it.

About the Author

Benjamin Cox

Ben Cox is a Director of Product Marketing at H2O.ai, where he helps lead Responsible AI market research and thought leadership. Prior to H2O.ai, Ben held data science roles on high-profile teams at Ernst & Young, Nike, and NTT Data. Ben holds an MBA from the University of Chicago Booth School of Business with multiple analytics concentrations and a BS in Economics from the College of Charleston.
