October 7th, 2020

3 Ways to Ensure Responsible AI Tools are Effective

Category: Driverless AI, Explainable AI, Machine Learning, Machine Learning Interpretability, Responsible AI

Since we began our journey making tools for explainable AI (XAI) in late 2016, we’ve learned many lessons, often the hard way. Through headlines, we’ve seen others grapple with the difficulties of deploying AI systems too.

AI can affect people in various and harmful ways. One of the most significant lessons we’ve learned is that there’s more to being responsible with AI than just the explainability of ML models, or even technology in general.

There’s a lot to consider when mitigating the wide variety of risks presented by new AI systems. We claim that responsible AI combines XAI, interpretable machine learning (ML) models, data and machine learning security, discrimination testing and remediation, human-centered software interfaces, and lawfulness and compliance. It’s been tough to hold our evolving responsible AI software to this high bar, but we continue to make progress toward these lofty goals.

To break down what we’ve learned a bit more, here are the basics of how we think responsible AI tools are most effective:

1. They empower non-technical consumers and the public to engage and challenge AI systems.

Seemingly well-built systems can cause big problems when their results are presented to users as a final, opaque, and unappealable outcome. As is already mandated in the US consumer finance vertical, use responsible AI tools to tell your users how your AI system makes decisions, and let them appeal those decisions when the system is inevitably wrong. Better still, allow users to fully engage with AI systems, through appropriate graphical interfaces, to satisfy their basic human curiosity about how these impactful technologies work.

2. They enable engineers and scientists to test and debug AI systems.

Users will probably never understand all the details of a contemporary AI system, but the engineers and data scientists who design and build the system should. This means building interpretable and explainable AI systems that can be exhaustively tested and monitored for the following (one such test is sketched after the list):

  • algorithmic discrimination
  • privacy harms
  • security vulnerabilities
  • prediction quality and stability
  • basic logical flaws
  • software and hardware failures
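
To make the first item on that list concrete, here is a minimal sketch of one common discrimination test, the adverse impact ratio, written in Python with pandas. The column names, the toy data, and the four-fifths (0.8) threshold are illustrative assumptions, not a prescription, and a real testing program goes well beyond a single metric.

```python
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame,
                         group_col: str,
                         outcome_col: str,
                         protected: str,
                         reference: str) -> float:
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates[protected] / rates[reference]

# Hypothetical scored decisions: 1 = favorable outcome (e.g., loan approved).
scores = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "decision": [1,   0,   1,   1,   1,   1,   1,   0],
})

air = adverse_impact_ratio(scores, "group", "decision",
                           protected="A", reference="B")
print(f"Adverse impact ratio: {air:.2f}")

# The 0.8 cutoff echoes the four-fifths rule; appropriate thresholds
# and remediation steps depend on the application and jurisdiction.
if air < 0.8:
    print("Potential disparate impact -- investigate further.")
```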

Think about it like this: I don’t understand the structural engineering of the I-395 bridge in Washington DC, but along with millions of people, I put my trust in the engineers who designed and built the bridge. AI will likely be as important to us as bridges one day. Let’s start respecting the risks of AI sooner rather than later.

3. They allow decision-makers to review and evaluate AI systems.

One of the best controls for AI systems is human oversight and review. This is why major financial firms have chief model risk officers and multiple lines of human validation, beyond data science and engineering teams, for their AI systems. Much like enabling appeals for consumers, AI systems need interfaces that empower business leaders, attorneys, and compliance personnel to evaluate the business value, reputational risks, litigation exposure, and lawfulness of an AI system. Well-meaning executives and oversight personnel can’t be accountable if they can’t get the necessary information about AI systems. So, AI systems must provide human-readable documentation or other appropriate executive review interfaces.

If you’ve found this post helpful, I’ll be speaking about these topics later next week. I’m flattered to be included in a panel about “Tools for a More Responsible AI” with Orange Silicon Valley on October 14th, at 8:30am PT. You can register here.

About the Author

Patrick Hall

Patrick Hall is a senior director for data science products at H2O.ai, where he focuses mainly on model interpretability. Patrick is also currently an adjunct professor in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning. Prior to joining H2O.ai, Patrick held global customer-facing and R&D roles at SAS Institute. He holds multiple patents in automated market segmentation using clustering and deep neural networks. Patrick was the 11th person worldwide to become a Cloudera certified data scientist. He studied computational chemistry at the University of Illinois before graduating from the Institute for Advanced Analytics at North Carolina State University.
