October 30th, 2020

The Importance of Explainable AI

Category: Community, Machine Learning Interpretability, Responsible AI

This blog post was written by Nick Patience, Co-Founder & Research Director, AI Applications & Platforms at 451 Research, a part of S&P Global Market Intelligence

From its inception in the mid-twentieth century, AI technology has come a long way. What was once purely the topic of science fiction and academic discussion is now a widespread technology being adopted by enterprises across the world. AI is versatile, with applications ranging from drug discovery and patient data analysis to fraud detection, customer engagement, and workflow optimization. The technology’s scope is indisputable, and companies looking to stay ahead are increasingly adopting it into their business operations.

That said, AI systems are notorious for their ‘black-box’ nature, leaving many users without visibility into how or why decisions are made. This is where explainable AI comes into play. Explainable AI makes AI decisions both understandable and interpretable by humans. According to 451 Research’s Voice of the Enterprise: AI and Machine Learning Use Cases 2020, 92% of enterprises believe that explainable AI is important; however, fewer than half of them have built or purchased explainability tools for their AI systems. This leaves them exposed to significant risk: without a human in the loop during development, AI models can generate biased outcomes that lead to both ethical and regulatory compliance issues later.

So why haven’t more companies incorporated explainability tools into their AI strategy to mitigate this risk? One reason may simply be a lack of available tools, features, and stand-alone products. The industry has been slow to address this critical issue, in part due to the long-standing belief among many data scientists that explainability must be traded off against accuracy in AI models. This, however, is a misconception: visibility into the AI decisioning process allows users to screen their data and algorithms for bias and deviation, producing accurate, robust outcomes that can be readily explained to customers and regulators.
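One common model-agnostic way to get that visibility is permutation importance: shuffle one feature at a time and watch how much model accuracy drops. The sketch below is purely illustrative and not from the report — the dataset, the stand-in "trained" model, and all names are hypothetical — but it shows how such a check can flag which features actually drive a model's decisions.

```python
# Minimal sketch of permutation-based explainability.
# All data and the stand-in model are hypothetical, for illustration only.
import random

random.seed(0)

# Toy dataset: the label depends only on feature 0; feature 1 is pure noise.
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

def model(row):
    # Stand-in for a trained model: thresholds feature 0.
    return 1 if row[0] > 0.5 else 0

def accuracy(data, labels):
    return sum(model(r) == t for r, t in zip(data, labels)) / len(labels)

baseline = accuracy(X, y)

def permutation_importance(feature_idx):
    # Shuffle one feature column across rows and measure the accuracy drop.
    shuffled = [row[:] for row in X]
    column = [row[feature_idx] for row in shuffled]
    random.shuffle(column)
    for row, value in zip(shuffled, column):
        row[feature_idx] = value
    return baseline - accuracy(shuffled, y)

for i in range(2):
    print(f"feature {i}: importance = {permutation_importance(i):.3f}")
```

Here the decisive feature shows a large accuracy drop when shuffled, while the noise feature shows none — exactly the kind of evidence that lets a team explain, and defend, a model's behavior to customers and regulators. Production tools apply the same idea with far more sophistication.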

Many AI implementations – particularly in the healthcare and financial sectors – deal with personal data, and customers need to know that this data is being handled with the utmost care and sensitivity. In Europe, the General Data Protection Regulation (GDPR) requires companies to provide customers with an explanation of decisions made by AI, and similar regulations exist in countries across the globe. With explainable AI systems, companies can show customers exactly where data is coming from and how it’s being used, meeting these regulatory requirements and building trust and confidence over time.

As companies map out their AI strategies, explainability should be a central consideration to safeguard against unnecessary risk while maximizing business value.

For more information on explainable AI, check out our recent report ‘Driving Value with Explainable AI’.

