
BLOG

The Importance of Explainable AI


By H2O.ai Team | October 30, 2020


This blog post was written by Nick Patience, Co-Founder & Research Director, AI Applications & Platforms at 451 Research, a part of S&P Global Market Intelligence.

From its inception in the mid-twentieth century, AI technology has come a long way. What was once purely the topic of science fiction and academic discussion is now a widespread technology being adopted by enterprises across the world. AI is versatile, with applications ranging from drug discovery and patient data analysis to fraud detection, customer engagement, and workflow optimization. The technology’s scope is indisputable, and companies looking to stay ahead are increasingly adopting it into their business operations.

That being said, AI systems are notorious for their ‘black-box’ nature, leaving many users without visibility into how or why decisions have been made. This is where explainable AI comes into play. Explainable AI makes AI decisions both understandable and interpretable by humans. According to 451 Research’s Voice of the Enterprise: AI and Machine Learning Use Cases 2020, 92% of enterprises believe that explainable AI is important; however, less than half of them have built or purchased explainability tools for their AI systems. This leaves them open to significant risk; without a human in the loop during development, AI models can generate biased outcomes that may lead to both ethical and regulatory compliance issues later.

So why haven’t more companies incorporated explainability tools into their AI strategy to mitigate this risk? One reason may simply be a lack of available tools, features, and stand-alone products. The industry has been slow to address this critical issue, in part due to the long-standing belief held by many data scientists that explainability must be traded for accuracy in AI models. This, however, is a misconception: visibility into the AI decisioning process allows users to screen their data and algorithms for bias and deviation, thus producing accurate and robust outcomes that can easily be explained to customers and regulators.
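To make this concrete, here is a minimal, hypothetical sketch of one common explainability technique, permutation importance, using scikit-learn. The dataset, model, and feature names are all illustrative stand-ins (not tied to any H2O.ai product): the idea is simply that shuffling one input at a time and measuring the drop in accuracy reveals which features actually drive a ‘black-box’ model’s decisions.

```python
# Illustrative sketch: peeking inside a "black-box" model with
# permutation importance. All names and data here are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data standing in for, say, a credit-decision dataset.
X, y = make_classification(n_samples=500, n_features=6,
                           n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and record how much accuracy degrades;
# larger drops mean the model leans more heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance = {result.importances_mean[i]:.3f}")
```

Rankings like these give a first, model-agnostic answer to “why did the model decide this?”, and can surface features that encode bias before a model reaches production.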

Many AI implementations – particularly in the healthcare and financial sectors – deal with personal data, and customers need to know that this data is being handled with the utmost care and sensitivity. In Europe, the General Data Protection Regulation (GDPR) requires companies to provide customers with an explanation of decisions made by AI, and similar regulations exist in countries across the globe. With explainable AI systems, companies can show customers exactly where data is coming from and how it’s being used, meeting these regulatory requirements and building trust and confidence over time.

As companies map out their AI strategies, explainability should be a central consideration to safeguard against unnecessary risk while maximizing business value.

For more information on explainable AI, check out our recent report, ‘Driving Value with Explainable AI’.


H2O.ai Team

At H2O.ai, democratizing AI isn’t just an idea. It’s a movement. And that means that it requires action. We started out as a group of like-minded individuals in the open source community, collectively driven by the idea that there should be freedom around the creation and use of AI.

Today we have evolved into a global company built by people from a variety of different backgrounds and skill sets, all driven to be part of something greater than ourselves. Our partnerships now extend beyond the open-source community to include business customers, academia, and non-profit organizations.