Machine Learning Interpretability / Explainability


What is Machine Learning Interpretability / Explainability?

The terms interpretability and explainability are often used interchangeably in machine learning, but they refer to two related yet distinct ideas.

Explainable AI (XAI): The ability to explain a model after it has been developed. One of the initial criticisms of machine learning was the inability to perform a robust post-hoc analysis of a model and how it reached its conclusions. Explainability refers to our ability to quantify, after the fact, how much each input contributed to the decision the model ultimately made.
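
For instance, post-hoc explanation is commonly done with Shapley values. Below is a minimal open-source sketch using the shap package with a scikit-learn model; the dataset and model choice are illustrative assumptions, not Driverless AI's own implementation.

# Post-hoc explanation sketch: explain an already-trained model with SHAP.
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer decomposes each prediction into per-feature Shapley
# contributions relative to the model's average prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])
print(shap_values.shape)  # (100 rows, 8 features): one contribution per cell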

Interpretable Machine Learning: The use of transparent model architectures that are intuitive and understandable by design; some models are inherently easier to understand than others. A simple decision tree is significantly more interpretable to a person than a complex ensemble model or deep neural network, as the sketch below illustrates. When we discuss interpretability, we are referring to how easy it is to understand and describe the inner workings of a model's architecture.
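
To make the contrast concrete, here is a small sketch (the dataset and tree depth are illustrative choices) showing how a shallow scikit-learn decision tree can be printed as human-readable rules, a readout that a large ensemble or neural network cannot offer.

# Interpretability sketch: a shallow tree's full logic prints as rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a deliberately shallow tree so the whole model fits on screen.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# The entire decision process prints as nested if/else rules that a
# person can read top to bottom.
print(export_text(tree, feature_names=iris.feature_names))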

Driverless AI provides a view that employs a range of techniques and methodologies for interpreting and explaining model predictions. Several charts are generated automatically, including K-LIME, Shapley, Variable Importance, Decision Tree Surrogate, Partial Dependence, Sensitivity Analysis, and more.
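
As an illustration of one of these techniques, the following sketch fits a decision tree surrogate using scikit-learn: a shallow, interpretable tree trained to mimic a complex model's predictions. The models and dataset here are placeholder assumptions, not Driverless AI's internal implementation.

# Decision tree surrogate sketch: approximate a black box globally.
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = fetch_california_housing(return_X_y=True, as_frame=True)

# The opaque model whose behaviour we want to approximate.
black_box = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Surrogate: a shallow, readable tree trained on the black box's
# predictions rather than the true labels.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# R^2 against the black-box predictions measures how faithfully the
# simple tree mimics the complex model (closer to 1 = more faithful).
print(surrogate.score(X, black_box.predict(X)))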
