KuppingerCole Report
Leadership Brief
Explainable AI
One of the largest barriers to widespread machine learning (ML) adoption is its lack of explainability. Most ML models are not inherently explainable at the local level, meaning they cannot provide reasoning to support individual decisions. Both the academic and private sectors are actively developing solutions to the explainability problem, and this Leadership Brief introduces the main methods for making AI explainable.
1 Executive Summary
A persistent weakness of machine learning (ML) models is their lack of explainability for individual decisions. These models are often described as ...
2 Analysis
Academic Contributions to Explainable AI
Feature attribution solutions, otherwise known as saliency maps, are one popular method of retrospectively ...
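To illustrate the idea behind feature attribution, the following is a minimal sketch of a perturbation-based (occlusion) attribution loop for a black-box model. It is not a method from this report: the `predict` function, the toy weights, and the baseline values are all illustrative assumptions. Each feature's score is the change in the model's output when that feature is replaced by a baseline value, which is the intuition underlying many retrospective attribution techniques.

```python
# Hypothetical sketch of perturbation-based feature attribution.
# The "model" below is a toy stand-in; the explainer treats it as a black box.

def predict(features):
    # Toy scoring model (illustrative weights, not from the report).
    weights = {"income": 0.6, "age": 0.1, "debt": -0.5}
    return sum(weights[name] * value for name, value in features.items())

def attribute(features, baseline):
    """Score each feature by how much the prediction drops when that
    feature is replaced with its baseline (occluded) value."""
    full_score = predict(features)
    attributions = {}
    for name in features:
        perturbed = dict(features, **{name: baseline[name]})
        attributions[name] = full_score - predict(perturbed)
    return attributions

# Example: explain one individual decision (a single applicant).
applicant = {"income": 1.0, "age": 0.5, "debt": 0.8}
baseline = {"income": 0.0, "age": 0.0, "debt": 0.0}
print(attribute(applicant, baseline))
```

For this toy input, the attributions recover each feature's signed contribution (income dominates positively, debt negatively), which is exactly the per-decision reasoning that locally inexplicable models fail to provide on their own.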
3 Recommendations
Choose Your Explainability Solution Based on Your ML Model
Feature attribution is the most common explainability solution. However, it is not applic ...