Explainability and Interpretability
Blog: Decision Management Community
Explainability of decisions produced by machines is one of the hottest topics these days (see XAI). Explainable AI usually makes decisions with a complicated black-box model and then uses a second (post-hoc) model, created to explain what the first model is doing. Interpretable AI concentrates on models that can themselves be directly inspected and interpreted by human experts. The recent paper “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead” shows the difference between explainability and interpretability, and argues that the former may be problematic. Link
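The distinction above can be illustrated with a minimal sketch (my own, not from the post or the paper), using scikit-learn: a random forest plays the black box, a shallow decision tree fitted to the forest's predictions plays the post-hoc explainer, and the same shallow tree fitted directly to the labels plays the interpretable model. The dataset and model choices are illustrative assumptions.

```python
# Contrast a post-hoc surrogate explanation with a directly interpretable model.
# Assumptions for illustration: breast-cancer dataset, random forest as the
# black box, depth-3 decision trees as the simple models.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# "Explainable AI": decisions come from a complex black-box model...
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# ...explained after the fact by a second, simpler surrogate model
# trained to mimic the black box's predictions (a post-hoc explanation).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# "Interpretable AI": one model that can itself be inspected by an expert.
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# A post-hoc surrogate only approximates the black box; its fidelity
# (agreement with the black box's outputs) is generally below 100%,
# which is one reason the paper calls explanations problematic.
fidelity = float((surrogate.predict(X) == black_box.predict(X)).mean())
print(f"surrogate fidelity to black box: {fidelity:.3f}")
```

The interpretable tree, unlike the surrogate, is the decision procedure itself: reading its splits tells you exactly how every prediction is made, with no fidelity gap to worry about.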