Explaining Any Time Series Classifier

We present a method to explain the decisions of black box models for time series classification. The explanation consists of factual and counterfactual shapelet-based rules revealing the reasons for the classification, together with a set of exemplars and counter-exemplars highlighting similarities to and differences from the time series under analysis. The proposed method first generates exemplar and counter-exemplar time series in a latent feature space and learns a local latent decision tree classifier. It then selects and decodes those time series that satisfy the decision rules explaining the classification. Finally, it learns a shapelet tree on them, revealing which parts of the time series must, and must not, be present to obtain the outcome returned by the black box. An extensive experimental evaluation shows that the proposed method provides faithful, meaningful, and interpretable explanations.
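As a rough sketch of the latent-neighborhood step of this pipeline (not the authors' implementation): the snippet below uses PCA as a simple invertible stand-in for the latent autoencoder, draws a Gaussian neighborhood around the encoded instance, labels it by querying the black box on the decoded series, and fits the local latent decision tree. The shapelet-tree and rule-extraction steps are omitted for brevity; `black_box`, `X_train`, the latent dimensionality, and the perturbation scale are all assumptions for illustration.

```python
# Hedged sketch of the latent-neighborhood step; not the authors' code.
# Assumptions: each time series is a flat numpy vector, PCA replaces the
# autoencoder, and `black_box` is any fitted classifier with .predict.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

def explain_instance(x, X_train, black_box, n_neighbors=500, seed=0):
    rng = np.random.default_rng(seed)

    # Latent feature space (the paper uses an autoencoder; PCA is an
    # invertible stand-in so encoding and decoding both work here).
    encoder = PCA(n_components=8).fit(X_train)
    z = encoder.transform(x.reshape(1, -1))

    # Synthetic neighborhood around the instance in latent space,
    # labeled by querying the black box on the decoded series.
    Z = z + rng.normal(scale=0.5, size=(n_neighbors, z.shape[1]))
    X_neigh = encoder.inverse_transform(Z)
    y_neigh = black_box.predict(X_neigh)

    # Local latent decision tree that mimics the black box locally.
    tree = DecisionTreeClassifier(max_depth=4).fit(Z, y_neigh)
    label = tree.predict(z)[0]

    # Exemplars share the instance's predicted class; counter-exemplars
    # do not. In the full method these are decoded and a shapelet tree
    # is learned on them to yield factual and counterfactual rules.
    exemplars = X_neigh[y_neigh == label]
    counter_exemplars = X_neigh[y_neigh != label]
    return tree, exemplars, counter_exemplars
```

In the actual method the latent space is learned by an autoencoder and the decoded exemplars and counter-exemplars feed the shapelet tree; the sketch stops at the local surrogate and the exemplar selection.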

Additional Info
Creator: Giannotti, Fosca, [email protected]
Creator: Pedreschi, Dino, [email protected]
Creator: Spinnato, Francesco, [email protected]
Creator: Monreale, Anna, [email protected]
Creator: Guidotti, Riccardo, [email protected]
DOI: 10.1109/CogMI50398.2020.00029
Group: Social Impact of AI and explainable ML
Publisher: 2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)
Source: 2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI), 28-31 Oct 2020
Thematic Cluster: Social Data [SD]
system:type: ConferencePaper
Management Info
Author: Wright, Joanna
Maintainer: Guidotti, Riccardo
Version: 1
Last Updated: 16 September 2023, 10:13 (CEST)
Created: 1 April 2021, 05:49 (CEST)