Machine Learning Explainability Through Comprehensible Decision Trees

The role of decisions made by machine learning algorithms in our lives is ever increasing. In reaction to this phenomenon, the European General Data Protection Regulation establishes that citizens have the right to receive an explanation of automated decisions affecting them. For explainability to be scalable, it should be possible to derive explanations in an automated way. A common approach is to use a simpler, more intuitive decision algorithm to build a surrogate model of the black-box model (for example, a deep learning algorithm) used to make a decision. Yet there is a risk that the surrogate model is too large to be really comprehensible to humans. We focus on explaining black-box models by using decision trees of limited size as surrogate models. Specifically, we propose an approach based on microaggregation to achieve a trade-off between the comprehensibility and representativeness of the surrogate model on the one hand and the privacy of the subjects used to train the black-box model on the other.
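The approach described in the abstract lends itself to a short illustration. The following is a minimal sketch in Python, assuming scikit-learn; the names `surrogate_tree` and `black_box` are hypothetical, and k-means clustering stands in for a proper microaggregation algorithm (such as MDAV) that would enforce a strict minimum group size. It is not the authors' reference implementation.

    # Sketch: explain a black-box model with a small surrogate decision tree
    # trained on microaggregated data (illustrative, not the paper's code).
    from sklearn.cluster import KMeans
    from sklearn.tree import DecisionTreeClassifier

    def surrogate_tree(X, black_box, k=5, max_depth=3):
        """Fit a depth-limited decision tree on cluster representatives.

        X         : records used to train the black-box model
        black_box : fitted model exposing predict() (assumed interface)
        k         : target minimum group size (privacy parameter)
        max_depth : cap on tree depth, for comprehensibility
        """
        # Roughly n/k groups; true microaggregation (e.g. MDAV) would
        # guarantee at least k records per group, plain k-means does not.
        n_clusters = max(1, len(X) // k)
        centroids = KMeans(n_clusters=n_clusters, n_init=10,
                           random_state=0).fit(X).cluster_centers_
        # Label each representative with the black-box decision and train
        # a small, human-readable tree on the representatives only.
        y = black_box.predict(centroids)
        return DecisionTreeClassifier(max_depth=max_depth).fit(centroids, y)

Here k mediates the trade-off named in the abstract: larger groups yield fewer, more general representatives, hence a smaller, more comprehensible tree and stronger privacy for the training subjects, at the cost of representativeness of the surrogate.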

Data and Resources (log in to access)
  • BibTeX
  • HTML
Additional Info
Creator: Domingo-Ferrer, Josep
Creator: Blanco-Justicia, Alberto, [email protected], orcid.org/0000-0002-1108-8082
DOI: https://doi.org/10.1007/978-3-030-29726-8_2
Group: Social Impact of AI and explainable ML
Group: Ethics and Legality
Publisher: Springer
Source: In: International Cross-Domain Conference for Machine Learning and Knowledge Extraction. Springer, Cham, 2019, pp. 15-26.
Thematic Cluster: Privacy Enhancing Technology [PET]
system:type: BookChapter
Management Info
Author: Pozzi Giorgia
Maintainer: Pozzi Giorgia
Version: 1
Last Updated: 16 September 2023, 10:08 (CEST)
Created: 18 February 2021, 01:54 (CET)