Solving the Black Box Problem: A Normative Framework for Explainable Artifici...
Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial...
XAI Method for explaining time-series
LASTS is a framework that can explain the decisions of black box models for time series classification. The explanation consists of factual and counterfactual rules revealing...
Modeling Adversarial Behavior Against Mobility Data Privacy
Privacy risk assessment is a crucial issue in any privacy-aware analysis process. Traditional frameworks for privacy risk assessment systematically generate the assumed...
Toward Accountable Discrimination Aware Data Mining
"Big Data" and data-mined inferences are affecting more and more of our lives, and concerns about their possible discriminatory effects are growing. Methods for...
Geolet
Geolet is a Python library that offers an interpretable transformation and classification approach for trajectory data. Geolet first partitions trajectories into multiple...
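The first step the entry describes, partitioning a trajectory into sub-trajectories, can be sketched as follows. The function name, the (lat, lon, timestamp) point format, and the fixed-size splitting strategy are illustrative assumptions, not Geolet's actual API.

```python
# Hypothetical sketch of trajectory partitioning: split one trajectory
# into consecutive sub-trajectories of at most `segment_length` points.
# Names and parameters are illustrative, not the library's real interface.

def partition_trajectory(points, segment_length):
    """Split a trajectory (list of (lat, lon, timestamp) tuples)
    into consecutive sub-trajectories of at most segment_length points."""
    return [points[i:i + segment_length]
            for i in range(0, len(points), segment_length)]

trajectory = [(43.7 + 0.01 * i, 10.4 + 0.01 * i, i) for i in range(10)]
segments = partition_trajectory(trajectory, segment_length=4)
# 10 points with segment_length=4 -> sub-trajectories of 4, 4 and 2 points
```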
Fairer machine learning in the real world
Mitigating discrimination without collecting sensitive data. Decisions based on algorithmic, machine learning models can be unfair, reproducing biases in historical data used...
Machine Learning Explainability Through Comprehensible Decision Trees
The role of decisions made by machine learning algorithms in our lives is ever increasing. In reaction to this phenomenon, the European General Data Protection Regulation...
GLocalX - Explaining in a Local to Global setting
GLocalX is a model-agnostic Local to Global explanation algorithm. Given a set of local explanations expressed in the form of decision rules, and a black-box model to explain,...
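The local-to-global idea can be illustrated with a toy merge: two local rules with the same outcome are generalised into one rule covering both. The interval-based rule representation below is an assumption chosen for illustration, not GLocalX's actual data structures or merge criterion.

```python
# Minimal sketch of joining two local decision rules with the same label
# into one more general rule, in the spirit of a local-to-global approach.
# A rule is a dict: feature -> (low, high) interval, plus a predicted label.

def join_rules(rule_a, rule_b):
    """Join two rules with the same label by taking, per feature,
    the widest interval covered by either rule."""
    assert rule_a["label"] == rule_b["label"]
    premises = {}
    for feat in set(rule_a["premises"]) | set(rule_b["premises"]):
        lo_a, hi_a = rule_a["premises"].get(feat, (float("-inf"), float("inf")))
        lo_b, hi_b = rule_b["premises"].get(feat, (float("-inf"), float("inf")))
        premises[feat] = (min(lo_a, lo_b), max(hi_a, hi_b))
    return {"premises": premises, "label": rule_a["label"]}

r1 = {"premises": {"age": (18, 30), "income": (0, 40000)}, "label": "deny"}
r2 = {"premises": {"age": (25, 40), "income": (0, 35000)}, "label": "deny"}
merged = join_rules(r1, r2)
# merged covers age in (18, 40) and income in (0, 40000)
```

A real local-to-global method must also resolve conflicts between rules with different labels and trade off fidelity against generality; this sketch shows only the generalisation step.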
Algorithmic Decision Making Based on Machine Learning from Big Data
Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would...
Predicting and Explaining Privacy Risk Exposure in Mobility Data
Mobility data are a proxy of different social dynamics, and their analysis enables a wide range of user services. Unfortunately, mobility data are very sensitive because the...
MARLENA
MARLENA is a novel technique able to explain the reasons behind any black-box multi-label classifier decision. It generates an explanation in the form of a decision rule...
Temporal social network reconstruction using wireless proximity sensors: mode...
The emerging technologies of wearable wireless devices open entirely new ways to record various aspects of human social interactions in a broad range of settings. Such...
Heterogeneous Document Embeddings for Cross-Lingual Text Classification
Funnelling (Fun) is a method for cross-lingual text classification (CLC) based on a two-tier ensemble for heterogeneous transfer learning. In Fun, 1st-tier classifiers, each...
How the machine thinks: Understanding opacity in machine learning algorithms
This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud...
LORE
Recent years have witnessed the rise of accurate but obscure decision systems, which hide the logic of their internal decision processes from the users. The lack of...
Seeing without knowing: Limitations of transparency and its application to al...
Models for understanding and holding systems accountable have long rested upon ideals and logics of transparency. Being able to see a system is sometimes equated with being able...
Focus Metric
Implementation of the Focus metric (https://github.com/HPAI-BSC/Focus-Metric). This metric evaluates explainability methods and quantifies their coherency to the task...
GLocalX - From Local to Global Explanations of Black Box AI Models
Artificial Intelligence (AI) has come to prominence as one of the major components of our society, with applications in most aspects of our lives. In this field, complex and...
The PGM-index: a fully-dynamic compressed learned index with provable worst-ca...
We present the first learned index that supports predecessor, range queries and updates within provably efficient time and space bounds in the worst case. In the (static)...
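The core idea of a learned index can be sketched in a simplified static form: fit a model from keys to positions, record its maximum error eps, and answer predecessor queries by searching only a small window around the predicted position. This is a teaching simplification with a single linear segment, not the PGM-index's optimal piecewise-linear construction with provable bounds.

```python
import bisect

# Simplified static learned index: one linear model position ~ a*key + b,
# plus the model's maximum error eps over the stored keys. Predecessor
# queries search only the window [pos - eps, pos + eps + 1].

class LinearLearnedIndex:
    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        lo, hi = self.keys[0], self.keys[-1]
        # Crude linear fit through the first and last (key, position) pairs
        self.slope = (n - 1) / (hi - lo) if hi != lo else 0.0
        self.intercept = -self.slope * lo
        # Maximum prediction error over the stored keys
        self.eps = max(abs(self._predict(k) - i)
                       for i, k in enumerate(self.keys))

    def _predict(self, key):
        return round(self.slope * key + self.intercept)

    def predecessor(self, key):
        """Largest stored key <= key, or None if no such key exists."""
        n = len(self.keys)
        pos = self._predict(key)
        # The true insertion point lies within [pos - eps, pos + eps + 1]
        lo = max(0, min(pos - self.eps, n))
        hi = max(lo, min(n, pos + self.eps + 1))
        i = bisect.bisect_right(self.keys, key, lo, hi)
        return self.keys[i - 1] if i > 0 else None

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
index = LinearLearnedIndex(primes)
# index.predecessor(10) searches only a small window instead of all keys
```

The real PGM-index builds maximal linear segments with error at most eps and indexes the segments recursively, which is what yields the worst-case space and time guarantees mentioned above.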