Explanation of Deep Models with Limited Interaction for Trade Secret and Priv...
An ever-increasing number of decisions affecting our lives are made by algorithms. For this reason, algorithmic transparency is becoming a pressing need: automated decisions...
Fair, Transparent, and Accountable Algorithmic Decision-making Processes: The Premise, the Proposed Solutions, and the Open Challenges
The combination of increased availability of large amounts of fine-grained human behavioral data and advances in...
Explaining Image Classifiers Generating Exemplars and Counter-Exemplars from ...
We present an approach to explain the decisions of black-box image classifiers through synthetic exemplars and counter-exemplars learned in the latent feature space. Our...
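As a rough sketch of this idea (not the paper's method), the code below uses PCA as a stand-in for a learned latent space and an MLP as the black box: perturbing the latent code of an instance and decoding the perturbations yields candidates that either keep the black-box label (exemplars) or flip it (counter-exemplars). The dataset, models, and noise scale are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
black_box = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                          random_state=0).fit(X, y)
latent = PCA(n_components=10, random_state=0).fit(X)   # stand-in for an autoencoder

x = X[0]                                   # instance to explain
z = latent.transform(x.reshape(1, -1))     # its latent code
label = black_box.predict(x.reshape(1, -1))[0]

rng = np.random.default_rng(0)
Z = z + rng.normal(scale=2.0, size=(200, z.shape[1]))  # perturb the latent code
candidates = latent.inverse_transform(Z)               # decode to input space
preds = black_box.predict(candidates)

exemplars = candidates[preds == label]          # same decision as x
counter_exemplars = candidates[preds != label]  # decision flips
print(len(exemplars), "exemplars,", len(counter_exemplars), "counter-exemplars")
```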
Evaluating local explanation methods on ground truth
Evaluating local explanation methods is a difficult task due to the lack of a shared and universally accepted definition of explanation. In the literature, one of the most...
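A common setup in this line of work uses a transparent model whose true feature importances are known as ground truth. The sketch below assumes a logistic regression as the transparent model, a finite-difference local explainer, and Spearman rank correlation as the agreement score; none of these choices is taken from the paper.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
transparent = LogisticRegression(max_iter=1000).fit(X, y)
ground_truth = np.abs(transparent.coef_[0])        # known feature importances

def local_explanation(predict_proba, x, eps=1e-3):
    """Finite-difference sensitivity of the positive-class probability."""
    base = predict_proba(x.reshape(1, -1))[0, 1]
    scores = []
    for j in range(len(x)):
        x_pert = x.copy()
        x_pert[j] += eps
        scores.append((predict_proba(x_pert.reshape(1, -1))[0, 1] - base) / eps)
    return np.abs(scores)

expl = local_explanation(transparent.predict_proba, X[0])
rho, _ = spearmanr(expl, ground_truth)
print(f"rank agreement with ground truth: {rho:.2f}")
```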
Explanation in artificial intelligence: Insights from the social sciences
There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency to their algorithms...
Reducing Graph Structural Bias by Adding Shortcut Edges
Algorithms that tackle the problem of minimizing the average/maximum hitting time (BMAH/BMMH) between different social network groups, given a fixed budget of shortcut edges. The...
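As a toy illustration of the setting (the estimator and the greedy loop below are assumptions, not the paper's algorithms), the average hitting time between two groups can be estimated with Monte Carlo random walks, and the candidate shortcut edge that reduces that estimate the most can be kept:

```python
import random
import networkx as nx

def avg_hitting_time(G, sources, targets, walks=100, max_len=400):
    """Monte Carlo estimate of the mean steps to reach `targets` from `sources`."""
    targets = set(targets)
    total = 0
    for _ in range(walks):
        node = random.choice(sources)
        for step in range(max_len):
            if node in targets:
                break
            node = random.choice(list(G.neighbors(node)))
        total += step
    return total / walks

random.seed(0)
G = nx.barbell_graph(10, 4)      # two dense groups joined by a short path
group_a, group_b = list(range(10)), list(range(14, 24))

baseline = avg_hitting_time(G, group_a, group_b)
candidates = random.sample([(u, v) for u in group_a for v in group_b
                            if not G.has_edge(u, v)], 10)

def with_edge(e):
    H = G.copy()
    H.add_edge(*e)
    return H

best = min(candidates, key=lambda e: avg_hitting_time(with_edge(e), group_a, group_b))
G.add_edge(*best)
print(f"avg hitting time: {baseline:.0f} -> "
      f"{avg_hitting_time(G, group_a, group_b):.0f} after adding shortcut {best}")
```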
Visualizing the Results of Boolean Matrix Factorizations
We provide a method to visualize the results of Boolean Matrix Factorization algorithms. Our method can also be used to visualize overlapping clusters in bipartite graphs. The...
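A minimal sketch of one such visualization, assuming a simple reordering heuristic (group rows and columns by their dominant factor) rather than the paper's actual method:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
k, n, m = 3, 40, 60
B = rng.random((n, k)) < 0.25                    # Boolean row factors
C = rng.random((k, m)) < 0.25                    # Boolean column factors
X = (B.astype(int) @ C.astype(int)) > 0          # Boolean product, with overlaps

row_order = np.argsort(B.argmax(axis=1))         # group rows by dominant factor
col_order = np.argsort(C.argmax(axis=0))         # group columns likewise

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
axes[0].imshow(X, cmap="Greys", aspect="auto")
axes[0].set_title("X (original order)")
axes[1].imshow(X[np.ix_(row_order, col_order)], cmap="Greys", aspect="auto")
axes[1].set_title("X (rows/cols grouped by factor)")
plt.tight_layout()
plt.show()
```

After the reordering, each rank-1 block of the factorization appears as a contiguous tile, which is what makes the factors (and their overlaps) readable by eye.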
Grounds for Trust: Essential Epistemic Opacity and Computational Reliabilism
Several philosophical issues in connection with computer simulations rely on the assumption that results of simulations are trustworthy. Examples of these include the debate...
Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Predictio...
Recidivism prediction instruments (RPIs) provide decision-makers with an assessment of the likelihood that a criminal defendant will reoffend at a future point in time...
Solving the Black Box Problem: A Normative Framework for Explainable Artifici...
Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial...
XAI Method for Explaining Time Series
LASTS is a framework that can explain the decisions of black-box models for time series classification. The explanation consists of factual and counterfactual rules revealing...
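The sketch below is a simplified stand-in for this pipeline, not LASTS itself: it fits a surrogate decision tree on a synthetic neighborhood of the series and reads the root-to-leaf path off as a factual rule on individual time steps, whereas LASTS works in a learned latent space with shapelet-based rules.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def black_box(ts):
    """Toy black box: class 1 if the late part of the series is high."""
    return (ts[:, 30:].mean(axis=1) > 0.5).astype(int)

x = rng.random(50)                                 # series to explain
Z = x + rng.normal(scale=0.3, size=(500, 50))      # synthetic neighborhood
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Z, black_box(Z))

tree, node, rule = surrogate.tree_, 0, []
while tree.children_left[node] != -1:              # walk x's path to a leaf
    f, thr = tree.feature[node], tree.threshold[node]
    if x[f] <= thr:
        rule.append(f"t[{f}] <= {thr:.2f}")
        node = tree.children_left[node]
    else:
        rule.append(f"t[{f}] > {thr:.2f}")
        node = tree.children_right[node]
print("factual rule:", " AND ".join(rule), "->", black_box(x.reshape(1, -1))[0])
```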
Toward Accountable Discrimination-Aware Data Mining
"Big Data" and data-mined inferences are affecting more and more of our lives, and concerns about their possible discriminatory effects are growing. Methods for...
Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data
Decisions based on algorithmic, machine learning models can be unfair, reproducing biases in historical data used...
Machine Learning Explainability Through Comprehensible Decision Trees
The role of decisions made by machine learning algorithms in our lives is ever-increasing. In reaction to this phenomenon, the European General Data Protection Regulation...
GLocalX - Explaining in a Local to Global setting
GLocalX is a model-agnostic Local to Global explanation algorithm. Given a set of local explanations expressed in the form of decision rules, and a black-box model to explain,...
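A toy sketch of the local-to-global intuition, under simplified assumptions (interval premises, a stand-in black box, and a bare-bones fidelity test): two local rules with the same outcome are merged by widening their premises to cover both, and the merge is kept only if fidelity to the black box does not drop.

```python
import numpy as np

def covers(rule, X):
    """Boolean mask of rows satisfying every (low, high) interval premise."""
    mask = np.ones(len(X), dtype=bool)
    for f, (lo, hi) in rule["premises"].items():
        mask &= (X[:, f] >= lo) & (X[:, f] <= hi)
    return mask

def merge(r1, r2):
    """Least general generalization: widen each shared interval premise."""
    prem = {f: (min(r1["premises"][f][0], r2["premises"][f][0]),
                max(r1["premises"][f][1], r2["premises"][f][1]))
            for f in r1["premises"].keys() & r2["premises"].keys()}
    return {"premises": prem, "label": r1["label"]}

def fidelity(rule, X, bb_pred):
    m = covers(rule, X)
    return (bb_pred[m] == rule["label"]).mean() if m.any() else 0.0

rng = np.random.default_rng(0)
X = rng.random((1000, 2))
bb_pred = (X[:, 0] > 0.5).astype(int)            # stand-in black box

r1 = {"premises": {0: (0.5, 0.8)}, "label": 1}   # two local rules, same label
r2 = {"premises": {0: (0.7, 1.0)}, "label": 1}
merged = merge(r1, r2)
if fidelity(merged, X, bb_pred) >= min(fidelity(r1, X, bb_pred),
                                       fidelity(r2, X, bb_pred)):
    print("kept merged rule:", merged)
```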
Algorithmic Decision Making Based on Machine Learning from Big Data
Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would...
Predicting and Explaining Privacy Risk Exposure in Mobility Data
Mobility data are a proxy of different social dynamics, and their analysis enables a wide range of user services. Unfortunately, mobility data are very sensitive because the...
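One way to read the prediction step, sketched below under illustrative assumptions (synthetic per-user features and stand-in risk labels): once risk labels have been computed offline, e.g. by simulating re-identification attacks, a standard classifier can predict them from cheap mobility features, so the expensive simulation need not be re-run for new users.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_users = 400

# Per-user features: trip count, distinct locations, visit entropy.
trips = rng.integers(10, 300, size=n_users)
locations = rng.integers(2, 80, size=n_users)
entropy = rng.random(n_users) * np.log(locations)
X = np.column_stack([trips, locations, entropy])

# Stand-in for labels produced offline by a simulated re-identification
# attack: users with many distinct, low-entropy locations score as higher risk.
score = locations / (entropy + 1e-9)
risk = (score > np.median(score)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, risk, cv=5).mean())
```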
MARLENA
MARLENA is a novel technique able to explain the reasons behind any black-box multi-label classifier's decision. It generates an explanation in the form of a decision rule...
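An illustrative sketch of a MARLENA-style pipeline (the black box and the neighborhood scheme are simplified assumptions): generate a synthetic neighborhood around the instance, label it with the multi-label black box, fit a surrogate decision tree (sklearn trees accept multi-label targets), and read the instance's root-to-leaf path as the decision rule.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def black_box(X):
    """Toy multi-label classifier: two independent threshold labels."""
    return np.column_stack([X[:, 0] > 0.5, X[:, 1] + X[:, 2] > 1.0]).astype(int)

x = np.array([0.7, 0.6, 0.6, 0.1])                 # instance to explain
Z = x + rng.normal(scale=0.2, size=(500, 4))       # synthetic neighborhood
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Z, black_box(Z))

tree, node, rule = surrogate.tree_, 0, []
while tree.children_left[node] != -1:              # walk x's path to a leaf
    f, thr = tree.feature[node], tree.threshold[node]
    branch = "<=" if x[f] <= thr else ">"
    rule.append(f"x[{f}] {branch} {thr:.2f}")
    node = tree.children_left[node] if x[f] <= thr else tree.children_right[node]
print(" AND ".join(rule), "->", black_box(x.reshape(1, -1))[0])
```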
Heterogeneous Document Embeddings for Cross-Lingual Text Classification
Funnelling (Fun) is a method for cross-lingual text classification (CLC) based on a two-tier ensemble for heterogeneous transfer learning. In Fun, 1st-tier classifiers, each...
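A compact sketch of the two-tier idea, with synthetic data standing in for the multilingual corpora: each language gets its own calibrated 1st-tier classifier on language-specific features, and the resulting class posteriors, which live in a shared space, train a single 2nd-tier meta-classifier.

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

# Two "languages" with different feature spaces but the same label set.
X_en, y_en = make_classification(n_samples=300, n_features=50, n_classes=3,
                                 n_informative=10, random_state=0)
X_it, y_it = make_classification(n_samples=300, n_features=80, n_classes=3,
                                 n_informative=10, random_state=1)

# 1st tier: one calibrated classifier per language.
tier1 = {lang: CalibratedClassifierCV(LinearSVC()).fit(X, y)
         for lang, (X, y) in {"en": (X_en, y_en), "it": (X_it, y_it)}.items()}

# Funnel: posteriors from every language share one space, so they can be
# stacked to train a single 2nd-tier meta-classifier.
P = np.vstack([tier1["en"].predict_proba(X_en), tier1["it"].predict_proba(X_it)])
y_all = np.concatenate([y_en, y_it])
meta = LogisticRegression(max_iter=1000).fit(P, y_all)

print("meta accuracy on pooled posteriors:", meta.score(P, y_all))
```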