Privlib
Privlib is a Python software package to manage privacy risk and discrimination in tabular and sequential data. It comprises methods to assess privacy risk (PRUDEnce) and...
Papers on Gender Bias in Academic Promotions
This dataset contains the result of a systematic mapping study conducted to analyse how the issue of gender bias in academic promotions has been addressed by the literature...
CSV
MANILA
MANILA is a low-code web application to support the specification and execution of machine learning fairness evaluations. In particular, through MANILA it is possible to...
Democratizing Quality-Based Machine Learning Development through Extended Fea...
ML systems have become an essential tool for experts in many domains, data scientists, and researchers, allowing them to find answers to many complex business questions...
bibtex
Python library for direct and indirect discrimination prevention in data mining
This Python library implements the discrimination discovery and prevention method proposed in the paper: “A methodology for direct and indirect discrimination prevention in...
GitHub
Algorithmic Decision Making Based on ML from Big Data. Can Transparency Resto...
Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would...
Explaining Sentiment Classification with Synthetic Exemplars and Counter-Exem...
We present xspells, a model-agnostic local approach for explaining the decisions of a black box model for sentiment classification of short texts. The explanations provided...
HTML
Multi-layered Explanations from Algorithmic Impact Assessments in the GDPR
Impact assessments have received particular attention on both sides of the Atlantic as a tool for implementing algorithmic accountability. The aim of this paper is to address...
Machine Learning Explainability Via Microaggregation and Shallow Decision Trees
Artificial intelligence (AI) is being deployed in missions that are increasingly critical for human life. To build trust in AI and avoid an algorithm-based authoritarian...
Explanation of Deep Models with Limited Interaction for Trade Secret and Priv...
An ever-increasing number of decisions affecting our lives are made by algorithms. For this reason, algorithmic transparency is becoming a pressing need: automated decisions...
Grounds for Trust: Essential Epistemic Opacity and Computational Reliabilism
Several philosophical issues in connection with computer simulations rely on the assumption that results of simulations are trustworthy. Examples of these include the debate...
Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Predictio...
Recidivism prediction instruments (RPIs) provide decision-makers with an assessment of the likelihood that a criminal defendant will reoffend at a future point in time...
Solving the Black Box Problem: A Normative Framework for Explainable Artifici...
Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial...
Toward Accountable Discrimination-Aware Data Mining
"Big Data" and data-mined inferences are affecting more and more of our lives, and concerns about their possible discriminatory effects are growing. Methods for... -
Machine Learning Explainability Through Comprehensible Decision Trees
The role of decisions made by machine learning algorithms in our lives is ever increasing. In reaction to this phenomenon, the European General Data Protection Regulation...
How the machine 'thinks': Understanding opacity in machine learning algorithms
This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud... -
Seeing without knowing: Limitations of transparency and its application to al...
Models for understanding and holding systems accountable have long rested upon ideals and logics of transparency. Being able to see a system is sometimes equated with being able... -
Label flipping attacks in Federated Learning
The following experiments showcase label-flipping attacks against Federated Learning using scikit-learn.
ipynb
Private Deliverable D2.3: Report on WP2 activities
Bias in algorithmic filtering and personalization
Online information intermediaries such as Facebook and Google are slowly replacing traditional media channels, thereby partly becoming the gatekeepers of our society. To deal...