Label flipping attacks in Federated Learning
Data and Resources
FederatedLearning-sklearn.ipynb
Jupyter notebook showing the setup of a federated learning loop using...
Item URL
https://data.d4science.org/ctlg/ResourceCatalogue/label_flipping_attacks_in_federated_learning
Additional Info
Field | Value |
---|---|
Detailed description | In this experiment, we showcase a federated training loop using scikit-learn to classify MNIST. Additionally, we show a poisoning attack, namely the label flipping attack, in which attackers change the labels of a target class to a different class in order to make the global model misclassify one or more classes. Finally, we show some defense mechanisms based on the analysis of user-contributed updates, including a distance-based detection metric, Krum, and median aggregation. An illustrative sketch of such a loop is included at the end of this record. |
Ethical issues | None identified; only the public MNIST dataset was used for the tests. |
Group | Ethics and Legality |
Involved Institutions | Universitat Rovira i Virgili |
Involved People | Domingo-Ferrer, Josep, [email protected], orcid.org/0000-0001-7213-4962 |
Involved People | Blanco-Justicia, Alberto, [email protected], orcid.org/0000-0002-1108-8082 |
State | Complete |
Thematic Cluster | Visual Analytics [VA] |
Thematic Cluster | Privacy Enhancing Technology [PET] |
system:type | Experiment |
Management Info
Field | Value |
---|---|
Author | Blanco-Justicia, Alberto |
Maintainer | Blanco-Justicia, Alberto |
Version | 1 |
Last Updated | 22 July 2023, 13:52 (CEST) |
Created | 24 June 2023, 01:09 (CEST) |
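Illustrative sketch

The following is a minimal sketch, not the notebook's actual code, of the kind of experiment described above: a federated averaging loop built around scikit-learn's SGDClassifier, a label flipping attacker that relabels one class as another, and coordinate-wise median aggregation as one of the defenses mentioned in the description. It uses scikit-learn's small digits dataset as a stand-in for MNIST; the client counts, number of rounds, flipped classes, and hyperparameters are illustrative assumptions, and Krum or the distance-based detection metric would replace the aggregation step.

```python
# Minimal sketch of a federated learning loop with a label flipping attack
# and median aggregation. Illustrative only: uses sklearn's digits dataset as
# a stand-in for MNIST, and hypothetical client counts and hyperparameters.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

N_CLIENTS, N_MALICIOUS, ROUNDS, LOCAL_EPOCHS = 10, 3, 20, 5
CLASSES = np.unique(y_train)
shards = np.array_split(rng.permutation(len(X_train)), N_CLIENTS)

def flip_labels(labels, src=1, dst=7):
    """Label flipping attack: relabel every sample of class `src` as `dst`."""
    flipped = labels.copy()
    flipped[labels == src] = dst
    return flipped

def local_update(coef, intercept, X_loc, y_loc):
    """Local client training that starts from the current global parameters."""
    clf = SGDClassifier(random_state=0)
    clf.coef_, clf.intercept_, clf.classes_ = coef.copy(), intercept.copy(), CLASSES
    for _ in range(LOCAL_EPOCHS):
        clf.partial_fit(X_loc, y_loc, classes=CLASSES)
    return clf.coef_, clf.intercept_

# Initialize the global model to obtain parameter arrays of the right shape.
global_model = SGDClassifier(random_state=0)
global_model.partial_fit(X_train[:100], y_train[:100], classes=CLASSES)
g_coef, g_int = global_model.coef_, global_model.intercept_

for rnd in range(ROUNDS):
    coefs, ints = [], []
    for c, idx in enumerate(shards):
        y_loc = y_train[idx]
        if c < N_MALICIOUS:          # malicious clients flip class 1 into class 7
            y_loc = flip_labels(y_loc)
        cf, it = local_update(g_coef, g_int, X_train[idx], y_loc)
        coefs.append(cf)
        ints.append(it)
    # Defense: coordinate-wise median of the client updates instead of the
    # plain average; Krum or a distance-based filter would replace this step.
    g_coef = np.median(np.stack(coefs), axis=0)
    g_int = np.median(np.stack(ints), axis=0)

# Evaluate the aggregated global model on held-out data.
global_model.coef_, global_model.intercept_ = g_coef, g_int
print("Test accuracy of the global model:", global_model.score(X_test, y_test))
```

Median aggregation limits the influence of the minority of flipped updates because each coordinate of the global parameters follows the majority of clients; with an honest majority, the flipped class is still learned correctly in this sketch.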