Federated Adversarial Attacks in ConvNets
Abundant prior work has shown that convolutional networks (ConvNets) are vulnerable to adversarial attacks, which introduce only quasi-imperceptible perturbations to the input images yet can completely deceive the models. However, such attacks have not been fully investigated under the federated scheme, where a large number of local devices collaborate to train a robust model. In this project, we investigate adversarial attacks in the federated learning setting, where the central server aggregates the adversarial examples from local devices without access to users' data. We show that the aggregated attack examples are more robust and can deceive more models with different architectures and training data. We also implement a system with cloud storage to efficiently simulate the environment and facilitate future research.
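As a rough illustration of the idea (the exact attack and aggregation rule live in the scripts under system_script/; the helper names below are hypothetical), each device could craft an FGSM perturbation on its private batch and upload only the perturbation, which the server then averages:

# Minimal sketch, assuming FGSM on each device and simple averaging at the
# server. Function names are hypothetical, not the repo's actual API.
import torch
import torch.nn.functional as F

def local_perturbation(model, images, labels, eps=8 / 255):
    """Craft an FGSM perturbation on one device's private batch."""
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Only this perturbation tensor leaves the device; raw images stay local.
    return eps * images.grad.sign()

def aggregate(perturbations):
    """Server side: average the perturbations uploaded by the local devices."""
    return torch.stack(perturbations).mean(dim=0)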
This repository contains the code for the project of course CS244R at Harvard University. We thank Prof. HT Kung, Marcus Comiter, and Sai Qian Zhang for their help and guidance on this project.
The code is developed and tested under the following configuration. Hardware: one or more GPUs (set [--num-gpu GPUS] accordingly).
conda create -n mypython3 python=3.6
source activate mypython3
conda install pytorch torchvision cudatoolkit=9.0 -c pytorch
pip install -r requirements.txt
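After installation, a quick sanity check (optional, not part of the repo) confirms that PyTorch and CUDA are visible:

# Optional sanity check: verify the environment is set up correctly.
import torch
import torchvision
print("torch:", torch.__version__, "| torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())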
python system_script/[real_system_with_adaptive_buffer_read]global_server.py
Then run several jobs of: python system_script/[real_system_with_adaptive_buffer_read]local_node.py
python system_script/[imbalanced_upload_simulation]global_server.py
Then run several jobs of: python system_script/[imbalanced_upload_simulation]local_node.py (see the launcher sketch below)
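For convenience, the local-node jobs can be started in parallel from one script. A minimal sketch (hypothetical, not part of the repo) using subprocess, shown here for the imbalanced-upload simulation and equally applicable to the adaptive-buffer-read scripts:

# Hypothetical launcher: start several local-node jobs in parallel and wait
# for all of them to finish. The node count (4) is an arbitrary example.
import subprocess

NODE_SCRIPT = "system_script/[imbalanced_upload_simulation]local_node.py"

procs = [subprocess.Popen(["python", NODE_SCRIPT]) for _ in range(4)]
for p in procs:
    p.wait()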
This project is licensed under the MIT License - see the LICENSE file for details.