
Model Poisoning Attacks in Federated Learning

By Arjun Nitin Bhagoji, University of Chicago

To be announced 


While machine learning (ML)-based artificial intelligence (AI) systems are increasingly deployed in safety-critical settings, they remain unreliable under adverse conditions that violate their underlying statistical assumptions, leading to critical failures. These conditions can arise in both the training and test phases of ML pipelines. In this talk, I focus on attacks in the training phase, known as poisoning.

I will first introduce federated learning, a recent paradigm in distributed learning where agents collaborate with a server to jointly learn models. I will then show how a small number of compromised agents can mount strong poisoning attacks by sending optimized model-parameter updates, ensuring that data of the attacker's choice is misclassified by the global model. Experimentally, the proposed model poisoning attack is highly effective while bypassing standard detection methods. Defending against model poisoning continues to be an active area of research, and I will conclude by discussing some recent approaches.
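To make the setting concrete, here is a minimal sketch of one family of model poisoning attacks on federated averaging: benign agents send ordinary local gradient updates, while a single malicious agent sends an update crafted to flip the global model's prediction on a chosen target point, scaled up ("boosted") so it survives server-side averaging. All names, data, and hyperparameters below are illustrative assumptions, not the specific attack from the talk.

```python
import numpy as np

def local_update(w, X, y, lr=0.1):
    # One gradient step of logistic regression on the agent's local shard;
    # the agent sends only this model delta to the server.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return -lr * (X.T @ (p - y)) / len(y)

# --- toy setup (illustrative assumptions) ---
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
X = rng.normal(size=(200, 5))
y = (X @ true_w > 0).astype(float)
shards = np.array_split(np.arange(200), 10)    # 10 benign agents

x_tgt = np.array([-1.0, 1.0, 0.0, 1.0, -1.0])  # point the attacker wants labeled 1
n_agents = 11                                  # 10 benign + 1 malicious

def train(malicious, rounds=100):
    w = np.zeros(5)
    for _ in range(rounds):
        updates = [local_update(w, X[idx], y[idx]) for idx in shards]
        if malicious:
            # Model poisoning: a gradient step pushing x_tgt toward class 1,
            # explicitly boosted by n_agents so averaging does not dilute it.
            p_tgt = 1.0 / (1.0 + np.exp(-x_tgt @ w))
            updates.append(n_agents * 0.5 * (1.0 - p_tgt) * x_tgt)
        w = w + np.mean(updates, axis=0)       # server: federated averaging
    return w

w_clean, w_poisoned = train(False), train(True)
print("clean score:   ", x_tgt @ w_clean)      # negative: target classified 0
print("poisoned score:", x_tgt @ w_poisoned)   # positive: target flipped to 1
```

The boosting factor is the key design choice: without it, averaging over all agents would shrink the malicious update by a factor of `n_agents`, which is also why naive magnitude-based anomaly checks at the server are a natural, if imperfect, detection strategy.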