Securing federated learning: A comprehensive survey on privacy challenges and solutions in medical image analysis
Abstract
This paper provides a comprehensive survey of security and privacy issues related to Federated Learning (FL) in medical image analysis. FL enables multiple healthcare institutions to train models jointly without sharing sensitive patient data. However, it remains susceptible to various threats, including data poisoning, Byzantine attacks, and inference attacks. We systematically categorize these threats and identify the two key assets they target: medical data and model parameters. We then survey defense strategies, including cryptographic methods, secure aggregation, perturbation techniques, and related security protocols, that safeguard patient information and preserve model integrity. We also examine the role and impact of FL in medical imaging applications, emphasizing its ability to protect patient privacy while potentially improving diagnostic accuracy. Finally, we discuss open challenges such as data heterogeneity, communication overhead, and lack of standardization, and outline future research directions to address them. This survey serves as a comprehensive reference for researchers and practitioners developing secure, privacy-preserving FL systems in healthcare, with the ultimate aim of improving medical decision-making.