Federated learning and differential privacy: Machine learning and deep learning for biomedical image data classification
Sobia Wassan, Liudajun, Han Ying, Hu Dongyan, Pan Fei. Digit Health. 2025 Sep 11;11:20552076251358531. doi: 10.1177/20552076251358531. eCollection 2025 Jan-Dec.
Abstract
Background: The integration of differential privacy and federated learning in healthcare is key to maintaining patient confidentiality while ensuring accurate predictive modeling. As privacy concerns grow, it is essential to explore methods that protect sensitive data without compromising model performance.
Objective: This study evaluates the effectiveness of feedforward neural networks (FNNs), Gaussian processes (GPs), and multilayer perceptrons (MLPs), a class of deep neural networks, in classifying biomedical image data, incorporating federated learning to enhance privacy preservation.
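To make the privacy-preserving setup concrete, the following is a minimal sketch of one federated averaging round in which each client's update is clipped and perturbed with Gaussian noise before aggregation. It is illustrative only: the clipping threshold, noise scale, and the use of NumPy weight vectors are assumptions, not the authors' implementation.

```python
import numpy as np

def dp_client_update(global_w, local_w, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip a client's weight update and add Gaussian noise (Gaussian mechanism)."""
    rng = rng or np.random.default_rng()
    update = local_w - global_w
    norm = np.linalg.norm(update)
    update = update * min(1.0, clip_norm / (norm + 1e-12))  # bound the update's sensitivity
    return update + rng.normal(0.0, noise_std * clip_norm, size=update.shape)

def federated_round(global_w, client_local_weights):
    """Average the privatized client updates and apply them to the global model."""
    updates = [dp_client_update(global_w, w) for w in client_local_weights]
    return global_w + np.mean(updates, axis=0)

# Toy example: three simulated clients refining a shared 10-dimensional weight vector
global_w = np.zeros(10)
clients = [global_w + np.random.randn(10) * 0.05 for _ in range(3)]
global_w = federated_round(global_w, clients)
```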
Method: We implemented FNN, GP, and MLP models using federated learning and differential privacy techniques. Models were evaluated based on training and validation accuracy, correlation coefficients, mean absolute error (MAE), root mean squared error (RMSE), and relative errors, including relative absolute error (RAE) and relative root squared error (RRSE).
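For reference, the reported error metrics can be computed as below. This is a generic sketch of the standard definitions (RAE and RRSE are taken relative to a mean-value baseline predictor); the toy values are not the study's data.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute MAE, RMSE, relative absolute error (RAE), and relative root squared error (RRSE)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    resid = y_true - y_pred
    baseline = y_true - y_true.mean()  # errors of the naive mean predictor
    return {
        "MAE": np.mean(np.abs(resid)),
        "RMSE": np.sqrt(np.mean(resid ** 2)),
        "RAE": np.sum(np.abs(resid)) / np.sum(np.abs(baseline)),
        "RRSE": np.sqrt(np.sum(resid ** 2) / np.sum(baseline ** 2)),
    }

# Example with toy values
print(regression_metrics([100, 150, 200], [110, 140, 195]))
```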
Results: The FNN achieved 86.49% training accuracy and 82.08% overall accuracy, but its 68.75% validation accuracy suggested overfitting. The GP model had a correlation coefficient of 0.9741, an MAE of 108.38, and an RMSE of 173.49. The MLP outperformed the other models, with a correlation coefficient of 0.9980, an MAE of 36.80, and an RMSE of 51.01. Federated learning improved privacy while maintaining model performance.
Conclusion: Federated learning with differential privacy offers a promising solution for secure and accurate biomedical image classification, supporting privacy-preserving machine learning in medical diagnostics without compromising performance.