TR2024-141

Analyzing Inference Privacy Risks Through Gradients In Machine Learning


    •  Li, Z., Lowy, A., Liu, J., Koike-Akino, T., Parsons, K., Malin, B., Wang, Y., "Analyzing Inference Privacy Risks Through Gradients In Machine Learning", ACM Conference on Computer and Communications Security (CCS), October 2024.
      @inproceedings{Li2024oct,
        author = {Li, Zhuohang and Lowy, Andrew and Liu, Jing and Koike-Akino, Toshiaki and Parsons, Kieran and Malin, Bradley and Wang, Ye},
        title = {Analyzing Inference Privacy Risks Through Gradients In Machine Learning},
        booktitle = {ACM Conference on Computer and Communications Security (CCS)},
        year = 2024,
        month = oct,
        url = {https://www.merl.com/publications/TR2024-141}
      }
  • Research Areas: Artificial Intelligence, Machine Learning

Abstract:

In distributed learning settings, models are iteratively updated with shared gradients computed from potentially sensitive user data. While previous work has studied various privacy risks of sharing gradients, our paper aims to provide a systematic approach to analyze private information leakage from gradients. We present a unified game-based framework that encompasses a broad range of attacks including attribute, property, distributional, and user disclosures. We investigate how different uncertainties of the adversary affect their inferential power via extensive experiments on five datasets across various data modalities. Our results demonstrate the inefficacy of solely relying on data aggregation to achieve privacy against inference attacks in distributed learning. We further evaluate five types of defenses, namely, gradient pruning, signed gradient descent, adversarial perturbations, variational information bottleneck, and differential privacy, under both static and adaptive adversary settings. We provide an information-theoretic view for analyzing the effectiveness of these defenses against inference from gradients. Finally, we introduce a method for auditing attribute inference privacy, improving the empirical estimation of worst-case privacy through crafting adversarial canary records.
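As a rough illustration of the gradient-level defenses named above, the following minimal NumPy sketch (not the paper's implementation) shows how gradient pruning, signed gradient descent, and a differential-privacy-style clip-and-noise step are commonly applied to a gradient before it is shared. The function and parameter names (prune_ratio, clip_norm, noise_multiplier) are illustrative assumptions, not definitions from the paper.

    import numpy as np

    def prune_gradient(grad, prune_ratio=0.9):
        # Gradient pruning: zero out the smallest-magnitude coordinates,
        # keeping roughly the largest (1 - prune_ratio) fraction.
        k = int(prune_ratio * grad.size)
        if k == 0:
            return grad
        threshold = np.partition(np.abs(grad).ravel(), k - 1)[k - 1]
        return np.where(np.abs(grad) > threshold, grad, 0.0)

    def sign_gradient(grad):
        # Signed gradient descent: release only the sign of each coordinate.
        return np.sign(grad)

    def dp_gradient(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
        # DP-SGD-style defense: clip the gradient's L2 norm, then add
        # Gaussian noise scaled to the clipping bound.
        rng = np.random.default_rng() if rng is None else rng
        clipped = grad * min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
        return clipped + rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)

    # Example: apply each defense to a toy gradient before it leaves the client.
    g = np.random.default_rng(0).normal(size=100)
    for defend in (prune_gradient, sign_gradient, dp_gradient):
        print(defend.__name__, float(np.linalg.norm(defend(g))))

These one-line transforms are only stand-ins; the paper evaluates such defenses (along with adversarial perturbations and the variational information bottleneck) under both static and adaptive adversary settings.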

 

  • Related Publication

  •  Li, Z., Lowy, A., Liu, J., Koike-Akino, T., Parsons, K., Malin, B., Wang, Y., "Analyzing Inference Privacy Risks Through Gradients in Machine Learning", arXiv, August 2024.
    @article{Li2024aug,
      author = {Li, Zhuohang and Lowy, Andrew and Liu, Jing and Koike-Akino, Toshiaki and Parsons, Kieran and Malin, Bradley and Wang, Ye},
      title = {Analyzing Inference Privacy Risks Through Gradients in Machine Learning},
      journal = {arXiv},
      year = 2024,
      month = aug,
      url = {https://arxiv.org/abs/2408.16913}
    }