Paper Title

A Better Bound Gives a Hundred Rounds: Enhanced Privacy Guarantees via $f$-Divergences

Paper Authors

Shahab Asoodeh, Jiachun Liao, Flavio P. Calmon, Oliver Kosut, Lalitha Sankar

Paper Abstract

We derive the optimal differential privacy (DP) parameters of a mechanism that satisfies a given level of Rényi differential privacy (RDP). Our result is based on the joint range of two $f$-divergences that underlie the approximate and the Rényi variations of differential privacy. We apply our result to the moments accountant framework for characterizing privacy guarantees of stochastic gradient descent. When compared to the state-of-the-art, our bounds may lead to about 100 more stochastic gradient descent iterations for training deep learning models for the same privacy budget.
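
As background for the abstract's claim (this is standard material, not the paper's own bound): the two $f$-divergences in question are, under the usual definitions, the Rényi divergence of order $\alpha > 1$ and the hockey-stick divergence $E_\gamma$,

$$ D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1} \log \mathbb{E}_Q\!\left[\left(\frac{dP}{dQ}\right)^{\alpha}\right], \qquad E_\gamma(P \,\|\, Q) = \sup_{A} \bigl( P(A) - \gamma\, Q(A) \bigr). $$

A mechanism is $(\alpha, \varepsilon)$-RDP when $D_\alpha$ between its output distributions on any pair of neighboring datasets is at most $\varepsilon$, and it is $(\varepsilon, \delta)$-DP exactly when $E_{e^{\varepsilon}} \le \delta$ holds for all such pairs. The widely used conversion due to Mironov (2017),

$$ (\alpha, \varepsilon)\text{-RDP} \;\Longrightarrow\; \left(\varepsilon + \frac{\log(1/\delta)}{\alpha - 1},\; \delta\right)\text{-DP} \quad \text{for any } \delta \in (0, 1), $$

is loose in general; characterizing the joint range of $D_\alpha$ and $E_\gamma$ yields the optimal $(\varepsilon, \delta)$ pair implied by a given RDP level, which is what tightens the per-iteration accounting for stochastic gradient descent.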
