International Association for Cryptologic Research

IACR News item: 26 September 2022

Yun Lu, Yu Wei, Malik Magdon-Ismail, Vassilis Zikas
ePrint Report
Differential Privacy (DP) is one of the gold standards of privacy. Nonetheless, when one is interested in mechanisms with theoretical guarantees, one has to either choose from a relatively small palette of generic mechanisms---such as the Laplace, Gaussian, and exponential mechanisms---or develop a new, problem-specific mechanism and analyze its privacy. This makes it challenging for non-experts in security to utilize DP for preserving privacy in complex tasks in areas like machine learning, data science, and medicine, which are primary application domains of DP.
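As a point of reference for the generic mechanisms mentioned above, the Laplace mechanism can be sketched in a few lines. This is a minimal illustration, not code from the paper; `sensitivity` denotes the query's L1 sensitivity:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    # Adding Laplace noise with scale sensitivity/epsilon to a query with
    # the given L1 sensitivity yields (epsilon, 0)-differential privacy.
    rng = rng if rng is not None else np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)
```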

Our work aims to address the above limitation. In a nutshell, we devise a methodology that lets domain experts with limited knowledge of security estimate the (differential) privacy of an arbitrary mechanism. Our Eureka moment is the utilization of a link---which we prove---between the problems of DP parameter estimation and Bayes optimal classifiers in machine learning, which we believe can be of independent interest. Our estimator methodology uses this link to achieve two desirable properties: (1) it is black-box, i.e., it does not require knowledge of the underlying mechanism, and (2) it has a theoretically proven accuracy, which depends on the underlying classifier used. This allows domain experts to design mechanisms that they conjecture offer certain (differential) privacy guarantees---but perhaps cannot prove---and apply our method to confirm (or disprove) their conjecture.
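To make the black-box estimation problem concrete, the following sketch estimates the $\epsilon$ separating a real-valued mechanism's output distributions on two adjacent inputs by brute-force density comparison. It is only an illustration of the problem setting, assuming a single-output numeric mechanism; it is a heuristic with no accuracy guarantee, and is not the paper's classifier-based estimator:

```python
import numpy as np

def empirical_epsilon(mechanism, d0, d1, n=100_000, bins=50, min_mass=1e-3, seed=0):
    # Naive histogram plug-in: sample the mechanism (treated as a black box)
    # on two adjacent inputs, estimate the two output densities, and take
    # the largest absolute log-density ratio over well-populated bins.
    rng = np.random.default_rng(seed)
    a = np.array([mechanism(d0, rng) for _ in range(n)])
    b = np.array([mechanism(d1, rng) for _ in range(n)])
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    pa, _ = np.histogram(a, bins=bins, range=(lo, hi))
    pb, _ = np.histogram(b, bins=bins, range=(lo, hi))
    pa, pb = pa / n, pb / n
    mask = (pa > min_mass) & (pb > min_mass)  # skip poorly estimated tail bins
    return np.abs(np.log(pa[mask] / pb[mask])).max()
```

On a Laplace mechanism with $\epsilon = 1$ this recovers an estimate near 1, but unlike the estimator in the paper, its accuracy degrades badly as the output space grows, which is exactly the regime the classifier-based approach targets.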

More concretely, we first prove a new impossibility result: for the classical DP notion, there is no black-box poly-time estimator of $(\epsilon,\delta)$-DP. This motivates a natural relaxation of DP, which we term relative DP. Relative DP preserves the desirable properties of DP---composition, robustness to post-processing, and robustness to the disclosure of new data---and applies in most practical settings where privacy is desired. We then devise a black-box poly-time $(\epsilon,\delta)$-relative DP estimator---the first to support mechanisms with large output spaces while having tight accuracy bounds. As a result of independent interest, we apply this theory to develop the first approximate estimator for the standard, i.e., non-relative, definition of Distributional Differential Privacy (DDP), aka noiseless privacy.
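For reference, the classical guarantee whose parameters are being estimated is the following: a mechanism $M$ is $(\epsilon,\delta)$-DP if, for every pair of adjacent datasets $D, D'$ and every measurable output set $S$,

```latex
\Pr[M(D) \in S] \le e^{\epsilon} \Pr[M(D') \in S] + \delta
```

The impossibility result above concerns black-box estimation of the parameters $(\epsilon, \delta)$ in this definition; see the paper for the precise statement of relative DP.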

To demonstrate both our theory and its potential for practical impact, we devised a proof-of-concept implementation of our estimator and benchmarked it against well-studied DP mechanisms. We show that, in reasonable execution time, our estimator can reproduce the tight, analytically computed $(\epsilon, \delta)$ trade-off of the Laplace and Gaussian mechanisms---to our knowledge, the first black-box estimator to do so. For the Sparse Vector Technique, our outputs are comparable to those of a more specialized state-of-the-art $(\epsilon, \delta)$-DP estimator.
