International Association for Cryptologic Research

IACR News item: 23 January 2023

Shalini Banerjee, Steven D. Galbraith, Giovanni Russello
ePrint Report
The use of data as a product and service has driven the widespread uptake of complex machine learning algorithms that perform prediction with popular tree-based methods such as decision tree classifiers. With increasing adoption across a wide array of sensitive applications, there is a significant need to protect the confidentiality of both the classifier model and the user's data. The existing literature safeguards them with interactive solutions built on expensive cryptographic approaches, in which an encrypted classifier model processes encrypted queries and returns the encrypted classification to the user. Moreover, the state-of-the-art protocols for protecting the privacy of the model do not protect against model-extraction attacks.

We design an efficient virtual black-box obfuscator for binary decision trees and use the random oracle paradigm to analyze the security of our construction. To thwart model-extraction attacks, we restrict our attention to evasive decision trees, for which black-box access to the classifier does not allow a PPT adversary to extract the model. In doing so, we present an encoder for hiding the parameters of an interval-membership function. Our goal in designing the obfuscator is twofold: to enlarge the class of functions that admit cryptographically secure obfuscators, and to address the open problem of non-interactive prediction in privacy-preserving classification using computationally inexpensive cryptographic hash functions.
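To illustrate the general flavour of hash-based, non-interactive evaluation, the following is a minimal Python sketch of the classic random-oracle point-function obfuscator. It is not the construction from the paper, whose encoder handles interval-membership tests and full decision trees; the function names and parameters here are purely illustrative.

```python
import hashlib
import os

# Toy point-function obfuscation in the random oracle model:
# the obfuscation stores only a salted hash of the accepting
# input, so evaluation is non-interactive and, for an evasive
# target, black-box queries reveal nothing useful about the
# hidden value.

def obfuscate_point(secret_value: bytes) -> tuple[bytes, bytes]:
    """Hide secret_value behind a salted SHA-256 digest."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + secret_value).digest()
    return salt, digest

def evaluate(obfuscation: tuple[bytes, bytes], query: bytes) -> bool:
    """Accept iff the query hashes to the stored digest."""
    salt, digest = obfuscation
    return hashlib.sha256(salt + query).digest() == digest

obf = obfuscate_point(b"secret-threshold")
print(evaluate(obf, b"secret-threshold"))  # True
print(evaluate(obf, b"wrong-guess"))       # False
```

The paper's contribution can be read as extending this hash-and-compare style of evaluation from single points to interval-membership tests and evasive decision trees, while keeping prediction non-interactive and relying only on inexpensive hash functions.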

Additional news items may be found on the IACR news page.