International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

11 May 2023

Antoine Joux
ePrint Report
In this paper, we take inspiration from an invited talk presented at CBCrypto'23 to design identification protocols and signature schemes from group actions using the MPC-in-the-head paradigm.
Joel Gärtner
ePrint Report
Lattice-based cryptosystems are some of the primary post-quantum secure alternatives to the asymmetric cryptography in use today. These cryptosystems typically rely on the hardness of some version of either the NTRU or the LWE problem. In this paper, we present the NTWE problem, a natural combination of the NTRU and LWE problems, and construct a new lattice-based cryptosystem based on the hardness of the NTWE problem. As with the NTRU and LWE problems, the NTWE problem naturally corresponds to a problem in a $q$-ary lattice. This allows the hardness of the NTWE problem to be estimated in the same way as for the LWE and NTRU problems. We parametrize our cryptosystem from such a hardness estimate, and the resulting scheme has performance competitive with that of typical lattice-based schemes. In some sense, our NTWE-based cryptosystem can be seen as a less structured and more compact version of a cryptosystem based on the module-NTRU problem. Thus, parameters for our cryptosystem can be selected with the flexibility of a module-LWE-based scheme, while other properties of our system are more similar to those of an NTRU-based system.
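For context, recall standard formulations of the two ingredient problems (the precise NTWE definition is given in the paper itself): the NTRU problem asks, given $h = g \cdot f^{-1} \bmod q$ in a polynomial ring $R_q$ with $f, g$ short, to recover $(f, g)$; the (search) LWE problem asks, given $(A, b = As + e \bmod q)$ with $A$ uniform and $e$ short, to recover $s$. In both cases the solution corresponds to a short vector in an associated $q$-ary lattice, e.g. $\{(x, y) \in R_q^2 : hx - y \equiv 0 \pmod q\}$ for NTRU, which is why lattice-reduction-based hardness estimates apply uniformly, as the abstract notes.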
Reo Eriguchi
ePrint Report
Bottleneck complexity is an efficiency measure of secure multiparty computation (MPC) introduced by Boyle et al. (ICALP 2018) to achieve load-balancing. Roughly speaking, it is defined as the maximum communication complexity required by any player within the protocol execution. Since it is impossible to achieve bottleneck complexity sublinear in the number of players $n$ for all functions, prior work constructed MPC protocols with low bottleneck complexity for specific functions, including the AND function and general symmetric functions. However, the previous protocol for symmetric functions relies on garbled circuits, a computational primitive. Its unconditionally secure variant has bottleneck complexity exponential in the depth of an arithmetic formula computing the function, which limits the class of symmetric functions the protocol can compute with bottleneck complexity sublinear in $n$.

In this paper, we propose the first unconditionally secure MPC protocols computing any symmetric function with bottleneck complexity sublinear in $n$. Our first protocol is an application of the one-time truth-table protocol by Ishai et al. (TCC 2013). We devise a novel technique to express the truth-table as an array of two or more dimensions and obtain two other protocols with better trade-offs. We also propose an unconditionally secure protocol with lower bottleneck complexity tailored to the AND function. It avoids the pseudorandom functions used by the previous protocol, preserving bottleneck complexity up to a logarithmic factor in $n$.

As an application, we construct an unconditionally secure protocol for private set intersection (PSI), which computes the intersection of players' private sets. This is the first PSI protocol with bottleneck complexity sublinear in $n$; to the best of our knowledge, no such protocol was known even under cryptographic assumptions.
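In symbols, following the informal definition above: if $\mathsf{CC}_i(\Pi)$ denotes the total number of bits sent and received by player $i$ in a protocol $\Pi$, then the bottleneck complexity is $\mathsf{BC}(\Pi) = \max_{i \in [n]} \mathsf{CC}_i(\Pi)$, in contrast to the standard communication complexity $\sum_{i \in [n]} \mathsf{CC}_i(\Pi)$. For example, a star-topology protocol in which one central player exchanges a single bit with each other player already has $\mathsf{BC}(\Pi) = \Omega(n)$, which is exactly what load-balanced protocols with bottleneck complexity sublinear in $n$ must avoid.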
Tiago Martins, João Farinha
ePrint Report
This technical paper explores two solutions for the arithmetization of computational integrity statements in STARKs: the algebraic intermediate representation, AIR, and its preprocessed variant, PAIR. The work then focuses on their soundness implications for Reed-Solomon proximity testing. It proceeds by presenting a comparative study of these methods, providing their theoretical foundations and deriving the degree bounds for low-degree proximity testing. The study shows that using PAIR increases the degree bound for Reed-Solomon proximity testing, which affects its soundness and complexity, but also explores the possibility of reducing the degree bound with multiple selector columns. The paper concludes that, while PAIR may simplify constraint enforcement, it can be easily translated to AIR, and system testing with benchmarks is necessary to determine the application-specific superiority of either method. This work should provide insight into the strengths and limitations of each method, helping researchers and practitioners in the field of STARKs make informed design choices.
Andrea Basso, Luciano Maino, Giacomo Pope
ePrint Report
We introduce FESTA, an efficient isogeny-based public-key encryption (PKE) protocol based on a constructive application of the SIDH attacks. At its core, FESTA is based on a novel trapdoor function, which uses an improved version of the techniques proposed in the SIDH attacks to develop a trapdoor mechanism. Using standard transformations, we construct an efficient PKE that is IND-CCA secure in the QROM. Additionally, using a different transformation, we obtain the first isogeny-based PKE that is IND-CCA secure in the standard model. Lastly, we propose a method to efficiently find parameters for FESTA, and we develop a proof-of-concept implementation of the protocol. We expect FESTA to offer practical performance that is competitive with existing isogeny-based constructions.
Tianrui Wang, Anyu Wang, Xiaoyun Wang
ePrint Report
Code-based cryptography has received a lot of attention recently because it is considered secure under quantum computing. Among code-based schemes, QC-MDPC based schemes are some of the most promising due to their excellent performance. QC-MDPC based schemes are usually subject to a small rate of decryption failure, which can leak information about the secret key. This raises two crucial problems: how to accurately estimate the decryption failure rate, and how to use the failure information to recover the secret key. Both problems are challenging due to the difficulty of geometrically characterizing the bit-flipping decoder employed in QC-MDPC schemes, for instance by means of a decoding radius.

In this work, we introduce the gathering property and show that it is strongly connected with the decryption failure rate of QC-MDPC. Based on the gathering property, we present two results for QC-MDPC based schemes. The first is a new construction of weak keys, obtained by extending the keys that have the gathering property via ring isomorphism. For this set of weak keys, we present a rigorous analysis of their probability, as well as experimental simulations of the decryption failure rates. Considering BIKE's parameter set targeting $128$-bit security, our results indicate that the average decryption failure rate is lower bounded by $DFR_{avg} \ge 2^{-122.57}$. The second is a key recovery attack against CCA secure QC-MDPC schemes using decryption failures in a multi-target setting. By decrypting ciphertexts with errors satisfying the gathering property, we show that a single decryption failure can be used to identify whether a target's secret key satisfies the gathering property. Then, using the gathering property as extra information, we present a modified information set decoding algorithm that efficiently retrieves the target's secret key. For BIKE's parameter set targeting $128$-bit security, a key recovery attack with complexity $2^{119.88}$ can be expected by using extrapolated decryption failure rates.
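For illustration, here is a minimal, schematic Python sketch of a bit-flipping decoder of the kind whose failure behaviour is studied above. Deployed QC-MDPC decoders (e.g., BIKE's) use sparse quasi-cyclic arithmetic and carefully tuned flipping thresholds; the dense matrix and the simple max-threshold rule below are simplifications for exposition.

```python
import numpy as np

def bit_flip_decode(H, s, max_iters=50):
    """Schematic bit-flipping decoder for a binary code.
    H: (r x n) 0/1 parity-check matrix; s: length-r syndrome (0/1 ints).
    Returns e with (H @ e) % 2 == s, or None on decoding failure."""
    H = np.asarray(H, dtype=np.int64)
    s = np.asarray(s, dtype=np.int64)
    e = np.zeros(H.shape[1], dtype=np.int64)
    for _ in range(max_iters):
        syn = (s + H @ e) % 2              # residual syndrome
        if not syn.any():
            return e                       # success: syndrome cleared
        upc = H.T @ syn                    # unsatisfied-check count per position
        e = (e + (upc == upc.max())) % 2   # flip the most-suspect positions
    return None                            # decoding failure
```

A decryption failure corresponds to the loop exhausting its iterations without clearing the syndrome; it is the probability of this event that the gathering property helps characterize.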
Zhengjun Cao, Lihua Liu
ePrint Report
We show that the data de-duplication scheme [Internet of Things, 2021(14): 100376] is flawed. (1) There are some inconsistent notations and false equations, which should be corrected. (2) The scheme fails to keep user anonymity, contrary to its claims. (3) The scheme could also fail to keep data confidentiality.
Yuyang Sang, Ning Luo, Samuel Judson, Ben Chaimberg, Timos Antonopoulos, Xiao Wang, Ruzica Piskac, Zhong Shao
ePrint Report
A zero-knowledge proof (ZKP) is a powerful cryptographic primitive used in many decentralized or privacy-focused applications. However, the high overhead of ZKPs can restrict their practical applicability. We design a programming language, Ou, aimed at easing the programmer's burden when writing efficient ZKPs, and a compiler framework, Lian, that automates the analysis and distribution of statements to a computing cluster. Lian uses programming language semantics, formal methods, and combinatorial optimization to automatically partition an Ou program into efficiently sized chunks for parallel ZK-proving and/or verification.

We contribute:
• A front-end language where users can write proof statements as imperative programs in a familiar syntax;
• A compiler architecture and implementation that automatically analyzes the program and compiles it into an optimized IR that can be lifted to a variety of ZKP constructions; and
• A cutting algorithm, based on Pseudo-Boolean optimization and Integer Linear Programming, that reorders instructions and then partitions the program into efficiently sized chunks for parallel evaluation and efficient state reconciliation (see the sketch below).
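As a toy illustration of the partitioning step alone (the cost model and the restriction to contiguous chunks are simplifications; Lian additionally reorders instructions and solves a pseudo-Boolean/ILP formulation), the following sketch splits a straight-line sequence of per-instruction costs into $k$ chunks while minimizing the cost of the largest chunk:

```python
def partition_chunks(costs, k):
    """Split costs[0..n) into k contiguous chunks, minimizing the largest
    chunk's total cost. A toy stand-in for Lian's ILP-based cutter."""
    n = len(costs)
    prefix = [0]
    for c in costs:
        prefix.append(prefix[-1] + c)
    INF = float('inf')
    # best[j][i]: minimal achievable max-chunk cost covering costs[:i] with j chunks
    best = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    best[0][0] = 0
    for j in range(1, k + 1):
        for i in range(1, n + 1):
            for m in range(j - 1, i):      # last chunk is costs[m:i]
                cand = max(best[j - 1][m], prefix[i] - prefix[m])
                if cand < best[j][i]:
                    best[j][i], cut[j][i] = cand, m
    cuts, i = [], n                        # walk back to recover the cut points
    while i > 0:
        cuts.append(cut[k][i])
        i, k = cut[k][i], k - 1
    return best[-1][n], sorted(cuts)[1:]   # drop the leading 0

print(partition_chunks([3, 1, 4, 1, 5, 9, 2, 6], 3))   # -> (14, [4, 6])
```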
Bolton Bailey, Andrew Miller
ePrint Report
Succinct Non-interactive Arguments of Knowledge (SNARKs) have seen interest and development from the cryptographic community over recent years, and there are now constructions with very small proof size designed to work well in practice. A SNARK protocol can only be widely accepted as secure, however, if a rigorous proof of its security properties has been vetted by the community. Even then, it is sometimes the case that these security proofs are flawed, and it is then necessary for further research to identify these flaws and correct the record.

To increase the rigor of these proofs, we turn to formal methods. Focusing on the soundness aspect of a widespread class of SNARKs, we formalize proofs for six different constructions, including the well-known Groth '16. Our codebase is written in the Lean 3 theorem proving language, and uses a variety of techniques to simplify and automate these proofs as much as possible.
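For flavour, a toy Lean 3 example (assuming mathlib is available, as in most Lean 3 developments; the actual formalization is far more involved): once such a file compiles, the kernel has machine-checked the proofs, which is the guarantee the authors bring to SNARK soundness arguments.

```lean
import tactic.ring  -- mathlib tactic library (assumed available)

-- A toy machine-checked lemma: the Lean kernel certifies this proof.
theorem sum_comm (a b : ℕ) : a + b = b + a := nat.add_comm a b

-- Polynomial identities of this flavour, at much larger scale, are what
-- soundness proofs for pairing-based SNARKs manipulate.
theorem expand_square (a b : ℕ) :
  (a + b) * (a + b) = a * a + 2 * (a * b) + b * b :=
by ring
```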
Joshua Davis, Dr. Rashid Minhas, Michelle Casario
ePrint Report
Corporate sexual harassment policies often prioritize liability mitigation over the creation of a corporate culture free of harassment. Victims of sexual harassment are often required to report claims individually to HR. This can create an environment of self-censorship when employees feel that they cannot trust HR to act as an unbiased mediator. This problem is compounded when corporations have a culture that is tolerant of certain types of harassment. Forcing employees to report incidents to HR does nothing to address employees’ fear of bias and uncertainty. This paper presents TandaPay, a decentralized grievance reporting protocol designed to address sexual harassment. TandaPay empowers whistleblowing communities to collectively approve their own harassment claims. TandaPay reduces self-censorship by allowing employees to take ownership of the reporting process, as employees no longer need to rely on HR to act as an intermediary. The protocol employs a novel method of using financial incentives to guard against collusion. This provides corporations with a guarantee that employees can only approve valid claims. Using TandaPay, corporations can give employees greater autonomy with the goal of minimizing self-censorship. This increases the reporting of incidents, enabling workers to change the corporate culture to one of respect and accountability.
Thomas Schneider, Hossein Yalame, Michael Yonli
ePrint Report
This paper presents Griffin, an extension of the mixed-scheme single-key homomorphic encryption framework Pegasus (Lu et al., IEEE S&P’21) to a Multi-Key Homomorphic Encryption (MKHE) scheme with applications to secure computation. MKHE is a generalized notion of Homomorphic Encryption (HE) that allows for operations on ciphertexts encrypted under different keys. However, an efficient approach to evaluate both polynomial and non-polynomial functions on encrypted data in MKHE has not yet been developed, hindering the deployment of HE in real-life applications. Griffin addresses this challenge by introducing a method for transforming between MKHE ciphertexts of different schemes. The practicality of Griffin is demonstrated through benchmarks with multiple applications, including the sorting of sixty-four 45-bit fixed-point numbers with 7 bits of precision in 21 minutes, and evaluating arbitrary functions with a one-time setup communication of 1.4 GB per party and 2.34 MB per ciphertext. Moreover, Griffin computes the maximum of two numbers in 3.2 seconds, a 2× improvement over existing MKHE approaches that rely on a single scheme.
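Schematically, in standard MKHE syntax (not Griffin-specific notation): ciphertexts $c_1, \ldots, c_k$ encrypted under independently generated keys can be combined as $\hat{c} \leftarrow \mathsf{Eval}(f, (c_1, pk_1), \ldots, (c_k, pk_k))$, after which decryption requires all involved secret keys, $\mathsf{Dec}(sk_1, \ldots, sk_k, \hat{c}) = f(m_1, \ldots, m_k)$. Griffin's transformation between MKHE ciphertexts of different schemes is what lets $f$ mix polynomial and non-polynomial operations efficiently.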
Sonja Bruckner, Sebastian Ramacher, Christoph Striecks
ePrint Report
End-to-end authenticity plays a significant role in public networks: without authenticity, an adversary might be able to retrieve even confidential information straight away by impersonating others. Proposed solutions to establish an authenticated channel cover pre-shared-key-based, password-based, and certificate-based techniques. To add confidentiality to an authenticated channel, authenticated key exchange (AKE) protocols usually have one of these three solutions built in. As an amplification, hybrid AKE (HAKE) approaches have recently gained popularity and have been presented in several flavors to incorporate classical, post-quantum, or quantum-key-distribution components. The main benefit is redundancy, i.e., if some of the components fail, the primitive still yields a confidential and authenticated channel. However, current HAKE instantiations either rely on pre-shared keys (which yields inefficient end-to-end authenticity) or support only one or two of the three components above (resulting in reduced redundancy and flexibility).

In this work, we present an extension of a modular HAKE framework due to Dowling, Brandt Hansen, and Paterson (PQCrypto'20) that does not suffer from the above constraints. While their instantiation, dubbed Muckle, requires pre-shared keys (and hence yields inefficient end-to-end authenticity), our extended instantiation called Muckle+ utilizes post-quantum digital signatures. While replacing pre-shared keys with digital signatures is rather straightforward in general, this turned out to be surprisingly non-trivial when applied to HAKE frameworks (resulting in a significant model change with adapted proof techniques).
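Schematically, Muckle-style protocols derive their session key by chaining the component secrets through a PRF-based key schedule, so the result stays secure as long as at least one component remains uncompromised; a simplified sketch (the exact key schedule is in the cited papers) is $k \leftarrow \mathsf{PRF}(k_{\mathrm{qkd}}, \mathsf{PRF}(k_{\mathrm{pq}}, \mathsf{PRF}(k_{\mathrm{c}}, ctx)))$, where $k_{\mathrm{c}}$, $k_{\mathrm{pq}}$, and $k_{\mathrm{qkd}}$ are the classical, post-quantum, and QKD component secrets and $ctx$ binds the handshake context. Muckle+ then replaces the pre-shared authentication key with post-quantum signatures over the transcript.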
Yaniv Ben-Itzhak, Helen Möllering, Benny Pinkas, Thomas Schneider, Ajith Suresh, Oleksandr Tkachenko, Shay Vargaftik, Christian Weinert, Hossein Yalame, Avishay Yanai
ePrint Report
Secure aggregation is commonly used in federated learning (FL) to alleviate privacy concerns related to the central aggregator seeing all parameter updates in the clear. Unfortunately, most existing secure aggregation schemes ignore two critical orthogonal research directions that aim to (i) significantly reduce client-server communication and (ii) mitigate the impact of malicious clients. However, both of these additional properties are essential to facilitate cross-device FL with thousands or even millions of (mobile) participants.

In this paper, we unite both research directions by introducing ScionFL, the first secure aggregation framework for FL that operates efficiently on quantized inputs and simultaneously provides robustness against malicious clients. Our framework leverages (novel) multi-party computation (MPC) techniques and supports multiple linear (1-bit) quantization schemes, including ones that utilize the randomized Hadamard transform and Kashin's representation.
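For illustration, a minimal numpy sketch of linear 1-bit quantization with the randomized Hadamard transform, one of the scheme families named above. The seeded sign-flip rotation and the min/max stochastic rounding are standard ingredients of such quantizers, but the details below are assumptions of this toy version, not ScionFL's exact scheme (and no MPC is involved here).

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform (unnormalized); len(a) a power of two."""
    a = a.astype(float)
    h, n = 1, len(a)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def quantize(x, seed):
    """Client: seeded random sign flip + Hadamard rotation, then 1 bit per
    coordinate via unbiased stochastic rounding between the rotated min/max."""
    n = len(x)
    signs = np.random.default_rng(seed).choice([-1.0, 1.0], n)  # shared seed
    r = fwht(signs * x) / np.sqrt(n)          # randomized Hadamard transform
    lo, hi = r.min(), r.max()
    p = (r - lo) / max(hi - lo, 1e-30)        # per-coordinate probabilities
    bits = np.random.random(n) < p            # the actual 1 bit per coordinate
    return bits, lo, hi

def dequantize(bits, lo, hi, seed):
    """Server: rebuild the rotated estimate and invert the rotation."""
    n = len(bits)
    r_hat = np.where(bits, hi, lo)
    signs = np.random.default_rng(seed).choice([-1.0, 1.0], n)
    return signs * fwht(r_hat) / np.sqrt(n)   # H is its own inverse up to scale

x = np.random.randn(1024)
est = dequantize(*quantize(x, seed=42), seed=42)
print(np.linalg.norm(x - est) / np.linalg.norm(x))  # error of one quantized update
```

The rotation spreads each client's update evenly across coordinates, which keeps the min/max range tight and the quantization error low; in the actual framework the server-side aggregation of such quantized updates happens under MPC.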

Our theoretical results are supported by extensive evaluations. We show that with no overhead for clients and moderate overhead on the server side compared to transferring and processing quantized updates in plaintext, we obtain comparable accuracy for standard FL benchmarks. Additionally, we demonstrate the robustness of our framework against state-of-the-art poisoning attacks.
Marc Fischlin
ePrint Report
We show how to embed a covert key-exchange sub-protocol within a regular TLS 1.3 execution, generating a stealth key in addition to the regular session keys. The idea, which has appeared in the literature before, is to use the exchanged nonces to transport another key value. Our contribution is to give a rigorous model and analysis of the security of such embedded key exchanges, requiring that the stealth key remain secure even if the regular key is under adversarial control. Specifically, for our stealth version of the TLS 1.3 protocol, we show that this extra key is secure in this setting under the common assumptions about the TLS protocol.
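To make the nonce-transport idea concrete, here is a hedged toy sketch in Python (an illustration of the general idea the abstract describes, not the paper's actual construction or the TLS 1.3 key schedule): the two parties share a long-term covert key, the client's 32-byte random nonce is replaced by a masked encoding of fresh covert randomness, and both sides derive the stealth key from that randomness plus the transcript. To an observer, and to TLS itself, the nonce remains uniformly random.

```python
import hmac, hashlib, secrets

COVERT_KEY = secrets.token_bytes(32)   # long-term key shared out of band (assumption)

def pad(k):
    # Deterministic 32-byte mask both endpoints can derive; a real design
    # would bind this to per-connection context rather than a constant label.
    return hmac.new(k, b"stealth-nonce-pad", hashlib.sha256).digest()

def client_nonce(k):
    """Client: embed fresh covert randomness into the 32-byte ClientHello nonce."""
    seed = secrets.token_bytes(32)
    nonce = bytes(a ^ b for a, b in zip(seed, pad(k)))   # uniform to any observer
    return nonce, seed

def stealth_key(k, seed, transcript):
    """Both sides: derive the stealth key from the covert seed and transcript."""
    return hmac.new(k, b"stealth-key" + seed + transcript, hashlib.sha256).digest()

# --- toy run ---
nonce, seed = client_nonce(COVERT_KEY)                             # client side
recovered = bytes(a ^ b for a, b in zip(nonce, pad(COVERT_KEY)))   # server side
transcript = b"...hash of the TLS 1.3 handshake messages..."
assert recovered == seed
assert stealth_key(COVERT_KEY, seed, transcript) == \
       stealth_key(COVERT_KEY, recovered, transcript)
```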

As an application of stealth key exchange we discuss sanitizable channel protocols, where a designated party can partly access and modify payload data in a channel protocol. This may be, for instance, an intrusion detection system monitoring the incoming traffic for malicious content and putting suspicious parts in quarantine. The noteworthy feature, inherited from the stealth key exchange part, is that the sender and receiver can use the extra key to still communicate securely and covertly within the sanitizable channel, e.g., by pre-encrypting confidential parts and making only dedicated parts available to the sanitizer. We discuss how such sanitizable channels can be implemented with authenticated encryption schemes like GCM or ChaChaPoly. In combination with our stealth key exchange protocol, we thus derive a full-fledged sanitizable connection protocol, including key establishment, which perfectly complies with regular TLS 1.3 traffic on the network level. We also assess the potential effectiveness of the approach for the intrusion detection system Snort.
Geoffroy Couteau, Clément Ducros
ePrint Report
Pseudorandom correlation functions (PCF), introduced in the work of Boyle et al. (FOCS 2020), allow two parties to locally generate, from short correlated keys, a near-unbounded amount of pseudorandom samples from a target correlation. PCFs are an extremely appealing primitive in secure computation, where they allow confining the preprocessing phases of all future computations two parties could want to execute to a single short interaction with low communication and computation, followed solely by offline computations. Beyond introducing the notion, Boyle et al. gave a candidate construction, using a new variable-density variant of the learning parity with noise (LPN) assumption. Then, to provide support for this new assumption, the authors showed that it provably resists a large class of linear attacks, which captures in particular all known attacks on LPN.

In this work, we revisit the analysis of the VDLPN assumption. We make two key contributions:

– First, we observe that the analysis of Boyle et al. is purely asymptotic: it does not lead to any concrete and efficient PCF instantiation within the bounds that offer security guarantees. To improve this state of affairs, we combine a new variant of the VDLPN assumption with an entirely new, much tighter security analysis, which we further tighten using extensive computer simulations to optimize parameters. This way, we manage to obtain for the first time a set of provably usable parameters (under a simple combinatorial conjecture which is easy to verify experimentally), leading to a concretely efficient PCF resisting all linear tests.

– Second, we identify a flaw in the security analysis of Boyle et al., which invalidates their proof that VDLPN resists linear attacks. Using several new non-trivial arguments, we repair the proof and fully demonstrate that VDLPN resists linear attacks; our new analysis is more involved than the original (flawed) analysis.

Our parameter set leads to PCFs with keys of around 3 MB allowing ∼500 evaluations per second on one core of a standard laptop for 110 bits of security; these numbers can be improved to 350 kB keys and ∼3950 evaluations/s using a more aggressive all-prefix variant. All numbers are quite tight: only within a factor 3 of the best bounds one could heuristically hope for.
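For context, the standard LPN assumption (VDLPN is a variable-density variant whose precise formulation is in the cited papers) states that $(A, As \oplus e)$ is computationally indistinguishable from $(A, u)$, where $A \leftarrow \mathbb{F}_2^{m \times n}$ and $u \leftarrow \mathbb{F}_2^m$ are uniform, $s \leftarrow \mathbb{F}_2^n$ is the secret, and $e \in \mathbb{F}_2^m$ has i.i.d. Bernoulli coordinates of some rate $\tau$. In the linear test framework discussed above, an attack picks a nonzero vector $v$ (possibly depending on $A$) and measures the bias of $\langle v, As \oplus e \rangle$; resisting all linear tests means this bias is negligible for every such $v$.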
Michael Brand, Hamish Ivey-Law, Tania Churchill
ePrint Report
Information sharing between financial institutions can uncover complex financial crimes such as money laundering and fraud. However, such information sharing is often not possible due to privacy and commercial considerations, and criminals can exploit this intelligence gap in order to hide their activities by distributing them between institutions, a form of the practice known as "layering".

We describe an algorithm that allows financial intelligence analysts to trace the flow of funds in suspicious transactions across financial institutions, without impinging on the privacy of uninvolved individuals and without breaching the tipping-off offence provisions between financial institutions. The algorithm is lightweight, allowing it to work even at nation scale and to be used as a building block in the construction of more sophisticated algorithms for the detection of complex crime typologies within the financial data. We demonstrate the algorithm's scalability with timing measurements over a full-sized deployment.
Wei Ren
ePrint Report
The main results in the paper are as follows: (1) We randomly select extremely large integers and verify whether they return to 1. The largest integer verified has a length of 6000000 bits, overwhelmingly larger than those verified previously (e.g., 128 bits), and its Collatz computation sequence consists of 28911397 symbols 'I' and 'O', computed using only an ordinary laptop. (2) We propose a dedicated algorithm that computes 3x+1 for extremely large integers at the million-bit scale by replacing multiplication with bit addition, and further with logical condition judgements only. (3) We discover that the ratio of the count of 'O' to the count of 'I' in a computation sequence goes to 1 asymptotically as the starting integer grows. (4) We further discover that once the length of the starting integer is sufficiently large, e.g., 500000 bits, the corresponding computation sequence (with 'I' replaced by 1 and 'O' by 0) presents sufficient randomness as a bit sequence. We first obtain the computation sequences of randomly selected integers of L bits, where L is 500000, 1000000, 2000000, 3000000, 4000000, 5000000, or 6000000, using our proposed algorithm for extremely large integers. We evaluate the randomness of all computation sequences by both NIST SP 800-22 and GM/T 0005-2021. All sequences pass the tests and, notably, the larger the integer, the better the randomness. (5) We thus propose a random bit sequence generator that uses only logical judgements (e.g., logic gates) and fewer than 100 lines of ANSI C. The throughput of the generator is about 625.693 bits/s on an ordinary laptop with an Intel Core i7 CPU (1.8 GHz).
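For illustration, a naive Python sketch of the verification in item (1); Python's built-in big integers suffice at small scale, whereas the paper's dedicated algorithm replaces multiplication with bit additions and logical condition judgements to reach the million-bit scale. The symbol convention below, 'I' for a combined $(3x+1)/2$ step and 'O' for a halving step, is an assumption, though it is consistent with the counts reported above.

```python
import random

def collatz_sequence(x):
    """Return the Collatz symbol sequence for x as a string:
    'I' for an odd step x -> (3x + 1) // 2, 'O' for an even step x -> x // 2.
    (Assumed convention; it matches the sequence lengths in the abstract.)"""
    out = []
    while x > 1:
        if x & 1:                       # odd: combined 3x+1-and-halve step
            x = (3 * x + 1) >> 1
            out.append('I')
        else:                           # even: halve
            x >>= 1
            out.append('O')
    return ''.join(out)

# Verify a random 4096-bit integer returns to 1 and inspect the O/I ratio.
x = random.getrandbits(4096) | (1 << 4095)   # force exactly 4096 bits
seq = collatz_sequence(x)
print(len(seq), seq.count('O') / seq.count('I'))
```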

10 May 2023

Robert Bosch GmbH, Renningen, Germany
Job Posting
In a few years from now, a myriad of sensors and services will be deeply embedded in our environment, creating an adaptive and smart fabric of our lives and businesses. They will create a data footprint of terabytes each year, a disruptive development called the Big Bang Data. This data will contribute to our ever-growing personal digital footprint. However, this footprint will not be under our control; it will be owned and used by multi-national companies to drive their businesses in a new era called the Data Economy. This development erodes the right to privacy and will result in an increasing mismatch between people's desire to be in control of their data in sensitive areas and a reality where this data is scattered, unmanageable, and no longer under the user's control. At some point, this will impact people's willingness to disclose their data.

To tackle these challenges, Bosch Research develops solutions to store and process data securely. In particular, privacy-preserving computation techniques, such as homomorphic encryption and secure multi-party computation, are relevant for protecting sensitive data and supporting secure computation on it. However, these methods are costly; hence, they have to be improved, and new methods that protect data confidentiality while supporting efficient computation need to be developed.

Thus, we are looking for a highly motivated and innovative team player who is eager to learn new things and is passionate about applied security/privacy research.

Candidates must hold a Ph.D. degree in cryptography, IT security, or a related field. Preference will be given to candidates with a strong publication record. Review of applications starts immediately and continues until the position is filled.

Closing date for applications:

Contact: https://www.bosch.de/en/career/job/REF195129W-research-engineer-for-security-and-privacy-f-m-div/


08 May 2023

Singapore, Singapore, 1 July - 5 July 2024
Event Calendar
Event date: 1 July to 5 July 2024
Submission deadline: 21 August 2023
Notification: 14 November 2023
Kwok-Yan Lam, Xianhui Lu, Linru Zhang, Xiangning Wang, Huaxiong Wang, Si Qi Goh
ePrint Report
AI-as-a-Service has emerged as an important trend for supporting the growth of the digital economy. Digital service providers make use of their vast amounts of user data to train AI models (for tasks such as image recognition, financial modelling, and pandemic modelling) and offer them as a service on the cloud. While there are convincing advantages to using such third-party models, the fact that users need to upload their data to the cloud is bound to raise serious privacy concerns, especially in the face of increasingly stringent privacy regulations and legislation. To promote the adoption of AI-as-a-Service while addressing the privacy issues, we propose a practical approach for constructing privacy-enhanced neural networks by designing an efficient implementation of fully homomorphic encryption (FHE). With this approach, an existing neural network can be converted to process FHE-encrypted data and produce encrypted output accessible only to the model users, and, more importantly, within an operationally acceptable time (e.g., within 1 second for facial recognition in typical border control systems). Experimental results show that in many practical tasks, such as facial recognition and text classification, we obtain state-of-the-art inference accuracy in less than one second on a 16-core CPU.
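As background, the standard FHE functionality this relies on (scheme-agnostic, not the paper's specific construction): the model user keeps $sk$ and uploads $\mathsf{Enc}(pk, m)$; the cloud computes $\hat{c} \leftarrow \mathsf{Eval}(pk, f, \mathsf{Enc}(pk, m))$ for the network $f$ without ever seeing $m$; and correctness guarantees $\mathsf{Dec}(sk, \hat{c}) = f(m)$, so only the model user can read the inference result.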