International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

11 May 2023

Marc Fischlin
ePrint Report
We show how to embed a covert key-exchange sub-protocol within a regular TLS 1.3 execution, generating a stealth key in addition to the regular session keys. The idea, which has appeared in the literature before, is to use the exchanged nonces to transport another key value. Our contribution is to give a rigorous model and analysis of the security of such embedded key exchanges, requiring that the stealth key remain secure even if the regular key is under adversarial control. Specifically, for our stealth version of the TLS 1.3 protocol, we show that this extra key is secure in this setting under the common assumptions about the TLS protocol.

As an application of stealth key exchange we discuss sanitizable channel protocols, where a designated party can partly access and modify payload data in a channel protocol. This may be, for instance, an intrusion detection system monitoring the incoming traffic for malicious content and putting suspicious parts in quarantine. The noteworthy feature, inherited from the stealth key exchange part, is that the sender and receiver can use the extra key to still communicate securely and covertly within the sanitizable channel, e.g., by pre-encrypting confidential parts and making only dedicated parts available to the sanitizer. We discuss how such sanitizable channels can be implemented with authenticated encryption schemes like GCM or ChaChaPoly. In combination with our stealth key exchange protocol, we thus derive a full-fledged sanitizable connection protocol, including key establishment, which perfectly complies with regular TLS 1.3 traffic on the network level. We also assess the potential effectiveness of the approach for the intrusion detection system Snort.
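To make the pre-encryption idea concrete, here is a minimal sketch of a sanitizable record, assuming the stealth key has already been established and using AES-GCM from the `cryptography` Python package for both layers; the key names and record framing are illustrative assumptions, not the paper's construction.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical keys: channel_key is the regular session key (known to the
# sanitizer); stealth_key is the covert key shared only by sender and receiver.
channel_key = AESGCM.generate_key(bit_length=128)
stealth_key = AESGCM.generate_key(bit_length=128)

# Sender: pre-encrypt the confidential part under the stealth key ...
inner_nonce = os.urandom(12)
inner_ct = AESGCM(stealth_key).encrypt(inner_nonce, b"confidential payload", None)

# ... and place it next to the part the sanitizer is allowed to inspect.
record = b"sanitizable-part|" + inner_nonce + inner_ct

# The whole record is then protected under the regular channel key.
outer_nonce = os.urandom(12)
wire = AESGCM(channel_key).encrypt(outer_nonce, record, b"record-header")

# Sanitizer: holds only channel_key, so it can open the outer layer and inspect
# (or quarantine) the sanitizable part, while the inner ciphertext stays opaque.
opened = AESGCM(channel_key).decrypt(outer_nonce, wire, b"record-header")
public_part, inner = opened.split(b"|", 1)

# Receiver: additionally holds stealth_key and recovers the confidential part.
confidential = AESGCM(stealth_key).decrypt(inner[:12], inner[12:], None)
assert confidential == b"confidential payload"
```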
Geoffroy Couteau, Clément Ducros
ePrint Report
Pseudorandom correlation functions (PCFs), introduced in the work of Boyle et al. (FOCS 2020), allow two parties to locally generate, from short correlated keys, a near-unbounded amount of pseudorandom samples from a target correlation. PCFs are an extremely appealing primitive in secure computation, where they allow all preprocessing phases of all future computations two parties could want to execute to be confined to a single short interaction with low communication and computation, followed solely by offline computations. Beyond introducing the notion, Boyle et al. gave a candidate construction, using a new variable-density variant of the learning parity with noise (LPN) assumption, VDLPN. To provide support for this new assumption, the authors showed that it provably resists a large class of linear attacks, which captures in particular all known attacks on LPN.

In this work, we revisit the analysis of the VDLPN assumption. We make two key contributions:

- First, we observe that the analysis of Boyle et al. is purely asymptotic: it does not lead to any concrete and efficient PCF instantiation within the bounds that offer security guarantees. To improve this state of affairs, we combine a new variant of the VDLPN assumption with an entirely new, much tighter security analysis, which we further tighten using extensive computer simulations to optimize parameters. This way, we obtain for the first time a set of provably usable parameters (under a simple combinatorial conjecture which is easy to verify experimentally), leading to a concretely efficient PCF resisting all linear tests.
- Second, we identify a flaw in the security analysis of Boyle et al., which invalidates their proof that VDLPN resists linear attacks. Using several new non-trivial arguments, we repair the proof and fully demonstrate that VDLPN resists linear attacks; our new analysis is more involved than the original (flawed) analysis.

Our parameter set leads to PCFs with keys around 3MB allowing ∼ 500 evaluations per second on one core of a standard laptop for 110 bits of security; these numbers can be improved to 350kB keys and ∼ 3950 evaluations/s using a more aggressive all-prefix variant. All numbers are quite tight: only within a factor 3 of the best bounds one could heuristically hope for.
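As background, the sketch below generates plain LPN samples and includes a secret-aided sanity check of the noise rate; this is standard LPN with illustrative parameters, not the variable-density (VDLPN) variant analysed in the paper, and all names are assumptions made for the example.

```python
import random

def lpn_samples(secret, num_samples, tau=0.125):
    """Generate standard LPN samples (a, <a, s> XOR e) over F_2 with noise rate tau."""
    n = len(secret)
    samples = []
    for _ in range(num_samples):
        a = [random.getrandbits(1) for _ in range(n)]
        noiseless = sum(ai & si for ai, si in zip(a, secret)) & 1
        e = 1 if random.random() < tau else 0   # Bernoulli(tau) noise bit
        samples.append((a, noiseless ^ e))
    return samples

# A linear test applies a fixed F_2-linear function to the outputs and looks for
# a bias; for genuine LPN samples, each noisy bit has bias 1 - 2*tau, and biases
# shrink geometrically when bits are XORed together (piling-up lemma).
secret = [random.getrandbits(1) for _ in range(128)]
samples = lpn_samples(secret, 10_000)

# Sanity check (using the secret, which an attacker does not have): the
# empirical noise rate should be close to tau.
errors = sum((sum(ai & si for ai, si in zip(a, secret)) & 1) ^ b
             for a, b in samples)
print("empirical noise rate:", errors / len(samples))
```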
Michael Brand, Hamish Ivey-Law, Tania Churchill
ePrint Report
Information sharing between financial institutions can uncover complex financial crimes such as money laundering and fraud. However, such information sharing is often not possible due to privacy and commercial considerations, and criminals can exploit this intelligence gap in order to hide their activities by distributing them between institutions, a form of the practice known as "layering".

We describe an algorithm that allows financial intelligence analysts to trace the flow of funds in suspicious transactions across financial institutions, without impinging on the privacy of uninvolved individuals and without breaching the tipping-off offence provisions between financial institutions. The algorithm is lightweight, allowing it to work even at nation scale and to serve as a building block in the construction of more sophisticated algorithms for detecting complex crime typologies within financial data. We demonstrate the algorithm's scalability with timing measurements taken on a full-sized deployment.
Wei Ren
ePrint Report
The main results in the paper are as follows: (1) We randomly select extremely large integers and verify whether they return to 1. The largest integer verified has a length of 6,000,000 bits, overwhelmingly larger than previously verified values (e.g., 128 bits), and its Collatz computation sequence consists of 28,911,397 `I' and `O' symbols, computed on an ordinary laptop. (2) We propose a dedicated algorithm that can compute 3x+1 for extremely large integers at the million-bit scale by replacing multiplication with bit addition, and further by using only logical condition judgements. (3) We discover that the ratio of the count of `O' to the count of `I' in the computation sequence tends to 1 asymptotically as the starting integer grows. (4) We further discover that once the length of the starting integer is sufficiently large, e.g., 500,000 bits, the corresponding computation sequence (in which `I' is replaced with 1 and `O' with 0) exhibits sufficient randomness as a bit sequence. We first obtain the computation sequences of randomly selected integers of L-bit length, where L is 500,000, 1,000,000, 2,000,000, 3,000,000, 4,000,000, 5,000,000, or 6,000,000, using our proposed algorithm for extremely large integers. We evaluate the randomness of all computation sequences with both NIST SP 800-22 and GM/T 0005-2021. All sequences pass the tests, and the longer the sequence, the better the results. (5) We thus propose a random bit sequence generator that uses only logical judgements (e.g., logic gates) and fewer than 100 lines of ANSI C. The throughput of the generator is about 625.693 bits/s on an ordinary laptop with an Intel Core i7 CPU (1.8 GHz).
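As an illustration of the shift-and-add idea, here is a minimal Python sketch (not the paper's ANSI C implementation) that computes the Collatz step sequence of a large integer without any multiplication, assuming the common convention that `I' records an odd step and `O' an even step.

```python
import random

def collatz_sequence(x: int) -> str:
    """Return the I/O step sequence of x under the 3x+1 map.
    Convention assumed here: 'I' marks an odd step (x -> 3x+1), 'O' an even
    step (x -> x/2)."""
    steps = []
    while x != 1:
        if x & 1:                  # odd step: 3x+1 computed as (x << 1) + x + 1,
            x = (x << 1) + x + 1   # i.e. one shift and two additions, no multiply
            steps.append("I")
        else:
            x >>= 1                # even step: halving is a single right shift
            steps.append("O")
    return "".join(steps)

if __name__ == "__main__":
    x = random.getrandbits(4096) | 1            # a 4096-bit starting integer
    seq = collatz_sequence(x)
    print(len(seq), seq.count("O") / seq.count("I"))   # ratio O/I tends toward 1
```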

10 May 2023

Robert Bosch GmbH, Renningen, Germany
Job Posting
A few years from now, a myriad of sensors and services will be deeply embedded in our environment, creating an adaptive and smart fabric for our lives and businesses. They will create terabytes of data each year, a disruptive development called the Big Bang Data. This data will contribute to our ever-growing personal digital footprint. However, this footprint will not be under our control; it will be owned and used by multinational companies to drive their businesses in a new era called the Data Economy. This development erodes the right to privacy and will result in an increasing mismatch between people's desire to be in control of their data in sensitive areas and a reality where this data is scattered, unmanageable, and no longer under the user's control. At some point, this will impact people's willingness to disclose their data.

To tackle these challenges, Bosch Research develops solutions to store and process data securely. In particular, privacy-preserving computation techniques are relevant for protecting sensitive data and supporting secure computation on it. These include homomorphic encryption and secure multi-party computation, but such methods are costly. Hence, they have to be improved, and new methods that protect data confidentiality and support efficient computation need to be developed.

Thus, we are looking for a highly motivated and innovative team player who is eager to learn new things and is passionate about applied security and privacy research.

Candidates must hold a Ph.D. degree in cryptography, IT security, or a related field. Preference will be given to candidates with a strong publication record. The review of applications starts immediately and continues until the position is filled.

Closing date for applications:

Contact: https://www.bosch.de/en/career/job/REF195129W-research-engineer-for-security-and-privacy-f-m-div/


08 May 2023

Singapore, Singapore, 1 July - 5 July 2024
Event Calendar
Event date: 1 July to 5 July 2024
Submission deadline: 21 August 2023
Notification: 14 November 2023
Kwok-Yan Lam, Xianhui Lu, Linru Zhang, Xiangning Wang, Huaxiong Wang, Si Qi Goh
ePrint Report
AI-as-a-Service has emerged as an important trend for supporting the growth of the digital economy. Digital service providers make use of their vast amounts of user data to train AI models (such as image recognition, financial modelling, and pandemic modelling) and offer them as a service on the cloud. While there are convincing advantages to using such third-party models, the fact that users need to upload their data to the cloud is bound to raise serious privacy concerns, especially in the face of increasingly stringent privacy regulations and legislation. To promote the adoption of AI-as-a-Service while addressing the privacy issues, we propose a practical approach for constructing privacy-enhanced neural networks by designing an efficient implementation of fully homomorphic encryption (FHE). With this approach, an existing neural network can be converted to process FHE-encrypted data and produce encrypted output accessible only to the model users, and, more importantly, to do so within an operationally acceptable time (e.g., within 1 second for facial recognition in typical border control systems). Experimental results show that for many practical tasks, such as facial recognition and text classification, we obtain state-of-the-art inference accuracy in less than one second on a 16-core CPU.
Zhengjun Cao, Lihua Liu
ePrint Report
We remark that the key agreement scheme [IEEE Internet Things J., 8(5), 2021, 3801--3811] is flawed. (1) It is insecure against internal attacks, because any unauthorized sensing device (that has not been revoked) can retrieve the final session key. (2) It could also be insecure against external attacks.
Shihe Ma, Tairong Huang, Anyu Wang, Xiaoyun Wang
ePrint Report
The functional bootstrap in FHEW/TFHE allows for fast table lookups on ciphertexts and is a powerful tool for privacy-preserving computations. However, the functional bootstrap suffers from two limitations: the negacyclic constraint of the lookup table (LUT) and the limited ability to evaluate large-precision LUTs. To overcome the first limitation, several full-domain functional bootstraps (FDFB) have been developed, enabling the evaluation of arbitrary LUTs. Meanwhile, algorithms based on homomorphic digit decomposition have been proposed to address the second limitation. Although these algorithms provide effective solutions, they are yet to be optimized. This paper presents four new FDFB algorithms and two new homomorphic decomposition algorithms that improve the state-of-the-art. Our FDFB algorithms reduce the output noise, thus allowing for more efficient and compact parameter selection. Across all parameter settings, our algorithms reduce the runtime by up to $39.2\%$. Furthermore, our FDFB algorithms introduce an error that can be as small as 1/15 of that introduced by previous algorithms when evaluating continuous functions. Our homomorphic decomposition algorithms also run at 2.0x and 1.5x the speed of prior algorithms. We have implemented and benchmarked all previous FDFB and homomorphic decomposition algorithms and our methods in OpenFHE.
Jakob Burkhardt, Ivan Damgård, Tore Frederiksen, Satrajit Ghosh, Claudio Orlandi
ePrint Report
Secure distributed generation of RSA moduli (e.g., generating $N=pq$ where none of the parties learns anything about $p$ or $q$) is an important cryptographic task that is needed both in threshold implementations of RSA-based cryptosystems and in other advanced cryptographic protocols that assume that all parties have access to a trusted RSA modulus. In this paper, we provide a novel protocol for secure distributed RSA key generation based on the Miller-Rabin test. Compared with the more commonly used Boneh-Franklin test (which requires many iterations), the Miller-Rabin test has the advantage of providing negligible error even after a single iteration of the test for large enough moduli (e.g., $4096$ bits).

From a technical point of view, our main contribution is a novel divisibility test which allows the primality test to be performed efficiently while keeping $p$ and $q$ secret.

Our semi-honest RSA generation protocol uses any underlying secure multiplication protocol in a black-box way, and it can therefore be instantiated in either the honest or the dishonest majority setting, depending on the chosen multiplication protocol. Our semi-honest protocol can be upgraded to protect against active adversaries at low cost using existing compilers. Finally, we provide an experimental evaluation showing that, in the honest majority case, our protocol is much faster than Boneh-Franklin.
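For reference, the sketch below is a plain, single-party Miller-Rabin test in Python; the paper's point is to run such a test while $p$ and $q$ remain secret-shared, which this clear-text sketch does not attempt.

```python
import random

def miller_rabin(n: int, rounds: int = 40) -> bool:
    """Clear-text Miller-Rabin probable-prime test (error probability <= 4^-rounds)."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    # Write n - 1 = 2^r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # a witnesses that n is composite
    return True

print(miller_rabin(2**127 - 1))   # True: a Mersenne prime
print(miller_rabin(2**128 + 1))   # False: composite
```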
Ning Luo, Chenkai Weng, Jaspal Singh, Gefei Tan, Ruzica Piskac, Mariana Raykova
ePrint Report
Motivated by the privacy requirements in network intrusion detection and DNS policy checking, we have developed a suite of protocols and algorithms for regular expression matching with enhanced privacy:

- A new regular expression matching algorithm that is oblivious to the input strings, whose complexity is only $O(mn)$, where $m$ and $n$ are the lengths of the string and the regular expression, respectively. It is achieved by exploiting the structure of the Thompson nondeterministic automaton; a plain sketch of this state-set simulation follows the list.
- A zero-knowledge proof of regular expression pattern matching, in which a prover generates a proof to demonstrate that a public regular expression matches her input string without revealing the string itself.
- Two secure-regex protocols that ensure the privacy of both the string and the regular expression. The first protocol is based on an oblivious stack and reduces the complexity of the state of the art from $O(mn^2)$ to $O(mn\log n)$. The second protocol relies on oblivious transfer and performs better empirically when the size of the regular expression is smaller than $2^{12}$.
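The minimal, non-oblivious Python sketch below shows the Thompson-style state-set simulation from which the $O(mn)$ bound comes; the NFA is hand-built for the toy pattern (a|b)*abb, and none of the paper's oblivious or zero-knowledge machinery is reflected.

```python
# Hand-built Thompson-style NFA for the toy pattern (a|b)*abb.
# Epsilon transitions are already collapsed to keep the sketch short.
TRANSITIONS = {
    0: {"a": {0, 1}, "b": {0}},
    1: {"b": {2}},
    2: {"b": {3}},
    3: {},
}
ACCEPTING = {3}

def nfa_match(text: str) -> bool:
    """State-set simulation: each input character updates the set of active
    states, giving O(m * n) time for m characters and n NFA states."""
    active = {0}
    for ch in text:
        nxt = set()
        for state in active:
            nxt |= TRANSITIONS[state].get(ch, set())
        active = nxt
    return bool(active & ACCEPTING)

print(nfa_match("ababb"))   # True
print(nfa_match("abab"))    # False
```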

We also evaluated our protocols in the context of encrypted DNS policy checking and intrusion detection, achieving 4.5x improvements over the state of the art. These results also indicate the practicality of our approach in real-world applications.
Sylvain Chatel, Christian Mouchet, Ali Utkan Sahin, Apostolos Pyrgelis, Carmela Troncoso, Jean-Pierre Hubaux
ePrint Report
Multiparty fully homomorphic encryption (MFHE) schemes enable multiple parties to efficiently compute functions on their sensitive data while retaining confidentiality. However, existing MFHE schemes guarantee data confidentiality and the correctness of the computation result only against honest-but-curious adversaries. In this work, we provide the first practical construction that enables the verification of MFHE operations in zero-knowledge, protecting MFHE from malicious adversaries. Our solution relies on a combination of lattice-based commitment schemes and proof systems which we adapt to support both modern FHE schemes and their implementation optimizations. We implement our construction in PELTA. Our experimental evaluation shows that PELTA is one to two orders of magnitude faster than existing techniques in the literature.
Charles Gouert, Vinu Joseph, Steven Dalton, Cedric Augonnet, Michael Garland, Nektarios Georgios Tsoutsos
ePrint Report
Fully Homomorphic Encryption (FHE) is a cryptographic method that guarantees the privacy and security of user data during computation. FHE algorithms can perform unlimited arithmetic computations directly on encrypted data without decrypting it. Thus, even when processed by untrusted systems, confidential data is never exposed. In this work, we develop new techniques for accelerated encrypted execution and demonstrate the significant performance advantages of our approach. Our current focus is the Fully Homomorphic Encryption over the Torus (CGGI) scheme, which is a current state-of-the-art method for evaluating arbitrary functions in the encrypted domain. CGGI represents a computation as a graph of homomorphic logic gates, and each individual bit of the plaintext is transformed into a polynomial in the encrypted domain. Arithmetic on such data becomes very expensive: operations on bits become operations on entire polynomials. Therefore, evaluating even relatively simple nonlinear functions, such as a sigmoid, can take thousands of seconds on a single CPU thread. Using our novel framework for end-to-end accelerated encrypted execution, called ArctyrEX, developers with no knowledge of complex FHE libraries can simply describe their computation as a C program that is evaluated over 40x faster on an NVIDIA DGX A100 and 6x faster with a single A100, relative to a 256-threaded CPU baseline.
Luciano Maino, Chloe Martindale, Lorenz Panny, Giacomo Pope, Benjamin Wesolowski
ePrint Report
We present an attack on SIDH utilising isogenies between polarized products of two supersingular elliptic curves. In the case of an arbitrary starting curve, our attack (discovered independently of [CD22]) has subexponential complexity, thus significantly reducing the security of SIDH and SIKE. When the endomorphism ring of the starting curve is known, our attack (here derived from [CD22]) has polynomial-time complexity assuming the generalised Riemann hypothesis. Our attack applies to any isogeny-based cryptosystem that publishes the images of points under the secret isogeny, for example SÉTA and B-SIDH. It does not apply to CSIDH, CSI-FiSh, or SQISign.
Lena Heimberger, Fredrik Meisingseth, Christian Rechberger
ePrint Report
Oblivious pseudorandom functions (OPRFs) are an elementary building block in cryptographic and privacy-preserving applications. However, while there are numerous pre-quantum secure OPRF constructions, few options exist in a post-quantum secure setting. Isogeny group actions and their associated low bandwidth make a promising candidate for constructing a quantum-resistant OPRF. While there have been relevant attacks on isogeny-related hardness assumptions, the commutative CSIDH is unaffected. In this work, we propose OPUS, a novel OPRF with small communication complexity that requires only CSIDH as the security assumption. We also revisit the Naor-Reingold OPRF from CSIDH and show how to efficiently compute offline evaluations. Additionally, we analyze a previous proposal of a CSIDH-based instantiation of the Naor-Reingold construction. We report several issues with the straightforward instantiation of the protocol and propose mitigations to address those shortcomings. Our mitigations require additional hardness assumptions and more expensive computations, but they result in a competitive protocol with low communication complexity and few rounds. Our comparison against the state of the art shows that OPUS and the repaired, generic construction are competitive with other proposals in terms of speed and communication size. More concretely, OPUS achieves almost two orders of magnitude less communication overhead than the next-best lattice-based OPRF, at the cost of higher latency and higher computational cost.
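As background on the structure being instantiated, the sketch below recalls the classical group-based Naor-Reingold PRF in Python with toy, insecure parameters chosen for illustration; the CSIDH-based (group-action) variant and the oblivious-evaluation machinery discussed in the paper are not shown.

```python
import secrets

# Toy, insecure parameters (assumed for illustration only).
P = 2**127 - 1      # prime modulus
G = 5               # fixed base element of Z_P^*

def nr_keygen(input_bits: int):
    """The key is a vector (a_0, a_1, ..., a_n) of random exponents."""
    return [secrets.randbelow(P - 2) + 1 for _ in range(input_bits + 1)]

def nr_prf(key, x: int, input_bits: int) -> int:
    """Classical Naor-Reingold PRF: f_key(x) = G^(a_0 * prod_{i: x_i = 1} a_i) mod P.
    The exponent is reduced mod P - 1, which is valid by Fermat's little theorem."""
    exponent = key[0]
    for i in range(input_bits):
        if (x >> i) & 1:
            exponent = (exponent * key[i + 1]) % (P - 1)
    return pow(G, exponent, P)

key = nr_keygen(16)
print(nr_prf(key, 0b1011_0010_1100_0101, 16))
```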
Shahram Rasoolzadeh
ePrint Report
We apply Siegenthaler's construction, along with several techniques, to classify all $(n-4)$-resilient Boolean functions with $n$ variables, for all values of $n \geq 4$, up to extended variable-permutation equivalence. We show that, up to this equivalence, there are only 761 functions for any $n$ larger than or equal to 10, and for smaller values of $n$, i.e., for $n$ increasing from 4 to 9, there are 58, 256, 578, 720, 754, and 760 functions, respectively. Furthermore, we classify all 1-resilient 6-variable Boolean functions and show that there are 1035596784 such functions up to extended variable-permutation equivalence.
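For concreteness, m-resilience can be checked directly from a truth table via the Walsh-Hadamard transform: a function is m-resilient exactly when its Walsh coefficient vanishes on every mask of Hamming weight at most m. The brute-force Python checker below illustrates this standard criterion only; it is unrelated to the paper's Siegenthaler-based classification.

```python
def walsh_coefficient(truth_table, n, a):
    """W_f(a) = sum over x of (-1)^(f(x) XOR <a, x>)."""
    total = 0
    for x in range(2 ** n):
        dot = bin(a & x).count("1") & 1
        total += 1 if (truth_table[x] ^ dot) == 0 else -1
    return total

def is_m_resilient(truth_table, n, m):
    """f is m-resilient iff W_f(a) = 0 for every mask a with Hamming weight <= m
    (the weight-0 mask corresponds to balancedness)."""
    return all(walsh_coefficient(truth_table, n, a) == 0
               for a in range(2 ** n) if bin(a).count("1") <= m)

# Example: the parity function x1 XOR x2 XOR x3 XOR x4 is 3-resilient.
n = 4
parity = [bin(x).count("1") & 1 for x in range(2 ** n)]
print(is_m_resilient(parity, n, 3))   # True
print(is_m_resilient(parity, n, 4))   # False: W_f is nonzero on the all-ones mask
```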
Jean Liénardy
ePrint Report
In this note, we identify a minor flaw in the design of the XOCB mode, presented at Eurocrypt '23. This vulnerability enables trivial tag forgeries and arises from the padding applied to messages. We examine the security proof and pinpoint the presence of the flaw within it. Furthermore, we propose a simple fix for this issue, drawing upon the features of OCB3, and discuss the implications of this modification on the proof of security.
Gustavo Banegas, Florian Caullery
ePrint Report
Hash-based signatures are a type of digital signature algorithm positioned as one of the most solid quantum-resistant constructions. As an example, SPHINCS+ has been selected as a standard during the NIST Post-Quantum Cryptography competition. However, hash-based signatures suffer from two main drawbacks: signature size and a slow signing process. In this work, we give a solution to the latter for use on a mobile device. We take advantage of the fact that hash-based signatures are highly parallelizable. More precisely, we provide an implementation of SPHINCS+ on the Snapdragon 865 Mobile Platform that takes advantage of its eight CPU cores and their vector extensions. Our implementation shows that it is possible to achieve a speed-up of 15 times compared to a purely sequential and non-vectorized implementation. Furthermore, we evaluate the performance impact of side-channel protection using vector extensions in the SHAKE-based version of SPHINCS+.
Schwinn Saereesitthipitak, Dionysis Zindros
ePrint Report
Witness Encryption is a holy grail of cryptography that remains elusive. It asks that a secret be revealed only when a particular computational problem is solved. Modern smart contracts and blockchains make assumptions of “honest majority”, which allow for a social implementation of Witness Encryption. The core idea is to make use of a partially trusted committee to carry out the responsibilities mandated by these functionalities, such as keeping the secret private and then releasing it publicly after a solution to the computational puzzle is presented. We implement Witness Encryption (with public witness security) in the form of an open-source composable smart contract that can be utilized as an oracle by others within the broader DeFi ecosystem. We devise a cryptoeconomic scheme to incentivize honest participation and analyze its security in the honest majority and rational majority settings. We conclude by measuring and optimizing gas costs and illustrating the practicality of our scheme.
Sreyosi Bhattacharyya, Kaushik Nath, Palash Sarkar
ePrint Report
This paper makes a comprehensive study of two important strategies for polynomial hashing over a prime order field $\mathbb{F}_p$, namely usual polynomial-based hashing and hashing based on Bernstein-Rabin-Winograd (BRW) polynomials, and the various ways to combine them. Several hash functions are proposed and upper bounds on their differential probabilities are derived. Concrete instantiations are provided for the primes $p=2^{127}-1$ and $p=2^{130}-5$. A major contribution of the paper is an extensive 64-bit implementation of all the proposed hash functions in assembly targeted at modern Intel processors. The timing results suggest that using the prime $2^{127}-1$ is significantly faster than using the prime $2^{130}-5$. Further, a judicious mix of usual polynomial-based hashing and BRW-polynomial-based hashing can provide a significantly faster alternative to usual polynomial-based hashing alone. In particular, the timing results of our implementations show that our final hash function proposal for the prime $2^{127}-1$ is much faster than the well-known Poly1305 hash function defined over the prime $2^{130}-5$, achieving speed improvements of up to 40%.
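To recall the baseline being compared against, here is a toy Python sketch of usual (Horner-style) polynomial hashing over the Mersenne prime $2^{127}-1$; the block padding is a Poly1305-style convention assumed for illustration, and neither the BRW variant nor the paper's optimized assembly is reflected here.

```python
import secrets

P = 2**127 - 1   # one of the two primes considered in the paper

def poly_hash(key: int, message: bytes, block_bytes: int = 15) -> int:
    """Horner evaluation of the message polynomial at the secret key over F_P.
    Each 15-byte block (< 2^120) gets a high padding bit so that messages of
    different lengths map to distinct polynomials (a Poly1305-style convention)."""
    h = 0
    for i in range(0, len(message), block_bytes):
        chunk = message[i:i + block_bytes]
        block = int.from_bytes(chunk, "little") | (1 << (8 * len(chunk)))
        h = ((h + block) * key) % P
    return h

key = secrets.randbelow(P)
print(hex(poly_hash(key, b"an example message for an almost-XOR-universal hash")))
```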