IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available via RSS.
31 January 2025
Reuven Yakar, Avishai Wool, Eyal Ronen
We first validate this hypothesis: We evaluate two commercial-grade GPU-based implementations of RSA within OpenSSL (called RNS and MP), under a wide range of overclocking levels and temperatures, and demonstrate that both implementations are vulnerable.
However, and more importantly, we show for the first time that even if the GPU is benignly overclocked to a seemingly "safe" rate, a successful attack can still be mounted, over the network, by simply sending requests at an aggressive rate to increase the temperature. Hence, setting any level of overclocking on the GPU is risky.
Moreover, we observe a huge difference in the implementations' vulnerability: the rate of RSA breaks for RNS is 4 orders of magnitude higher than that of MP. We attribute this difference to the implementations' memory usage patterns: RNS makes heavy use of the GPU's global memory, which is accessed via both the Unified (L1) cache and the L2 cache; MP primarily uses "shared" on-chip memory, which is local to each GPU Streaming MultiProcessor (SM) and is uncached, utilizing the memory banks used for the L1 cache. We believe that the computation faults are caused by reads from the global memory, which under a combination of overclocking, high temperature and high memory contention, occasionally return stale values.
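As background for readers, the standard way a single faulty RSA-CRT signature exposes the private key is the classic Boneh-DeMillo-Lipton/Lenstra gcd attack. The toy Python sketch below (tiny illustrative key sizes, a simulated bit-flip fault, and not the paper's code or GPU setting) shows only that recovery step.

```python
# Toy sketch (not the paper's code): how one faulty RSA-CRT signature leaks a factor.
from math import gcd

# Tiny illustrative RSA key -- real keys use >= 2048-bit moduli.
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))
dp, dq = d % (p - 1), d % (q - 1)
q_inv = pow(q, -1, p)

def crt_sign(m, fault_in_p=False):
    """RSA-CRT signing; optionally flip a bit in the mod-p half to model a compute fault."""
    sp = pow(m, dp, p)
    sq = pow(m, dq, q)
    if fault_in_p:
        sp ^= 1                         # a single faulty read/compute in one CRT branch
    h = (q_inv * (sp - sq)) % p
    return (sq + q * h) % n             # Garner recombination

m = 123456789 % n
s_good = crt_sign(m)
s_bad = crt_sign(m, fault_in_p=True)

assert pow(s_good, e, n) == m           # the correct signature verifies
factor = gcd(abs(pow(s_bad, e, n) - m), n)   # the faulty one is correct mod q only, so gcd reveals q
assert factor == q
print("recovered factor:", factor)
```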
George Kadianakis, Arantxa Zapico, Hossein Hafezi, Benedikt Bunz
Simon Holmgaard Kamp
This is resolved by attaching justifiers to all messages, forcing the adversary to choose between being ignored by the honest parties and sending messages with certain validity properties. Using these, we define validated proxcensus and show that it can be instantiated in asynchrony with the same recursive structure and round complexity as synchronous proxcensus. In asynchrony, the extraction phase incurs a security loss of one bit, which is recovered by expanding to twice as many grades using an extra round of communication. This results in a $\lambda+2$ round VABA and a $\lambda+3$ round BA, both with $2^{-\lambda}$ error probability and communication complexity matching Fitzi et al.
Karthikeyan Bhargavan, Maxime Buyse, Lucas Franceschino, Lasse Letager Hansen, Franziskus Kiefer, Jonas Schneider-Bensch, Bas Spitters
Nico Döttling, Jesko Dujmovic, Antoine Joux
In this work, we consider space analogues of timed cryptographic primitives, which we refer to as space-hard primitives. Roughly speaking, these notions require honest protocol parties to invest a certain amount of space and provide security against space-constrained adversaries. While inefficient generic constructions of timed primitives from strong assumptions such as indistinguishability obfuscation can be adapted to the space-hard setting, we currently lack concrete and versatile algebraically structured assumptions for space-hard cryptography. In this work, we initiate the study of space-hard primitives from concrete algebraic assumptions relating to the problem of root-finding of sparse polynomials. Our motivation to study this problem is a candidate construction of VDFs by Boneh et al. (CRYPTO 2018), which is based on the hardness of inverting permutation polynomials. Somewhat anticlimactically, our first contribution is a full break of this candidate. However, we then revise this hardness assumption by dropping the permutation requirement and considering arbitrary sparse high-degree polynomials. We argue that this type of assumption is much better suited to space-hardness than to timed cryptography. We then proceed to construct both space-lock puzzles and verifiable space-hard functions from this assumption.
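To make the underlying object concrete, here is a toy Python sketch (illustrative parameters, not the paper's construction) of why a sparse polynomial of enormous degree over a prime field is cheap to evaluate yet awkward to invert naively.

```python
# Toy illustration (not the paper's scheme): a sparse, high-degree polynomial over F_p
# is cheap to evaluate, but inverting it naively means sweeping the field.
import random

p = 2**61 - 1                    # a Mersenne prime; illustrative only
k = 8                            # sparsity: number of monomials
random.seed(1)
terms = [(random.randrange(1, p), random.randrange(1, p)) for _ in range(k)]  # (coeff, degree) pairs

def f(x: int) -> int:
    """Evaluation costs only k modular exponentiations, i.e. O(k log p) multiplications."""
    return sum(c * pow(x, d, p) for c, d in terms) % p

x0 = random.randrange(p)
y0 = f(x0)                       # fast, even though the degree is astronomically large

# Naively finding some x with f(x) = y0 checks candidate field elements one by one;
# the abstract's point is to base space-hardness (rather than timed primitives) on
# the difficulty of this kind of root-finding for sparse polynomials.
hits = [x for x in range(10_000) if f(x) == y0]      # only a vanishing fraction of F_p
print("evaluation done; brute-force search over 10^4 of %d elements found %d hits" % (p, len(hits)))
```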
Yevgeniy Dodis, Jiaxin Guan, Peter Hall, Alison Lin
In this work we revisit the everlasting privacy model of Dodis and Yeo (ITC'21), which we call Hypervisor EverLasting Privacy (HELP). HELP is a novel architecture for generating shared randomness using a network of semi-trusted servers (or "hypervisors"), trading the need to store/distribute large shared secrets for the assumptions that it is hard to: (a) simultaneously compromise too many publicly accessible ad-hoc servers; and (b) break a computationally-secure encryption scheme very quickly. While Dodis and Yeo presented good HELP solutions in the asymptotic sense, their solutions were concretely expensive and used heavy tools (like large finite fields or gigantic Toeplitz matrices).
We abstract and generalize the HELP architecture to allow for more efficient instantiations, and construct several concretely efficient HELP solutions. Our solutions use elementary cryptographic operations, such as hashing and message authentication. We also prove a very strong composition theorem showing that our EL architecture can use any message transmission method which is computationally-secure in the Universal Composability (UC) framework. This is the first positive composition result for everlasting privacy, which was otherwise known to suffer from many "non-composition" results (Müller-Quade and Unruh; J of Cryptology'10).
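As a purely hypothetical illustration of "elementary cryptographic operations, such as hashing and message authentication", the toy sketch below hash-combines MAC-authenticated random shares from several servers so that the output stays unpredictable as long as at least one contribution remains hidden. It is not the paper's HELP construction and omits everything that makes the real architecture work (epoch handling, erasures, UC-secure message transmission).

```python
# Hypothetical toy combiner, NOT the HELP construction from the paper: a client gathers
# authenticated random shares from several semi-trusted servers and hashes them together.
import hmac, hashlib, secrets

def server_share(server_key: bytes, epoch: int):
    """Each server returns a fresh random share plus a MAC binding it to the epoch."""
    share = secrets.token_bytes(32)
    tag = hmac.new(server_key, epoch.to_bytes(8, "big") + share, hashlib.sha256).digest()
    return share, tag

def combine(contributions, server_keys, epoch: int) -> bytes:
    """Verify each MAC, then hash all shares together; the result is unpredictable
    to an attacker who misses even one share (hash modeled as a random oracle)."""
    h = hashlib.sha256()
    for (share, tag), key in zip(contributions, server_keys):
        expected = hmac.new(key, epoch.to_bytes(8, "big") + share, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("bad MAC: drop this server's contribution")
        h.update(share)
    return h.digest()

keys = [secrets.token_bytes(32) for _ in range(3)]   # pre-shared MAC keys, one per server
epoch = 42
contributions = [server_share(k, epoch) for k in keys]
print(combine(contributions, keys, epoch).hex())
```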
29 January 2025
Munich, Germany, 25 June -
Submission deadline: 31 March 2025
Notification: 30 April 2025
Cambridge, USA, 18 April 2025
Submission deadline: 10 February 2025
INSA Lyon, CITI Lab (Villeurbanne, France)
The CITI Lab at INSA Lyon, France, is seeking a motivated PhD student to engage in pioneering research in frugal cryptography.
The research project focuses on designing and analyzing cryptographic primitives, evaluating their energy consumption in various contexts such as Internet communication and Machine Learning. The PhD candidate will also develop generic tools and methodologies to assess the energy impact of cryptographic implementations. The work aims to create secure and efficient cryptographic solutions adapted to the needs of a digital and sustainable future.
This fully funded position has a 3-year duration, with a negotiable start date.
Responsibilities
- Collaborate with faculty and researchers to design innovative cryptographic protocols.
- Publish research findings in leading computer science conferences and journals.
- Participate in academic activities, including seminars, workshops, and conferences, to stay updated on advancements in the field.
- Potentially assist in teaching duties as a teaching assistant (TA).
Requirements
- A strong background in cryptography, with an MSc in Computer Science, Engineering, Mathematics, or a related discipline (preferred but not mandatory).
- Excellent communication and interpersonal skills, with the ability to thrive in a collaborative research environment.
- Strong organizational and time-management abilities to balance research, coursework, and teaching responsibilities.
- Critical thinking and analytical skills, with fluency in technical English.
- Proficiency in programming.
To Apply: Please submit your CV along with transcripts from both your Bachelor’s and Master’s degrees.
Closing date for applications:
Contact: Clementine Gritti (clementine.gritt(at)insa-lyon.fr)
28 January 2025
Rabiah Alnashwan, Benjamin Dowling, Bhagya Wimalasiri
Jeremiah Blocki, Seunghoon Lee
We verify that the framework of Coretti et al. (CRYPTO/EUROCRYPT'18) extends to settings with multiple idealized primitives, and we apply this framework to analyze the multi-user security of (short) Schnorr signatures and the CCA-security of PSEC-KEM against pre-processing attackers in the Random Oracle Model (ROM) plus the Generic Group Model (GGM). Prior work of Blocki and Lee (EUROCRYPT'22) used complicated compression arguments to analyze the security of key-prefixed short Schnorr signatures, where the random oracle is salted with the user's public key. However, the security analysis did not extend to standardized implementations of Schnorr signatures (e.g., BSI-TR-03111 or ISO/IEC 14888-3) which do not adopt key-prefixing, but take other measures to protect against preprocessing attacks by disallowing signatures that use a preimage of $0$. Blocki and Lee (EUROCRYPT'22) left the (in)security of such "nonzero Schnorr signature" constructions as an open question. We fully resolve this open question, demonstrating that (short) nonzero Schnorr signatures are also secure against preprocessing attacks. We also analyze PSEC-KEM in the ROM+GGM, demonstrating that this Key Encapsulation Mechanism (KEM) is CPA-secure against preprocessing attacks.
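For readers unfamiliar with the "nonzero" tweak, the following toy sketch (insecure demo parameters, the textbook scheme rather than any standard's exact encoding) shows short Schnorr signing with a truncated challenge hash in which a zero challenge is rejected and resampled, instead of salting the hash with the public key.

```python
# Schematic "nonzero" short Schnorr signatures (toy parameters, illustration only).
import hashlib, secrets

p, q, g = 607, 101, 64          # toy Schnorr group: q is prime, q | p - 1, g has order q
CHAL_BYTES = 2                  # "short" scheme: truncate the hash-derived challenge

def H(R: int, msg: bytes) -> int:
    h = hashlib.sha256(R.to_bytes(2, "big") + msg).digest()
    return int.from_bytes(h[:CHAL_BYTES], "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def sign(x: int, msg: bytes):
    while True:
        r = secrets.randbelow(q - 1) + 1
        e = H(pow(g, r, p), msg)
        if e != 0:               # the "nonzero" rule: never output a zero challenge
            return e, (r + e * x) % q

def verify(y: int, msg: bytes, sig) -> bool:
    e, s = sig
    if e == 0:                   # verifiers likewise reject zero challenges
        return False
    R = (pow(g, s, p) * pow(y, q - e, p)) % p   # recompute R = g^s * y^{-e}
    return H(R, msg) == e

x, y = keygen()
sig = sign(x, b"hello")
assert verify(y, b"hello", sig)
```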
Jonas Bertels, Hilder V. L. Pereira, Ingrid Verbauwhede
Pierrick Gaudry, Julien Soumier, Pierre-Jean Spaenlehauer
Andrea Basso, Giacomo Borin, Wouter Castryck, Maria Corte-Real Santos, Riccardo Invernizzi, Antonin Leroux, Luciano Maino, Frederik Vercauteren, Benjamin Wesolowski
Minghang Li, Qianhong Wu, Zhipeng Wang, Bo Qin, Bohang Wei, Hang Ruan, Shihong Xiong, Zhenyang Ding
We propose TockOwl, the first asynchronous consensus protocol with fault adaptability. TockOwl features quadratic communication and constant round complexity, allowing it to remain efficient in fault-free scenarios. TockOwl also possesses crash robustness, enabling it to maintain stable performance when facing crash faults. These properties collectively ensure the fault adaptability of TockOwl.
Furthermore, we propose TockOwl+ that has network adaptability. TockOwl+ incorporates both fast and slow tracks and employs hedging delays, allowing it to achieve low latency comparable to partially synchronous protocols without waiting for timeouts in asynchronous environments. Compared to the latest dual-track protocols, the slow track of TockOwl+ is simpler, implying shorter latency in fully asynchronous environments.
Henry Bambury, Phong Q. Nguyen
Ryan Lehmkuhl, Alexandra Henzinger, Henry Corrigan-Gibbs
We construct a distributional-PIR scheme that makes black-box use of classic PIR protocols, and prove a lower bound on the server-runtime of a large class of distributional-PIR schemes. On two real-world popularity distributions, our distributional-PIR construction reduces compute costs by $5$-$77\times$ compared to existing techniques. Finally, we build CrowdSurf, an end-to-end system for privately fetching tweets, and show that distributional-PIR reduces the end-to-end server cost by $8\times$.
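The "classic PIR" building block can be as simple as the textbook two-server XOR scheme sketched below; this is a generic illustration of such a building block, not the specific PIR protocol that CrowdSurf or the distributional-PIR construction actually uses.

```python
# Textbook two-server XOR-based PIR (illustrative building block, not CrowdSurf itself).
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def server_answer(db, selector):
    """Each server XORs together the records whose selector bit is 1."""
    ans = bytes(len(db[0]))
    for rec, bit in zip(db, selector):
        if bit:
            ans = xor_bytes(ans, rec)
    return ans

def client_query(n: int, index: int):
    """Two selectors that differ only at the desired index; each alone looks uniformly random."""
    q1 = [secrets.randbelow(2) for _ in range(n)]
    q2 = q1.copy()
    q2[index] ^= 1
    return q1, q2

db = [i.to_bytes(4, "big") for i in range(16)]      # toy database of 16 fixed-size records
q1, q2 = client_query(len(db), index=5)
record = xor_bytes(server_answer(db, q1), server_answer(db, q2))
assert record == db[5]                              # all other records cancel in the XOR
```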
Xavier Bultel, Charles Olivier-Anclin
A natural fix to this model, already introduced in some previous work, is to adopt a corruption model in which the attacker itself can generate the keys of certain users, which seems much more coherent in a context where the group of users can be assembled in an ad hoc way at signing time. We believe that these two changes make the security model more realistic. Indeed, within this model, our counter-examples become insecure. Furthermore, we show that most of the schemes we surveyed in the literature appear to have been designed to achieve the security guaranteed by this revised model, which reinforces the idea that the model is closer to the informal intuition of what anonymity should be in linkable ring signatures.
Neekon Vafa, Vinod Vaikuntanathan
For $\mathrm{SBP}_\kappa$, statistically, solutions exist with $\kappa(x) = 2^{-\Theta(x)}$ (Aubin, Perkins and Zdeborova, Journal of Physics 2019). For large $n$, the best that efficient algorithms have been able to achieve is a far cry from the statistical bound, namely $\kappa(x) = \Theta(1/\sqrt{x})$ (Bansal and Spencer, Random Structures and Algorithms 2020). The problem has been extensively studied in the TCS and statistics communities, and Gamarnik, Kizildag, Perkins and Xu (FOCS 2022) conjecture that Bansal-Spencer is tight: namely, $\kappa(x) = \widetilde{\Theta}(1/\sqrt{x})$ is the optimal value achieved by computationally efficient algorithms. We prove their conjecture assuming the worst-case hardness of approximating the shortest vector problem on lattices.
For $\mathrm{NPP}_\kappa$, statistically, solutions exist with $\kappa(m) = \Theta(2^{-m})$ (Karmarkar, Karp, Lueker and Odlyzko, Journal of Applied Probability 1986). Karmarkar and Karp's classical differencing algorithm achieves $\kappa(m) = 2^{-O(\log^2 m)}$. We prove that Karmarkar-Karp is nearly tight: namely, no polynomial-time algorithm can achieve $\kappa(m) = 2^{-\Omega(\log^3 m)}$, once again assuming the worst-case subexponential hardness of approximating the shortest vector problem on lattices to within a subexponential factor.
Our hardness results are versatile, and hold with respect to different distributions of the matrix $\mathbf{A}$ (e.g., i.i.d. uniform entries from $[0,1]$) and weaker requirements on the solution vector $\mathbf{x}$.
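For reference, the Karmarkar-Karp differencing heuristic cited above is short enough to state in full; the sketch below runs it on i.i.d. uniform inputs, where the achieved discrepancy is typically of the $2^{-O(\log^2 m)}$ order quoted in the abstract.

```python
# Karmarkar-Karp differencing heuristic for number partitioning (illustrative sketch).
import heapq, random

def karmarkar_karp(nums):
    """Repeatedly replace the two largest numbers by their difference;
    the last remaining value is the discrepancy |sum of one part - sum of the other|."""
    heap = [-x for x in nums]           # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))  # committing a and b to opposite parts
    return -heap[0]

random.seed(0)
m = 1000
nums = [random.random() for _ in range(m)]      # i.i.d. uniform on [0, 1]
print("KK discrepancy:", karmarkar_karp(nums))  # far larger than the statistical optimum ~2^{-m}
```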