IACR News
Here you can see all recent updates to the IACR webpage.
08 April 2021
Zezhou Hou, Jiongjiong Ren, Shaozhen Chen
ePrint Report
Cryptanalysis based on deep learning has become a hot topic in the international cryptography community since it was first proposed. The key point of differential cryptanalysis based on deep learning is to find a neural differential distinguisher covering more rounds or having higher probability. It is therefore important to study how to improve the accuracy and the number of rounds covered by neural differential distinguishers. In this paper, we design SAT-based algorithms to find a good input difference so that the accuracy of the neural distinguisher is as high as possible. As applications, we obtain neural differential distinguishers for 9-round SIMON32/64, 10-round SIMON48/96 and 11-round SIMON64/128. For SIMON48/96, we choose $(0x0,0x100000)$ as the input difference and train 9-round and 10-round neural distinguishers. In addition, using an automatic search based on SAT, we extend the 9-round and 10-round neural distinguishers to 11-round and 12-round distinguishers by prepending the optimal 2-round differential transition $(0x400000,0x100001) \xrightarrow{2^{-4}}\left( 0x0,0x100000 \right)$. Based on the 11-round and 12-round neural distinguishers, we mount a 14-round key-recovery attack on SIMON48/96. Our attack takes about 1550s to recover the final subkey. Its average time complexity is no more than $2^{22.21}$ 14-round encryptions of SIMON48/96, and the data complexity is about $2^{12.8}$. Similarly, we mount a 13-round key-recovery attack on SIMON32/64 with input difference $(0x0,0x80)$, achieving a success rate of more than $90\%$. It takes about 23s to complete the attack, with data complexity no more than $2^{12.5}$ and time complexity no more than $2^{16.4}$. It is worth mentioning that the attacks are practical for 13-round SIMON32/64 and 14-round SIMON48/96.
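The probability of the prepended 2-round transition can be checked empirically. The following is a minimal Python sketch, not taken from the paper, assuming the standard SIMON48 Feistel round $(x, y) \mapsto (y \oplus f(x) \oplus k,\ x)$ with $f(x) = (x \lll 1 \,\&\, x \lll 8) \oplus (x \lll 2)$; over random plaintext pairs and random round keys the estimate should come out close to $2^{-4}$.

    # Illustrative sketch (not the paper's code): empirically estimate the
    # probability of the 2-round SIMON48 differential
    # (0x400000, 0x100001) -> (0x0, 0x100000), stated as 2^-4 in the abstract.
    import random

    N = 24                      # SIMON48 uses two 24-bit words
    MASK = (1 << N) - 1

    def rol(x, r):
        return ((x << r) | (x >> (N - r))) & MASK

    def f(x):
        return (rol(x, 1) & rol(x, 8)) ^ rol(x, 2)

    def enc_rounds(x, y, keys):
        for k in keys:
            x, y = (y ^ f(x) ^ k) & MASK, x
        return x, y

    def estimate(trials=1 << 18):
        din, dout = (0x400000, 0x100001), (0x0, 0x100000)
        hits = 0
        for _ in range(trials):
            keys = [random.getrandbits(N) for _ in range(2)]
            x, y = random.getrandbits(N), random.getrandbits(N)
            a, b = enc_rounds(x, y, keys)
            c, d = enc_rounds(x ^ din[0], y ^ din[1], keys)
            hits += (a ^ c, b ^ d) == dout
        return hits / trials

    if __name__ == "__main__":
        print(estimate())       # expected close to 2**-4 = 0.0625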
Gang Wang
ePrint Report
Sharding technology is becoming a promising candidate to address the scalability issues in blockchain. The key concept behind sharding technology is to partition the network status into multiple distinct smaller committees, each of which handles a disjoint set of transactions to leverage its capability of parallel processing. However, when introducing sharding technology to blockchain, several key challenges need to be resolved, such as security and heterogeneity among the participating nodes. This paper introduces RepShard, a reputation-based blockchain sharding scheme that aims to achieve both linearly scaling efficiency and system security simultaneously. RepShard adopts a two-layer hierarchical chain structure, consisting of a reputation chain and independent transaction chains. Each transaction chain is maintained within its shard to record transactions, while the reputation chain is maintained by all shards to update the reputation score of each participating node. We leverage a novel reputation scheme to record each participating node's integrated and valid contribution to the system, in which we consider the heterogeneity of participating nodes (e.g., computational resources). The reputation score used in sharding and leader election processes maintains the balance and security of each shard. RepShard relies on verifiable relay transactions for cross-shard transactions to ensure consistency between distinct shards. By integrating reputation into the sharding protocol, our scheme can offer both scalability and security at the same time.
Gang Wang, Mark Nixon
ePrint Report
Reliable and verifiable public randomness is not only an essential building block in various cryptographic primitives, but also a critical component in many distributed and decentralized protocols, e.g., blockchain sharding. A 'good' randomness generator should preserve several distinctive properties, such as public verifiability, bias-resistance, unpredictability, and availability. However, it is a challenging task to generate such good randomness. For instance, a dishonest party may behave deceptively to bias the final randomness toward his preferences, and this challenge is more serious in a distributed and decentralized system. Blockchain technology provides several promising features, such as decentralization, immutability, and trustworthiness; however, due to extremely high overheads on both communication and computation, most existing solutions face an additional scalability issue. We propose a sharding-based scheme, RandChain, to obtain practical, scalable, distributed and decentralized randomness attested by blockchain in large-scale applications. In RandChain, we eliminate the computation-heavy cryptographic operations, e.g., Publicly Verifiable Secret Sharing (PVSS), used in prevalent approaches. We build a sub-routine, RandGene, which utilizes a commit-then-reveal strategy to establish local randomness, enforced by an efficient Verifiable Random Function (VRF). RandGene generates the randomness based on statistical approaches, instead of cryptographic operations, to avoid heavy computation. RandChain maintains a two-layer hierarchical chain structure via a sharding scheme. The first-level chain is maintained by RandGene within each shard to provide a blockchain-attested verifiable randomness source. The second-level chain uses the randomness from each shard to build a randomness chain.
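As background for the commit-then-reveal step, here is a minimal, generic Python sketch of a commit-then-reveal randomness round; it is our own simplification for illustration and is not RandGene itself (in particular it omits the VRF enforcement and the statistical combination used in the paper).

    # Generic commit-then-reveal round (illustrative only).
    import hashlib, os

    def commit(nonce: bytes, value: bytes) -> bytes:
        return hashlib.sha256(nonce + value).digest()

    def commit_then_reveal(parties: int) -> bytes:
        # Commit phase: every party publishes a hash commitment to its value.
        openings = [(os.urandom(16), os.urandom(32)) for _ in range(parties)]
        commitments = [commit(nonce, value) for nonce, value in openings]

        # Reveal phase: openings are checked against the commitments, combined.
        combined = bytes(32)
        for (nonce, value), c in zip(openings, commitments):
            assert commit(nonce, value) == c, "invalid opening"
            combined = bytes(a ^ b for a, b in zip(combined, value))
        # Note: a plain XOR combine can still be biased by a party that refuses
        # to reveal after seeing the others; handling this is one of the issues
        # a scheme like RandGene must address.
        return combined

    if __name__ == "__main__":
        print(commit_then_reveal(5).hex())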
Gang Wang, Mark Nixon, Mike Boudreaux
ePrint Report
Process industries cover a wide set of industries, in which the processes are controlled by a combination of Distributed Control Systems (DCSs) and Programmable Logic Controllers (PLCs). These control systems utilize various measurements such as pressure, flow, and temperature to determine the state of the process and then use field devices such as valves and other actuating devices to manipulate the process. Since there are many different types of field devices and since each device is calibrated to its specific installation, when monitoring devices, it is important to be able to transfer not only the device measurement and diagnostics, but also characteristics about the device and the process in which it is installed. The current monitoring architecture, however, creates challenges for continuous monitoring and analysis of diagnostic data. In this paper, we present the design of an Industrial IoT system for supporting large-scale and continuous device condition monitoring and analysis in process control systems. The system design seamlessly integrates existing infrastructure (e.g., HART and WirelessHART networks, and DeltaV DCS) and newly developed hardware/software components (e.g., one-way data diode, IoT cellular architecture) together for control network data collection and streaming of the collected device diagnostic parameters to a private cloud to perform streaming data analytics designed for fault identification and prediction. A prototype system has been developed and supported by Emerson Automation Solutions and deployed in the field for design validation and long-term performance evaluation. To the best of our knowledge, this is the first publicly reported effort on IoT system design for process automation applications. The design can be readily extended for condition monitoring and analysis of many other industrial facilities and processes.
Ashrujit Ghoshal, Stefano Tessaro
ePrint Report
We study the memory-tightness of security reductions in public-key cryptography, focusing in particular on Hashed ElGamal. We prove that any straightline (i.e., without rewinding) black-box reduction needs memory which grows linearly with the number of queries of the adversary it has access to, as long as this reduction treats the underlying group generically. This makes progress towards proving a conjecture by Auerbach et al. (CRYPTO 2017), and is also the first lower bound on memory-tightness for a concrete cryptographic scheme (as opposed to generalized reductions across security notions). Our proof relies on compression arguments in the generic group model.
Daniel Noble
ePrint Report
Cuckoo Hashing is a dictionary data structure in which a data item is stored in a small constant number of possible locations. It has the appealing property that the data structure size is a small constant times larger than the combined size of all inserted data elements. However, many applications, especially cryptographic applications and Oblivious RAM, require insertions, builds and accesses to have a negligible failure probability, which standard Cuckoo Hashing cannot simultaneously achieve. An alternative proposal introduced by Kirsch et al. is to store elements which cannot be placed in the main table in a ``stash'', reducing the failure probability to $O(n^{-s})$ where $n$ is the table size and $s$ is any constant stash size. This failure probability is still not negligible. Goodrich and Mitzenmacher showed that the failure probability can be made negligible in some parameter $N$ when $n = \Omega(\log^7(N))$ and $s = \Theta(\log N)$.
In this paper, I will explore these analyses, as well as the insightful alternative analysis of Aumüller et al. Following this, I present a tighter analysis which shows failure probability negligible in $N$ for all $n = \omega(\log(N))$ (which is asymptotically optimal), and I give explicit constants for the failure probability upper bound.
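To make the data structure concrete, here is a minimal Python sketch of cuckoo hashing with a stash (an illustrative toy, not taken from the paper or from Kirsch et al.): each element lives in one of two candidate table slots, and an element that cannot be placed after a bounded number of evictions goes to the stash.

    # Toy cuckoo hash table with a stash (illustrative only).
    import random

    class CuckooWithStash:
        def __init__(self, size, max_kicks=100):
            self.size = size
            self.tables = [[None] * size, [None] * size]
            self.stash = []                      # overflow area (the "stash")
            self.max_kicks = max_kicks
            self.seeds = (random.random(), random.random())

        def _pos(self, i, key):
            return hash((self.seeds[i], key)) % self.size

        def lookup(self, key):
            return any(self.tables[i][self._pos(i, key)] == key for i in (0, 1)) \
                or key in self.stash

        def insert(self, key):
            if self.lookup(key):
                return
            for _ in range(self.max_kicks):
                for i in (0, 1):
                    p = self._pos(i, key)
                    if self.tables[i][p] is None:
                        self.tables[i][p] = key
                        return
                # Both candidate slots are full: evict one occupant and re-place it.
                i = random.choice((0, 1))
                p = self._pos(i, key)
                key, self.tables[i][p] = self.tables[i][p], key
            self.stash.append(key)               # give up and use the stash

    if __name__ == "__main__":
        t = CuckooWithStash(size=128)
        for k in range(100):
            t.insert(k)
        assert all(t.lookup(k) for k in range(100))
        print("stash size:", len(t.stash))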
Chitchanok Chuengsatiansup, Damien Stehle
ePrint Report
We investigate the efficiency of a (module-)LWR-based PRF built using the GGM design. Our construction enjoys the security proof of the GGM construction and the (module-)LWR hardness assumption, which is believed to be post-quantum secure. We propose GGM-based PRFs from PRGs with a larger output-to-input ratio. This reduces the number of PRG invocations, which improves the PRF performance and reduces the security loss in the GGM security reduction. Our construction bridges the gap between practical and provably secure PRFs. We demonstrate the efficiency of our construction by providing parameters achieving at least 128-bit post-quantum security and optimized implementations utilizing AVX2 vector instructions. Our PRF requires, on average, only 39.4 cycles per output byte.
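A minimal Python sketch of the underlying idea of a GGM-style PRF built from a PRG with a larger output-to-input ratio is given below; it only illustrates the tree structure with a hash-based stand-in PRG, whereas the paper instantiates the PRG from (module-)LWR.

    # GGM tree PRF with a 2^W-ary PRG: a B-bit input needs B/W PRG calls
    # instead of B (illustrative stand-in PRG, not the paper's construction).
    import hashlib

    SEED_BYTES = 32
    W = 4                                    # consume 4 input bits per PRG call

    def prg(seed: bytes) -> list[bytes]:
        # Stand-in PRG expanding one seed into 2^W child seeds.
        out = hashlib.shake_256(seed).digest(SEED_BYTES << W)
        return [out[i * SEED_BYTES:(i + 1) * SEED_BYTES] for i in range(1 << W)]

    def ggm_prf(key: bytes, x: int, in_bits: int = 128) -> bytes:
        assert in_bits % W == 0
        node = key
        for level in range(in_bits // W):    # one PRG call per W input bits
            chunk = (x >> (in_bits - W * (level + 1))) & ((1 << W) - 1)
            node = prg(node)[chunk]
        return node

    if __name__ == "__main__":
        print(ggm_prf(b"\x00" * SEED_BYTES, 2021).hex())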
Anirudh C, Ashish Choudhury, Arpita Patra
ePrint Report
Verifiable Secret-Sharing (VSS) is a fundamental primitive in secure distributed computing. It is used as an important building block in several distributed computing tasks, such as Byzantine agreement and secure multi-party computation. VSS has been widely studied in various dimensions over the last three decades and several important results have been achieved regarding the fault-tolerance, round-complexity and communication efficiency of VSS schemes. In this article, we consider VSS schemes with perfect security, tolerating computationally unbounded adversaries. We comprehensively survey the existing perfectly-secure VSS schemes in three different settings, namely the synchronous, asynchronous and hybrid communication settings, and provide the full details of each of the existing schemes in these settings. The aim of this survey is to provide a clear foundation for researchers who are interested in understanding and extending the state-of-the-art perfectly-secure VSS schemes.
Daniel Nager, "Danny" Niu Jianfang
ePrint Report
This paper proposes a new public-key cryptosystem based on a quasigroup with the special property of "restricted-commutativity". We argue its security empirically and present constructions for key exchange and digital signatures. To the best of our knowledge, no polynomial-time quantum attack on our primitive and constructions is currently known.
06 April 2021
Cholun Kim
ePrint Report
A proxy signature is a kind of digital signature in which a user, called the original signer, can delegate his signing rights to another user, called the proxy signer, who can then sign messages on behalf of the original signer. Certificateless proxy signature (CLPS) refers to proxy signatures in the certificateless setting, which suffers neither from the certificate management issue of traditional PKI nor from the private key escrow problem of the identity-based setting. Up to now, a number of CLPS schemes have been proposed, but some of those schemes either lack a formal security analysis or turn out to be insecure, while others are less efficient because they use costly operations, including bilinear pairings and map-to-point hashing on elliptic curve groups.
In this paper, we formalize the definition and security model of CLPS schemes. We then construct a pairing-free CLPS scheme from standard ECDSA and prove its security in the random oracle model under the hardness assumption of the discrete semi-logarithm problem, as in the provable security results for ECDSA.
Raluca Posteuca, Tomer Ashur
ePrint Report
Newly designed block ciphers are required to show resistance against known attacks, e.g., linear and differential cryptanalysis. Two widely used methods to do this are to employ an automated search tool (e.g., MILP, SAT/SMT, etc.) and/or to provide a wide-trail argument. In both cases, the core of the argument consists of bounding the transition probability of the statistical property over an isolated non-linear operation and then multiplying it by the number of such operations (e.g., the number of active S-boxes).
In this paper we show that in the case of linear cryptanalysis such strategies can sometimes lead to a gap between the claimed security and the actual one, and that this gap can be exploited by a malicious designer. We introduce RooD, a block cipher with a carefully crafted backdoor. Using the wide-trail strategy, we argue the resistance of the cipher against linear and differential cryptanalysis. However, the cipher has a key-dependent iterative linear approximation over 12 rounds, holding with probability 1. This property is based on the linear hull effect, although every linear trail underlying the linear hull has probability smaller than 1.
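To make the bounding strategy above concrete, consider an illustrative calculation with assumed numbers (not taken from the paper): if every linear trail over the targeted rounds activates at least $a$ S-boxes and the S-box has maximal absolute linear correlation $c_{\max}$, then each individual trail satisfies $|c_{\mathrm{trail}}| \leq c_{\max}^{\,a}$; for example, $a = 25$ and $c_{\max} = 2^{-2}$ give $|c_{\mathrm{trail}}| \leq 2^{-50}$. For a fixed key, however, the correlation of a linear approximation $(\alpha, \beta)$ is the signed sum of the correlations of all trails in its hull, $c(\alpha, \beta) = \sum_{\text{trails } \alpha \to \beta} c_{\mathrm{trail}}$, and it is this hull correlation, not any single trail, that governs the attack. The gap RooD exploits is exactly that this sum can be as large as $1$ (key-dependently) even though every individual trail obeys the bound.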
Yukun Wang, Mingqiang Wang
ePrint Report
A software watermarking scheme enables one to embed a ``mark'' (i.e., a message) into a program without significantly changing the functionality. Moreover, any removal of the watermark from a marked program is futile without significantly changing the functionality of the program. At present, the construction of software watermarking mainly focuses on watermarking pseudorandom functions (PRFs), watermarking public-key encryption, watermarking signatures, etc.
In this work, we construct new watermarking PRFs from lattices which provide collusion resistance and public extraction. Our schemes are the first to simultaneously achieve all of these properties. The key to the success of our new constructions lies in two parts. First, we relax the notion of functionality-preserving. In general, one requires that a marked program (approximately) preserve the input/output behavior of the original program. For our scheme, the output circuit is divided into two parts, one for the PRF output and the other for auxiliary functions. As a result, we only require the PRF output circuit to be functionality-preserving. Second, the marking method we use is essentially different from previous schemes. In general, marking a program changes its output at some special points, and the extraction algorithm determines whether a circuit is marked by checking whether the outputs at these special points have been changed. In our schemes, we instead use a constrained signature to mark a PRF circuit.
Wenshuo Guo, Fangwei Fu
ePrint Report
This paper presents two modifications of Loidreau's code-based cryptosystem. Loidreau's cryptosystem is a rank-metric code-based cryptosystem constructed by using Gabidulin codes in the McEliece setting. Recently, a polynomial-time key recovery attack was proposed to break Loidreau's cryptosystem in some cases. To prevent this attack, in Modification I we propose the use of subcodes to disguise the secret codes. In Modification II, we choose a random matrix of low column rank over $\mathbb{F}_q$ to mix with the secret matrix. According to our analysis, these two modifications can both resist the existing structural attacks. Additionally, we adopt the systematic generator matrix of the public code to reduce the public-key size. In addition to stronger resistance against structural attacks and a more compact representation of public keys, our modifications also have larger information transmission rates.
Donghoon Chang, Meltem Sonmez Turan
ePrint Report
Grain-128AEAD is one of the second-round candidates of the NIST lightweight cryptography standardization process. There is an existing body of third-party analysis of the earlier versions of the Grain family that provides insights into the security of Grain-128AEAD. Unlike the earlier versions, Grain-128AEAD reintroduces the key into the internal state during initialization. The designers claim that, due to this change, internal state recovery no longer results in key recovery. In this paper, we analyze this claim under different scenarios.
Toomas Krips, Helger Lipmaa
ePrint Report
Efficient shuffle arguments are essential in mixnet-based e-voting solutions. Terelius and Wikström (TW) proposed a 5-round shuffle argument based on unique factorization in polynomial rings. Their argument is available as the Verificatum software solution for real-world developers, and has been used in real-world elections. It is also the fastest non-patented shuffle argument. We will use the same basic idea as TW but significantly optimize their approach. We generalize the TW characterization of permutation matrices; this enables us to reduce the communication without adding too much to the computation. We make the TW shuffle argument computationally more efficient by using Groth's coefficient-product argument (JOC, 2010). Additionally, we use batching techniques. The resulting shuffle argument is the fastest known $\leq 5$-message shuffle argument, and, depending on the implementation, can be faster than Groth's argument (the fastest 7-message shuffle argument).
Nikolaj Sidorenco, Sabine Oechsner, Bas Spitters
ePrint Report
Zero-knowledge proofs allow a prover to convince a verifier of the veracity of a statement without revealing any other information. An interesting class of zero-knowledge protocols comprises those following the MPC-in-the-head paradigm (Ishai et al., STOC 07), which use secure multiparty computation (MPC) protocols as the basis. Efficient instances of this paradigm have emerged as an active research topic in recent years, starting with ZKBoo (Giacomelli et al., USENIX 16). Zero-knowledge protocols are a vital building block in the design of privacy-preserving technologies as well as cryptographic primitives like digital signature schemes that provide post-quantum security.
This work investigates the security of zero-knowledge protocols following the MPC-in-the-head paradigm. We provide the first machine-checked security proof of such a protocol, using ZKBoo as the example. Our proofs are checked in the EasyCrypt proof assistant. To enable a modular security proof, we develop a new security notion for the MPC protocols used in MPC-in-the-head zero-knowledge protocols. This allows us to recast existing security proofs in a black-box fashion, which we believe to be of independent interest.
Duc-Phong Le, Sze Ling Yeo, Khoongming Khoo
ePrint Report
An algebraic differential fault attack (ADFA) is an attack in which an attacker combines a differential fault attack and an algebraic technique to break a targeted cipher. In this paper, we present three attacks using three different algebraic techniques combined with a differential fault attack in the bit-flip fault model to break the SIMON block cipher. First, we introduce a new analytic method that is based on a differential trail between the correct and faulty ciphertexts. This method is able to recover the entire master key of any member of the SIMON family by injecting faults into a single round of the cipher. In our second attack, we present a simplified Gröbner basis algorithm to solve the faulty system. We show that this method can totally break SIMON ciphers with only 3 to 5 faults injected. Our third attack combines a fault attack with a modern SAT solver. By guessing some key bits and with only a single fault injected at round T - 6, where T is the number of rounds of a SIMON cipher, this combined attack manages to recover the master key of the cipher. For the last two attacks, we perform experiments to demonstrate their effectiveness. These experiments are implemented on personal computers and run in very reasonable time.
Elaine Shi, Ke Wu
ePrint Report
Anonymous routing is one of the most fundamental online privacy problems and has been studied extensively for decades. Almost all known approaches for anonymous routing (e.g., mix-nets, DC-nets, and others) rely on multiple servers or routers to engage in some {\it interactive} protocol; and anonymity is guaranteed in the {\it threshold} model, i.e., if one or more of the servers/routers behave honestly.
Departing from all prior approaches, we propose a novel {\it non-interactive} abstraction called a Non-Interactive Anonymous Router (NIAR), which works even with a {\it single untrusted router}. In a NIAR scheme, suppose that $n$ senders each want to talk to a distinct receiver. A one-time trusted setup is performed such that each sender obtains a sending key, each receiver obtains a receiving key, and the router receives a {\it token} that ``encrypts'' the permutation mapping the senders to receivers. In every time step, each sender can encrypt its message using its sender key, and the router can use its token to convert the $n$ ciphertexts received from the senders to $n$ {\it transformed ciphertexts}. Each transformed ciphertext is delivered to the corresponding receiver, and the receiver can decrypt the message using its receiver key. Imprecisely speaking, security requires that the untrusted router, even when colluding with a subset of corrupt senders and/or receivers, should not be able to compromise the privacy of honest parties, including who is talking to who, and the message contents.
We show how to construct a communication-efficient NIAR scheme with provable security guarantees based on the standard Decisional Linear assumption in suitable bilinear groups. We show that a compelling application of NIAR is to realize a Non-Interactive Anonymous Shuffler (NIAS), where an untrusted server or data analyst can only decrypt a permuted version of the messages coming from $n$ senders where the permutation is hidden. NIAS can be adopted to construct privacy-preserving surveys, differentially private protocols in the shuffle model, and pseudonymous bulletin boards.
Besides this main result, we also describe a variant that achieves fault tolerance when a subset of the senders may crash. Finally, we further explore a paranoid notion of security called full insider protection, and show that if we additionally assume sub-exponentially secure Indistinguishability Obfuscation as well as sub-exponentially secure one-way functions, one can construct a NIAR scheme with paranoid security.
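The NIAR syntax described above can be summarized as four algorithms. Below is a hedged Python interface sketch; the names, the explicit time-step parameter, and the indexing convention that transformed ciphertext $i$ is delivered to receiver $i$ are our own, and no concrete construction is implied.

    # Interface-only sketch of the NIAR syntax (illustrative, not a scheme).
    from abc import ABC, abstractmethod
    from typing import Sequence, Tuple

    class NIAR(ABC):
        @abstractmethod
        def setup(self, n: int, perm: Sequence[int]) -> Tuple[list, list, bytes]:
            """One-time trusted setup for n sender/receiver pairs.
            Returns (sender_keys, receiver_keys, router_token); the token
            'encrypts' the permutation perm mapping senders to receivers."""

        @abstractmethod
        def enc(self, sender_key, t: int, msg: bytes) -> bytes:
            """Sender encrypts its message for time step t with its key."""

        @abstractmethod
        def route(self, token: bytes, cts: Sequence[bytes]) -> list:
            """The (untrusted) router converts the n sender ciphertexts into
            n transformed ciphertexts, one per receiver."""

        @abstractmethod
        def dec(self, receiver_key, t: int, ct: bytes) -> bytes:
            """Receiver decrypts its transformed ciphertext for time step t."""

    def correct(scheme: NIAR, perm: Sequence[int], msgs: Sequence[bytes]) -> bool:
        # Correctness: receiver perm[i] obtains sender i's message.
        sks, rks, token = scheme.setup(len(msgs), perm)
        outs = scheme.route(token, [scheme.enc(sks[i], 0, m)
                                    for i, m in enumerate(msgs)])
        return all(scheme.dec(rks[perm[i]], 0, outs[perm[i]]) == msgs[i]
                   for i in range(len(msgs)))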
Sonia Belaïd, Matthieu Rivain, Abdul Rahman Taleb
ePrint Report
The random probing model is a leakage model in which each wire of a circuit leaks with a given probability $p$. This model enjoys practical relevance thanks to a reduction to the noisy leakage model, which is admitted as the right formalization for power and electromagnetic side-channel attacks. In addition, the random probing model is much more convenient than the noisy leakage model to prove the security of masking schemes. In a recent work, Ananth, Ishai and Sahai (CRYPTO 2018) introduce a nice expansion strategy to construct random probing secure circuits. Their construction tolerates a leakage probability of $2^{-26}$, which is the first quantified achievable leakage probability in the random probing model. In a follow-up work, Bela\"id, Coron, Prouff, Rivain and Taleb (CRYPTO 2020) generalize their idea and put forward a complete and practical framework to generate random probing secure circuits. The so-called expanding compiler can bootstrap simple base gadgets as long as they satisfy a new security notion called random probing expandability (RPE). They further provide an instantiation of the framework which tolerates a $2^{-8}$ leakage probability in complexity $\mathcal{O}(\kappa^{7.5})$ where $\kappa$ denotes the security parameter.
In this paper, we provide an in-depth analysis of the RPE security notion. We exhibit the first upper bounds for the main parameter of an RPE gadget, which is known as the amplification order. We further show that the RPE notion can be made tighter and we exhibit strong connections between RPE and the strong non-interference (SNI) composition notion. We then introduce the first generic constructions of gadgets achieving RPE for any number of shares and with nearly optimal amplification orders and provide an asymptotic analysis of such constructions. Last but not least, we introduce new concrete constructions of small gadgets achieving maximal amplification orders. This allows us to obtain much more efficient instantiations of the expanding compiler: we obtain a complexity of $\mathcal{O}(\kappa^{3.9})$ for a slightly better leakage probability, as well as $\mathcal{O}(\kappa^{3.2})$ for a slightly lower leakage probability.
Aaram Yun
ePrint Report
In the quantum random oracle model, the adversary may make quantum superposition queries to the random oracle. Since even a single query can potentially probe exponentially many points, classical proof techniques are hard to apply. For example, recording the oracle queries seemed difficult.
In 2018, Mark Zhandry showed that, despite the apparent difficulties, it is in fact possible to record the quantum queries. He defined the compressed oracle, which is indistinguishable from the quantum random oracle and records the information the adversary has gained through its oracle queries. This is technically subtle work, which we believe is challenging to grasp fully.
Our aim is to obtain a mathematically clean, simple reinterpretation of the compressed oracle technique. For each partial function, we define what we call the formation and the completion of that partial function. The completions describe what happens to the real quantum random oracle, and the formations describe what happens to the compressed oracle. We show that the formations are 'isomorphic' to the completions, giving an alternative proof that the compressed oracle is indistinguishable from the quantum random oracle.