Papers updated in the last 7 days (74 results)

Last updated:  2025-01-26
DEEP Commitments and Their Applications
Alan Szepieniec
This note studies a method of committing to a polynomial in a way that allows executions of low-degree tests such as FRI to be batched and even deferred. In particular, it achieves (unlimited-depth) aggregation for STARKs.
Last updated:  2025-01-26
Efficient 2PC for Constant Round Secure Equality Testing and Comparison
Tianpei Lu, Xin Kang, Bingsheng Zhang, Zhuo Ma, Xiaoyuan Zhang, Yang Liu, Kui Ren, and Chun Chen
Secure equality testing and comparison are two important primitives widely used in many secure computation scenarios, such as privacy-preserving machine learning, private set intersection, and secure data mining. This work proposes new constant-round two-party computation (2PC) protocols for secure equality testing and comparison. Our protocols are designed in the online/offline paradigm. For 32-bit inputs, the online communication costs of our equality testing protocol and secure comparison protocol are as low as 76 bits (1\% of ABY) and 384 bits (5\% of ABY), respectively. Our benchmarks show that (i) for 32-bit equality testing, our scheme performs $9\times$ faster than Guo \emph{et al.} (EUROCRYPT 2023) and $15\times$ faster than the garbled circuit (GC) with the half-gate optimization (CRYPTO 2015), and (ii) for 32-bit secure comparison, our scheme performs $3\times$ faster than Guo \emph{et al.} (EUROCRYPT 2023) and $6\times$ faster than both Rathee \emph{et al.} (CCS 2020) and GC with the half-gate optimization.
Last updated:  2025-01-25
Twist and Shout: Faster memory checking arguments via one-hot addressing and increments
Srinath Setty and Justin Thaler
A memory checking argument enables a prover to prove to a verifier that it is correctly processing reads and writes to memory. Such arguments are used widely in modern SNARKs, especially in zkVMs, where the prover proves the correct execution of a CPU including the correctness of memory operations. We describe a new approach for memory checking, which we call the method of one-hot addressing and increments. We instantiate this method via two different families of protocols, called Twist and Shout. Twist supports read/write memories, while Shout targets read-only memories (also known as lookup arguments). Both Shout and Twist have logarithmic verifier costs. Unlike prior works, these protocols do not invoke "grand product" or "grand sum" arguments. Twist and Shout significantly improve the prover costs of prior works across the full range of realistic memory sizes, from tiny memories (e.g., 32 registers as in RISC-V), to memories that are so large they cannot be explicitly materialized (e.g., structured lookup tables of size $2^{64}$ or larger, which arise in Lasso and the Jolt zkVM). Detailed cost analysis shows that Twist and Shout are well over 10x cheaper for the prover than state-of-the-art memory-checking procedures configured to have logarithmic proof length. Prior memory-checking procedures can also be configured to have larger proofs. Even then, we estimate that Twist and Shout are at least 2--4x faster for the prover in key applications. Finally, using Shout, we provide two fast-prover SNARKs for non-uniform constraint systems, both of which achieve minimal commitment costs (the prover commits only to the witness): (1) SpeedySpartan applies to Plonkish constraints, substantially improving the previous state-of-the-art protocol, BabySpartan; and (2) Spartan++ applies to CCS (a generalization of Plonkish and R1CS), improving prover times over the previous state-of-the-art protocol, Spartan, by 6x.
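As a concrete illustration of the addressing idea (a toy sketch of ours, not the Twist/Shout protocols themselves), a read can be phrased as an inner product with a one-hot address vector, and a write as a sparse additive increment:

```python
# Toy illustration (not the Twist/Shout protocol): a memory read as an
# inner product with a one-hot address vector, and a write as adding an
# "increment" at a one-hot position.

def one_hot(addr: int, size: int) -> list:
    """Unit vector with a single 1 at position `addr`."""
    return [1 if i == addr else 0 for i in range(size)]

def read(memory: list, addr: int) -> int:
    # <one_hot(addr), memory> selects memory[addr].
    e = one_hot(addr, len(memory))
    return sum(e_i * m_i for e_i, m_i in zip(e, memory))

def write(memory: list, addr: int, new_value: int) -> None:
    # A write is modeled as adding an increment (new - old) at a
    # one-hot position, so memory evolves by sparse additive updates.
    inc = new_value - read(memory, addr)
    e = one_hot(addr, len(memory))
    for i in range(len(memory)):
        memory[i] += e[i] * inc

mem = [0] * 32          # e.g., a 32-register file as in RISC-V
write(mem, 5, 42)
assert read(mem, 5) == 42
```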
Last updated:  2025-01-25
Tighter Security for Schnorr Identification and Signatures: A High-Moment Forking Lemma for $\Sigma$-Protocols
Lior Rotem and Gil Segev
The Schnorr identification and signature schemes have been amongst the most influential cryptographic protocols of the past three decades. Unfortunately, although the best-known attacks on these two schemes are via discrete-logarithm computation, the known approaches for basing their security on the hardness of the discrete logarithm problem encounter the ``square-root barrier''. In particular, in any group of order $p$ where Shoup's generic hardness result for the discrete logarithm problem is believed to hold (and is thus used for setting concrete security parameters), the best-known $t$-time attacks on the Schnorr identification and signature schemes have success probability $t^2/p$, whereas existing proofs of security only rule out attacks with success probabilities $(t^2/p)^{1/2}$ and $(q_{\mathcal{H}} \cdot t^2/p)^{1/2}$, respectively, where $q_{\mathcal{H}}$ denotes the number of random-oracle queries issued by the attacker. We establish tighter security guarantees for identification and signature schemes which result from $\Sigma$-protocols with special soundness based on the hardness of their underlying relation, and in particular for Schnorr's schemes based on the hardness of the discrete logarithm problem. We circumvent the square-root barrier by introducing a high-moment generalization of the classic forking lemma, relying on the assumption that the underlying relation is ``$d$-moment hard'': The success probability of any algorithm in the task of producing a witness for a random instance is dominated by the $d$-th moment of the algorithm's running time. In the concrete context of the discrete logarithm problem, already Shoup's original proof shows that the discrete logarithm problem is $2$-moment hard in the generic-group model, and thus our assumption can be viewed as a highly-plausible strengthening of the discrete logarithm assumption in any group where no better-than-generic algorithms are currently known. Applying our high-moment forking lemma in this context shows that, assuming the $2$-moment hardness of the discrete logarithm problem, any $t$-time attacker breaks the security of the Schnorr identification and signature schemes with probabilities at most $(t^2/p)^{2/3}$ and $(q_{\mathcal{H}} \cdot t^2/p)^{2/3}$, respectively.
Last updated:  2025-01-25
VECTIS: Efficient Batching Framework for Group-based CP-SNARKs
Byeongjun Jang, Gweonho Jeong, Hyuktae Kwon, Hyunok Oh, and Jihye Kim
Blockchain applications in finance and identity management increasingly require scalable and privacy-preserving solutions. Cryptographic commitments secure sensitive data on-chain, but verifying properties of these commitments efficiently remains challenging, particularly in large-scale scenarios. For multiple commitments, CP-SNARKs, a family of zk-SNARKs, enhance prover efficiency by shifting large-cost operations outside the circuit and verifying linkages between commitments, but incur verifier-side overhead due to linkage checks. Verification costs grow with the number of commitments, leading to inefficiencies in key size, proof size, and verification time. We propose $\textbf{VECTIS}$, an efficient batching framework for proving multiple commitments. Our approach aggregates multiple commitments into a single batched commitment, enabling the linking proof system to operate on the aggregated commitment instead of individual commitments, thereby significantly reducing the overall verification cost. Experimental results show meaningful efficiency gains. For $2^{16}$ commitments, $\textbf{VECTIS}$ reduces the verification time to $0.064$s, achieving over $30\times$ improvement compared to LegoSNARK’s $1.972$s. These results show $\textbf{VECTIS}$’s potential for enabling scalable and efficient privacy-preserving solutions in blockchain applications.
Last updated:  2025-01-25
One-shot Signatures and Applications to Hybrid Quantum/Classical Authentication
Ryan Amos, Marios Georgiou, Aggelos Kiayias, and Mark Zhandry
We define the notion of one-shot signatures, which are signatures where any secret key can be used to sign only a single message, and then self-destructs. While such signatures are of course impossible classically, we construct one-shot signatures using quantum no-cloning. In particular, we show that such signatures exist relative to a classical oracle, which we can then heuristically obfuscate using known indistinguishability obfuscation schemes. We show that one-shot signatures have numerous applications for hybrid quantum/classical cryptographic tasks, where all communication is required to be classical, but local quantum operations are allowed. Applications include one-time signature tokens, quantum money with classical communication, decentralized blockchain-less cryptocurrency, signature schemes with unclonable secret keys, non-interactive certifiable min-entropy, and more. We thus position one-shot signatures as a powerful new building block for novel quantum cryptographic protocols.
Last updated:  2025-01-24
Constrained Pseudorandom Functions for Inner-Product Predicates from Weaker Assumptions
Sacha Servan-Schreiber
In this paper, we provide a novel framework for constructing constrained pseudorandom functions (CPRFs) with inner-product constraint predicates, using ideas from subtractive secret sharing and related-key-attack security. Our framework can be instantiated using a random oracle or any suitable related-key-attack (RKA) secure pseudorandom function. This results in three new CPRF constructions: 1. an adaptively-secure construction in the random oracle model; 2. a selectively-secure construction under the DDH assumption; and 3. a selectively-secure construction with a polynomial domain under the assumption that one-way functions exist. All three instantiations are constraint-hiding and support inner-product predicates, leading to the first constructions of such expressive CPRFs under each corresponding assumption. Moreover, while the OWF-based construction is primarily of theoretical interest, the random oracle and DDH-based constructions are concretely efficient, which we show via an implementation.
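The algebraic core of inner-product constrained keys can be illustrated with a toy sketch (ours, with a hash standing in for the RKA-secure PRF or random oracle; not the paper's exact construction): shifting the key by a multiple of the constraint vector $z$ cancels exactly on inputs $x$ with $\langle x, z \rangle = 0$.

```python
# Toy sketch: the constrained key is k + alpha*z, and the shift cancels
# on inputs x with <x, z> = 0 (mod p), so evaluations agree there.
import hashlib, secrets

p = 2**61 - 1  # a prime modulus (illustrative choice)

def inner(u, v):
    return sum(a * b for a, b in zip(u, v)) % p

def H(key_scalar: int, x: tuple) -> bytes:
    # Random-oracle stand-in for an RKA-secure PRF keyed by key_scalar.
    return hashlib.sha256(key_scalar.to_bytes(8, "big") + repr(x).encode()).digest()

n = 4
k = [secrets.randbelow(p) for _ in range(n)]             # master key
z = (1, 2, 3, 4)                                          # constraint vector
alpha = secrets.randbelow(p - 1) + 1                      # non-zero shift
k_con = [(ki + alpha * zi) % p for ki, zi in zip(k, z)]   # constrained key

x_ok  = (2, p - 1, 0, 0)   # <x_ok, z> = 2 + 2(p-1) = 2p = 0 mod p
x_bad = (1, 0, 0, 0)       # <x_bad, z> = 1 != 0

assert H(inner(k, x_ok), x_ok) == H(inner(k_con, x_ok), x_ok)    # agree
assert H(inner(k, x_bad), x_bad) != H(inner(k_con, x_bad), x_bad)  # differ
```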
Last updated:  2025-01-24
QuietOT: Lightweight Oblivious Transfer with a Public-Key Setup
Geoffroy Couteau, Lalita Devadas, Srinivas Devadas, Alexander Koch, and Sacha Servan-Schreiber
Oblivious Transfer (OT) is at the heart of secure computation and is a foundation for many applications in cryptography. Over two decades of work have led to extremely efficient protocols for evaluating OT instances in the preprocessing model, through a paradigm called OT extension. A few OT instances generated in an offline phase can be used to perform many OTs in an online phase efficiently, i.e., with very low communication and computational overheads. Specifically, traditional OT extension protocols use a small number of “base” OTs, generated using any black-box OT protocol, and convert them into many OT instances using only lightweight symmetric-key primitives. Recently, a new paradigm of OT with a *public-key setup* has emerged, which replaces the base OTs with a non-interactive setup: Using only the public key of the other party, two parties can efficiently compute a virtually unbounded number of OT instances on-the-fly. In this paper, we put forth a novel framework for OT extension with a public-key setup and concretely efficient instantiations. An implementation of our framework is 30-100 times faster when compared to the previous state-of-the-art public-key OT protocols, and remains competitive even when compared to OT protocols that *do not* offer a public-key setup. Additionally, our instantiations result in the first public-key schemes with plausible post-quantum security. In summary, this paper contributes: - QuietOT: A framework for OT extension with a public-key setup that uses fast, symmetric-key primitives to generate OT instances following a one-time public-key setup, and offering additional features such as precomputability. - A public-key setup for QuietOT from the RingLWE assumption, resulting in the first post-quantum construction of OT extension with a public-key setup. - An optimized, open-source implementation of our construction that can generate up to 1M OT extensions per second on commodity hardware. In contrast, the state-of-the-art public-key OT protocol is limited to approximately 20K OTs per second. - The first formal treatment of the security of OT with a public-key setup in a multi-party setting, which addresses several subtleties that were overlooked in prior work.
Last updated:  2025-01-24
Application of Mordell-Weil lattices with large kissing numbers to acceleration of multi-scalar multiplication on elliptic curves
Dmitrii Koshelev
This article aims to speed up (the precomputation stage of) multi-scalar multiplication (MSM) on ordinary elliptic curves of $j$-invariant $0$ with respect to specific ``independent'' (a.k.a. ``basis'') points. For this purpose, so-called Mordell--Weil lattices (up to rank $8$) with large kissing numbers (up to $240$) are employed. In a nutshell, the new approach consists in obtaining more efficiently a considerable number (up to $240$) of certain elementary linear combinations of the ``independent'' points. By scaling the point (re)generation process, it is thus possible to get a significant performance gain. As usual, the resulting curve points can be then regularly used in the main stage of an MSM algorithm to avoid repeating computations. Seemingly, this is the first usage of lattices with large kissing numbers in cryptography, while such lattices have already found numerous applications in other mathematical domains. Without exaggeration, MSM is a widespread primitive (often the unique bottleneck) in modern protocols of real-world elliptic curve cryptography. Moreover, the new (re)generation technique is prone to further improvements by considering Mordell--Weil lattices with even greater kissing numbers.
Last updated:  2025-01-24
A Compact and Parallel Swap-Based Shuffler Based on the Butterfly Network and Its Complexity against Side-Channel Analysis
Jong-Yeon Park, Wonil Lee, Bo Gyeong Kang, Il-jong Song, Jaekeun Oh, and Kouichi Sakurai
A prominent countermeasure against side-channel attacks, the hiding countermeasure, typically involves shuffling operations using a permutation algorithm. Especially in the era of post-quantum cryptography, the importance of the hiding countermeasure is emphasized due to computational characteristics like those of lattice- and code-based cryptography. In this context, swiftly and securely generating permutations has a critical impact on an algorithm's security and efficiency. The Fisher-Yates shuffle is widely adopted because of its high security and ease of implementation, but it is inherently sequential and has complexity $O(N)$. In response, we propose a time-area trade-off swap algorithm, FSS, based on the butterfly network, with only $\log(N)$ depth, $\log(N)$ work, and $O(1)$ operation time in parallel. We calculate the maximum gain that an attacker can achieve through butterfly operations with only $\log(N)$ depth from a side-channel-analysis perspective. In particular, we show that it is possible to derive a generalized formula for the complexity of higher-order side-channel attacks for arbitrary input sizes through the fractal structure of the butterfly network. Furthermore, our research highlights the possibility of generating efficient and secure permutations utilizing a minimal amount of randomness.
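The following toy sketch (ours, not the paper's FSS algorithm) shows the butterfly-network swap structure with $\log(N)$ layers of conditional swaps; note that a single butterfly pass cannot reach all $N!$ permutations, which is precisely why the attack-complexity analysis matters.

```python
# Toy butterfly-network shuffler: log2(N) layers; in each layer,
# positions whose indices differ in one bit are conditionally swapped.
import secrets

def butterfly_shuffle(items: list) -> list:
    n = len(items)
    assert n & (n - 1) == 0, "N must be a power of two"
    out = list(items)
    bit = 1
    while bit < n:                    # log2(N) layers
        for i in range(n):
            j = i ^ bit               # partner index differs in one bit
            if i < j and secrets.randbits(1):
                out[i], out[j] = out[j], out[i]
        bit <<= 1
    return out

print(butterfly_shuffle(list(range(8))))
```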
Last updated:  2025-01-24
A Formal Analysis of Apple’s iMessage PQ3 Protocol
Felix Linker, Ralf Sasse, and David Basin
We present the formal verification of Apple’s iMessage PQ3, a highly performant, device-to-device messaging protocol offering strong security guarantees even against an adversary with quantum computing capabilities. PQ3 leverages Apple’s identity services together with a custom, post-quantum secure initialization phase and afterwards it employs a double ratchet construction in the style of Signal, extended to provide post-quantum, post-compromise security. We present a detailed formal model of PQ3, a precise specification of its fine-grained security properties, and machine-checked security proofs using the TAMARIN prover. Particularly novel is the integration of post-quantum secure key encapsulation into the relevant protocol phases and the detailed security claims along with their complete formal analysis. Our analysis covers both key ratchets, including unbounded loops, which was believed by some to be out of scope of symbolic provers like TAMARIN (it is not!).
Last updated:  2025-01-24
Signatures with Tight Adaptive Corruptions from Search Assumptions
Keitaro Hashimoto, Wakaha Ogata, and Yusuke Sakai
We construct the first tightly secure signature schemes in the multi-user setting with adaptive corruptions from the classical discrete logarithm, RSA, factoring, or post-quantum group action discrete logarithm assumptions. In contrast to our schemes, previous tightly secure schemes are based on decisional assumptions (e.g., (group action) DDH) or interactive search assumptions (e.g., one-more CDH). The security of our schemes is independent of the number of users, signing queries, and random oracle (RO) queries, and forging our signatures is as hard as solving the underlying search problem. Our starting point is an identification scheme with multiple secret keys per public key (e.g., Okamoto identification (CRYPTO'92) and parallel-OR identification (CRYPTO'94)). This property allows a reduction to solve a search problem while answering corruption queries for all users in the signature security game. To convert such an identification scheme into a signature scheme tightly, we employ the randomized Fischlin transformation introduced by Kondi and shelat (Asiacrypt 2022), which provides straight-line extraction. Intuitively, the properties of this transformation guarantee the tight security of our signature scheme in the programmable random oracle model, but we successfully prove its tight security in the non-programmable random oracle model.
Last updated:  2025-01-24
Delegated Multi-party Private Set Intersection from Secret Sharing
Jingwei Hu, Zhiqi Liu, and Cong Zuo
In this work, we address the problem of Delegated PSI (D-PSI), where a cloud server is introduced to handle most computational and communication tasks. D-PSI enables users to securely delegate their private sets to the cloud, ensuring the privacy of their data while allowing efficient computation of the intersection. The cloud operates under strict security requirements, learning nothing about the individual sets or the intersection result. Moreover, D-PSI minimizes user-to-user communication and supports "silent" processing, where the cloud can perform computations independently without user interaction, apart from set delegation and result retrieval. We formally define the D-PSI problem and propose a novel construction that extends beyond two-party scenarios to support multi-party settings. Our construction adheres to the D-PSI requirements, including security against semi-honest adversaries, and achieves computational and communication complexities close to the ideal "perfect" D-PSI protocol. Additionally, we demonstrate the practicality of our approach through a baseline implementation and an optimized version that further reduces computational overhead. Our results establish a strong foundation for secure and efficient PSI in real-world cloud computing scenarios.
Last updated:  2025-01-24
NTRU+Sign: Compact NTRU-Based Signatures Using Bimodal Distributions
Joo Woo, Jonghyun Kim, Ga Hee Hong, Seungwoo Lee, Minkyu Kim, Hochang Lee, and Jong Hwan Park
We present a new lattice-based signature scheme, called ‘NTRU+Sign’, using the Fiat-Shamir with Aborts framework. The proposed scheme is designed based on a novel NTRU-based key structure that fits well with bimodal distributions, enabling efficiency improvements compared to its predecessor, BLISS. The novel NTRU-based key structure is characterized by: (1) effectively changing a modulus from 2q to q, which is different from the existing usage of 2q for bimodal distributions, and (2) drastically reducing the magnitude of a secret key, which directly leads to compactness of signature sizes. We provide two concrete parameter sets for NTRU+Sign, supporting 93-bit and 211-bit security levels. Using the technique from GALACTICS (that was suggested as the constant-time implementation of BLISS), our analysis shows that NTRU+Sign achieves a good balance between computational efficiency and signature compactness, with constant-time implementation. For instance, at the NIST-3 security level, NTRU+Sign produces signatures that are significantly smaller than Dilithium and HAETAE, while providing faster verification speeds. These advantages position NTRU+Sign as a competitive and practical solution for real-world deployments.
Last updated:  2025-01-24
GCKSign: Simple and Efficient Signatures from Generalized Compact Knapsacks
Joo Woo, Kwangsu Lee, and Jong Hwan Park
In 2009, Lyubashevsky proposed a lattice-based signature scheme by applying the Fiat-Shamir transformation and proved its security under the generalized compact knapsack (GCK) problem. This scheme has a simple structure but large signature and key sizes, due to the requirements of its security reduction. Dilithium, which was submitted to the NIST Post-Quantum Cryptography standardization and selected as one of the final candidates, is an improvement of Lyubashevsky's signature scheme that decreases key and signature sizes by modifying the form of the public key and including additional steps in the key generation, signing, and verification algorithms. Thus, Dilithium has a more complex structure to implement compared to Lyubashevsky's scheme. To combine the strengths of both signature schemes, we modify Lyubashevsky's signature scheme and present a new security proof that removes this security requirement. As a result, we propose a simple and practical GCKSign signature scheme based on the hardness of a new GCK assumption, called target-modified one-wayness of the GCK function. For NIST security level III, the signature size of our scheme decreases by 40 percent, the sum of signature and public key sizes decreases by 25 percent, and the secret key size decreases by 90 percent, compared to Dilithium. Furthermore, owing to the simplicity of our structure, the key generation, signing, and verification algorithms of our scheme run 2.4$\times$, 1.7$\times$, and 2.0$\times$ faster than those of Dilithium, respectively.
Last updated:  2025-01-24
Comprehensive Robustness Analysis of GCM, CCM, and OCB3
Akiko Inoue, Tetsu Iwata, and Kazuhiko Minematsu
Clarifying the robustness of authenticated encryption (AE) schemes, such as security under nonce misuse or Release of Unverified Plaintext (RUP), is critically important due to the extensive use of AEs in real-world applications. We present a comprehensive analysis of the robustness of well-known standards, namely GCM, CCM, and OCB3. Despite many existing studies, we uncovered several robustness properties for them that were not known in the literature. In particular, we show that both GCM and CCM maintain authenticity under RUP. Moreover, CCM keeps this feature even if a nonce is misused. Together with existing analysis, our work gives a complete picture of the robustness of these standards for the first time. Our results also imply several new robust AE schemes based on GCM and CCM.
Last updated:  2025-01-23
EQSIGN: Practical Digital Signatures from the Non-Abelian Hidden Subgroup Problem and Information Theoretic Equivocation
Samuel Lavery
We present a novel digital signature scheme grounded in non-commutative cryptography and implemented over a bilinear matrix group platform. At the core of our design is a unique equivocation function that obfuscates intermediate elements, effectively concealing outputs and minimizing observable information leakage. To the best of our knowledge, this is the first digital signature scheme to combine information-theoretic security with computational hardness, relying on a challenging instance of the Non-Abelian Hidden Subgroup Problem (NAHSP) and strengthened by practical guarantees. This dual-layered security approach ensures robustness against both classical and quantum adversaries while maintaining communication overheads competitive with RSA. Our work represents a significant advancement toward efficient, quantum-resilient digital signatures for real-world applications. This paper is an early pre-release intended to invite collaboration and feedback. The work is presented for research purposes only and is not intended for use in production systems.
Last updated:  2025-01-23
Discrete Gaussian sampling for BKZ-reduced basis
Amaury Pouly and Yixin Shen
Discrete Gaussian sampling on lattices is a fundamental problem in lattice-based cryptography. In this paper, we revisit the Markov chain Monte Carlo (MCMC)-based Metropolis-Hastings-Klein (MHK) algorithm proposed by Wang and Ling and study its complexity under the Geometric Series Assumption (GSA) when the given basis is BKZ-reduced. We give experimental evidence that the GSA is accurate in this context, and we give a very simple approximate formula for the complexity of the sampler that is accurate over a large range of parameters and easily computable. We apply our results to the dual attack on LWE of [Pouly and Shen 2024] and significantly improve the complexity estimates of the attack. Finally, we provide some results of independent interest on the Gaussian mass of random $q$-ary lattices.
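For intuition, the one-dimensional subroutine at the heart of Klein-style samplers such as MHK can be sketched as follows (illustrative rejection sampler with a crude tail cut; the paper's analysis concerns the full lattice sampler, not this toy):

```python
# Sample an integer x proportional to exp(-(x - c)^2 / (2*s^2)) by
# rejection from a bounded window around the center c.
import math, random

def sample_z(c: float, s: float, tau: float = 10.0) -> int:
    lo, hi = math.floor(c - tau * s), math.ceil(c + tau * s)
    while True:
        x = random.randint(lo, hi)
        rho = math.exp(-((x - c) ** 2) / (2 * s ** 2))
        if random.random() < rho:
            return x

print([sample_z(0.5, 3.0) for _ in range(5)])
```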
Last updated:  2025-01-23
Better Codes for the HQC Cryptosystem
Cyrius Nugier and Jean-Christophe Deneuville
In the HQC cryptosystem, the length $n$ of the code determines several concrete parameters such as the bandwidth usage, the memory consumption, and the decoding efficiency. In this paper, we show that currently known methods to explicitly generate asymptotically good binary codes (especially ones with high relative distance) with efficient associated procedures cannot be used to improve $n$. We also show that concatenated codes are currently better suited, and by exhaustive search over small codes, we find a closer-to-optimal concatenated code for HQC, which improves upon currently used codes.
Last updated:  2025-01-23
Verifiable Oblivious Pseudorandom Functions from Lattices: Practical-ish and Thresholdisable
Martin R. Albrecht and Kamil Doruk Gur
We revisit the lattice-based verifiable oblivious PRF construction from PKC'21 and remove or mitigate its central three sources of inefficiency. First, applying Rényi divergence arguments, we eliminate one superpolynomial factor from the ciphertext modulus \(q\), allowing us to reduce the overall bandwidth consumed by RLWE samples by about a factor of four. This necessitates introducing intermediate unpredictability notions to argue PRF security of the final output in the random oracle model. Second, we remove the reliance on the \(\mathsf{1D-SIS}\) assumption, which reduces another superpolynomial factor, albeit to a factor that is still superpolynomial. Third, by applying the state of the art in zero-knowledge proofs for lattice statements, we achieve a reduction in bandwidth of several orders of magnitude for this material. Finally, we give a \(t\)-out-of-\(n\) threshold variant of the VOPRF for constant \(t\) and with trusted setup, based on an \(n\)-out-of-\(n\) distributed variant of the VOPRF (and without trusted setup).
Last updated:  2025-01-23
How To Simulate It - A Tutorial on the Simulation Proof Technique
Yehuda Lindell
One of the most fundamental notions of cryptography is that of \emph{simulation}. It stands behind the concepts of semantic security, zero knowledge, and security for multiparty computation. However, writing a simulator and proving security via the use of simulation is a non-trivial task, and one that many newcomers to the field often find difficult. In this tutorial, we provide a guide to how to write simulators and prove security via the simulation paradigm. Although we have tried to make this tutorial as stand-alone as possible, we assume some familiarity with the notions of secure encryption, zero-knowledge, and secure computation.
Last updated:  2025-01-23
Post-Quantum Threshold Ring Signature Applications from VOLE-in-the-Head
James Hsin-Yu Chiang, Ivan Damgård, William R. Duro, Sunniva Engan, Sebastian Kolby, and Peter Scholl
We propose efficient, post-quantum threshold ring signatures constructed from one-wayness of AES encryption and the VOLE-in-the-Head zero-knowledge proof system. Our scheme scales efficiently to large rings and extends the linkable ring signatures paradigm. We define and construct key-binding deterministic tags for signature linkability that also enable succinct aggregation with approximate lower bound arguments of knowledge; this allows us to achieve succinct aggregation of our signatures without SNARKs. Finally, we extend our threshold ring signatures to realize post-quantum anonymous ledger transactions in the spirit of Monero. Our constructions assume symmetric key primitives only. Whilst it is common to build post-quantum signatures from the one-wayness property of AES and a post-quantum NIZK scheme, we extend this paradigm to define and construct novel security properties from AES that are useful for advanced signature applications. We introduce key-binding and pseudorandomness of AES to establish linkability and anonymity of our threshold ring signatures from deterministic tags, and similarly establish binding and hiding properties of block ciphers modeled as ideal permutations to build commitments from AES, a crucial building block for our proposed post-quantum anonymous ledger scheme.
Last updated:  2025-01-23
Masking Gaussian Elimination at Arbitrary Order, with Application to Multivariate- and Code-Based PQC
Quinten Norga, Suparna Kundu, Uttam Kumar Ojha, Anindya Ganguly, Angshuman Karmakar, and Ingrid Verbauwhede
Digital signature schemes based on multivariate- and code-based hard problems are promising alternatives to lattice-based signature schemes, due to their small signature size. Gaussian Elimination (GE) is a critical operation in the signing procedure of these schemes. In this paper, we provide a masking scheme for GE with back substitution to defend against first- and higher-order attacks. To the best of our knowledge, this work is the first to analyze and propose masking techniques for multivariate- or code-based digital signature algorithms. We propose a masked algorithm for transforming a system of linear equations into row-echelon form. This is realized by introducing techniques for efficiently making leading (pivot) elements one while avoiding costly conversions between Boolean and multiplicative masking at all orders. We also propose a technique for efficient masked back substitution, which eventually enables a secure unmasking of the public output. All novel gadgets are proven secure in the $t$-probing model. Additionally, we evaluate the overhead of our countermeasure for several post-quantum candidates and their different security levels at first-, second-, and third-order, including UOV, MAYO, SNOVA, QR-UOV, and MQ-Sign. Notably, the operational cost of first-, second-, and third-order masked GE is 2.3$\times$ higher, and the randomness cost is 1.2$\times$ higher, in MAYO compared to UOV for security levels III and V. In contrast, these costs are similar in UOV and MAYO for one version of level I. We also show detailed performance results for masked GE implementations for all three security versions of UOV on the Arm Cortex-M4 and compare them with unmasked results. Our masked implementation targeting UOV parameters has an overhead factor of 15.1$\times$, 15.2$\times$, and 15.4$\times$ compared to the unprotected implementation for NIST security levels I, III, and V.
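For reference, the unprotected operation being masked is plain Gaussian elimination with back substitution over a finite field; in a masked implementation every intermediate value below would be split into $t+1$ shares and the pivot inversion performed on shares (baseline sketch ours):

```python
# Unprotected Gaussian elimination with back substitution over GF(p).
# A masked version must additionally hide pivot positions and values.

def solve_gf_p(A, b, p):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        # Find a non-zero pivot (a masked version must hide which row).
        piv = next(r for r in range(col, n) if M[r][col] % p != 0)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, p)             # make the pivot one
        M[col] = [x * inv % p for x in M[col]]
        for r in range(col + 1, n):               # eliminate below
            f = M[r][col]
            M[r] = [(x - f * y) % p for x, y in zip(M[r], M[col])]
    x = [0] * n                                    # back substitution
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) % p
    return x

p = 31
A = [[2, 1, 0], [1, 3, 4], [0, 5, 6]]
b = [5, 6, 7]
x = solve_gf_p(A, b, p)
assert all(sum(Ai * xi for Ai, xi in zip(row, x)) % p == bi
           for row, bi in zip(A, b))
```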
Last updated:  2025-01-23
Onion Franking: Abuse Reports for Mix-Based Private Messaging
Matthew Gregoire, Margaret Pierce, and Saba Eskandarian
The fast-paced development and deployment of private messaging applications demands mechanisms to protect against the concomitant potential for abuse. While widely used end-to-end encrypted (E2EE) messaging systems have deployed mechanisms for users to verifiably report abusive messages without compromising the privacy of unreported messages, abuse reporting schemes for systems that additionally protect message metadata are still in their infancy. Existing solutions either focus on a relatively small portion of the design space or incur much higher communication and computation costs than their E2EE brethren. This paper introduces new abuse reporting mechanisms that work for any private messaging system based on onion encryption. This includes low-latency systems that employ heuristic or opportunistic mixing of user traffic, as well as schemes based on mixnets. Along the way, we show that design decisions and abstractions that are well-suited to the E2EE setting may actually impede security and performance improvements in the metadata-hiding setting. We also explore stronger threat models for abuse reporting and moderation not explored in prior work, showing where prior work falls short and how to strengthen both our scheme and others' -- including deployed E2EE messaging platforms -- to achieve higher levels of security. We implement a prototype of our scheme and find that it outperforms the best known solutions in this setting by well over an order of magnitude for each step of the message delivery and reporting process, with overheads almost matching those of message franking techniques used by E2EE encrypted messaging apps today.
Last updated:  2025-01-23
Rudraksh: A compact and lightweight post-quantum key-encapsulation mechanism
Suparna Kundu, Archisman Ghosh, Angshuman Karmakar, Shreyas Sen, and Ingrid Verbauwhede
Resource-constrained devices such as wireless sensors and Internet of Things (IoT) devices have become ubiquitous in our digital ecosystem. These devices generate and handle a major part of our digital data. However, due to the impending threat of quantum computers to our existing public-key cryptographic schemes and the limited resources available on IoT devices, it is important to design lightweight post-quantum cryptographic (PQC) schemes suitable for these devices. In this work, we explored the design space of learning-with-errors-based PQC schemes to design a lightweight key-encapsulation mechanism (KEM) suitable for resource-constrained devices. We have done a scrupulous and extensive analysis and evaluation of different design elements, such as polynomial size, field modulus structure, reduction algorithm, and secret and error distribution of an LWE-based KEM. Our explorations led to the proposal of a lightweight PQC-KEM, Rudraksh, without compromising security. Our scheme provides security against chosen-ciphertext attacks (CCA) with more than 100 bits of Core-SVP post-quantum security and belongs to the NIST level-I security category (providing at least as much security as AES-128). We have also shown how ASCON can be used for lightweight pseudo-random number generation and hashing in lattice-based KEMs instead of the widely used Keccak for lightweight design. Our FPGA results show that Rudraksh currently requires the least area among the PQC KEMs of similar security. Our implementation of Rudraksh provides a $\sim3\times$ improvement in terms of area requirement compared to the state-of-the-art area-optimized implementation of Kyber, can operate at $63\%$-$76\%$ higher frequency with respect to high-throughput Kyber, and improves the time-area product $\sim2\times$ compared to the state-of-the-art compact implementation of Kyber published in HPEC 2022.
Last updated:  2025-01-23
Public-Key Quantum Money From Standard Assumptions (In The Generic Model)
Jake Doliskani
Our main result is a quantum polynomial-time reduction from the group action discrete logarithm (DLP) problem to a specific cloning problem. A consequence of this result is that the public-key quantum money scheme proposed by Zhandry (2024), based on abelian group actions, is secure in the generic group action model. Specifically, our result shows that breaking the quantum money scheme is equivalent, under quantum polynomial-time reductions, to solving the group action DLP. Two immediate implications of our results are: i) A separation between quantum money and quantum lightning. This separation arises because our reduction is non-uniform, and quantum lightning is not secure against non-uniform adversaries. ii) Cloning vs. preparing Fourier states. Our main theorem shows that the problem of cloning group action Fourier states is equivalent to the problem of preparing these states.
Last updated:  2025-01-23
Post-Quantum Stealth Address Protocols
Marija Mikić, Mihajlo Srbakoski, and Strahinja Praška
The Stealth Address Protocol (SAP) allows users to receive assets through stealth addresses that are unlinkable to their stealth meta-addresses. The most widely used SAP, Dual-Key SAP (DKSAP), and the most performant SAP, Elliptic Curve Pairing Dual-Key SAP (ECPDKSAP), are based on elliptic curve cryptography, which is vulnerable to quantum attacks. These protocols depend on the elliptic curve discrete logarithm problem, which could be efficiently solved on a sufficiently powerful quantum computer using Shor's algorithm. In this paper, three novel post-quantum SAPs based on lattice-based cryptography are presented: LWE SAP, Ring-LWE SAP and Module-LWE SAP. These protocols leverage the Learning With Errors (LWE) problem to ensure quantum-resistant privacy. Among them, Module-LWE SAP, which is based on the Kyber key encapsulation mechanism, achieves the best performance and outperforms ECPDKSAP by approximately 66.8% in the scan time of the ephemeral public key registry.
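Generically, a KEM-based SAP replaces the Diffie-Hellman step of DKSAP with encapsulation to a viewing key (sketch ours, with an insecure stub KEM that only exercises the data flow; the paper's Module-LWE SAP instantiates this with a Kyber-style scheme and differs in detail):

```python
import hashlib, secrets

class ToyKEM:
    """Insecure stand-in exposing the usual keygen/encaps/decaps interface."""
    def keygen(self):
        sk = secrets.token_bytes(32)
        pk = hashlib.sha256(sk).digest()
        return pk, sk
    def encaps(self, pk):
        r = secrets.token_bytes(32)
        ss = hashlib.sha256(pk + r).digest()   # shared secret
        return r, ss                            # toy ciphertext is r itself
    def decaps(self, sk, ct):
        pk = hashlib.sha256(sk).digest()
        return hashlib.sha256(pk + ct).digest()

kem = ToyKEM()

# Recipient's stealth meta-address: a viewing key pair plus a spend key.
view_pk, view_sk = kem.keygen()
spend_pk = secrets.token_bytes(32)              # placeholder spend key

# Sender: encapsulate to the viewing key and derive a one-time address.
ct, ss = kem.encaps(view_pk)
stealth_addr = hashlib.sha256(ss + spend_pk).digest()

# Recipient: scan the ephemeral public key registry by re-deriving the
# shared secret from each ciphertext and testing address ownership.
assert hashlib.sha256(kem.decaps(view_sk, ct) + spend_pk).digest() == stealth_addr
```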
Last updated:  2025-01-23
Adversary Resilient Learned Bloom Filters
Allison Bishop and Hayder Tirmazi
The Learned Bloom Filter is a recently proposed data structure that combines the Bloom Filter with a learning model while preserving the Bloom Filter's one-sided error guarantees. Creating an adversary-resilient construction of the Learned Bloom Filter with provable guarantees is an open problem. We define a strong adversarial model for the Learned Bloom Filter. Our adversarial model extends an existing adversarial model designed by prior work for the Classical (i.e., not ``Learned'') Bloom Filter, and considers computationally bounded adversaries that run in probabilistic polynomial time (PPT). Using our model, we construct an adversary-resilient variant of the Learned Bloom Filter called the Downtown Bodega Filter. We show that if pseudo-random permutations exist, then an adversary-resilient Learned Bloom Filter may be constructed with $2\lambda$ extra bits of memory and at most one extra pseudo-random permutation in the critical path. We construct a hybrid adversarial model for the case where a fraction of the query workload is chosen by an adversary. We show realistic scenarios where using the Downtown Bodega Filter gives better performance guarantees compared to alternative approaches in this hybrid model.
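For context, the baseline (non-adversarial) Learned Bloom Filter that the paper hardens routes each query through a learned score and falls back to a backup Bloom filter for the positives the model misses, preserving the one-sided error guarantee (toy sketch ours; the model and parameters are illustrative stand-ins):

```python
import hashlib

class BloomFilter:
    def __init__(self, m: int, k: int):
        self.m, self.k, self.bits = m, k, 0
    def _idx(self, item: str, i: int) -> int:
        h = hashlib.sha256(f"{i}:{item}".encode()).digest()
        return int.from_bytes(h[:8], "big") % self.m
    def add(self, item: str):
        for i in range(self.k):
            self.bits |= 1 << self._idx(item, i)
    def query(self, item: str) -> bool:
        return all((self.bits >> self._idx(item, i)) & 1 for i in range(self.k))

def learned_score(item: str) -> float:
    # Stand-in for a trained model's score in [0, 1].
    return int(hashlib.md5(item.encode()).hexdigest(), 16) % 100 / 100

TAU = 0.9                                  # acceptance threshold
positives = ["alice", "bob", "carol"]
backup = BloomFilter(m=1024, k=4)
for x in positives:
    if learned_score(x) < TAU:             # model would reject: store in backup
        backup.add(x)

def lbf_query(item: str) -> bool:
    return learned_score(item) >= TAU or backup.query(item)

assert all(lbf_query(x) for x in positives)   # no false negatives
```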
Last updated:  2025-01-23
On the structure of the Schur squares of Twisted Generalized Reed-Solomon codes and application to cryptanalysis
Alain Couvreur, Rakhi Pratihar, Nihan Tanisali, and Ilaria Zappatore
Twisted generalized Reed-Solomon (TGRS) codes constitute an interesting family of evaluation codes, containing a large class of maximum distance separable (MDS) codes non-equivalent to generalized Reed-Solomon (GRS) ones. Moreover, the Schur squares of TGRS codes may be much larger than those of GRS codes with the same dimension. Exploiting these structural differences, in 2018, Beelen, Bossert, Puchinger and Rosenkilde proposed a subfamily of MDS twisted Reed-Solomon (TRS) codes over $\mathbb{F}_q$ with $\ell$ twists, where $q \approx n^{2^{\ell}}$, for McEliece encryption, claiming their resistance to both the Sidelnikov-Shestakov attack and Schur-product-based attacks. In short, they claimed these codes to resist the classical key-recovery attacks on the McEliece encryption scheme instantiated with Reed-Solomon (RS) or GRS codes. In 2020, Lavauzelle and Renner presented an original attack on this system based on the computation of the subfield subcode of the public TRS code. In this paper, we show that the original claim on the resistance of TRS and TGRS codes to Schur-product-based attacks is wrong. We identify a broad class of codes, including TRS and TGRS ones, that is distinguishable from random by computing the Schur square of some shortening of the code. Then, we focus on the case of a single twist (i.e., $\ell = 1$), which is the most efficient one in terms of decryption complexity, to derive an attack. The technique is similar to the distinguisher-based attacks on RS-code-based systems given by Couvreur, Gaborit, Gauthier-Umaña, Otmani, and Tillich in 2014.
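For context, the distinguisher rests on standard facts about the Schur (component-wise) product, stated here in our notation rather than the paper's: for codes $C, C' \subseteq \mathbb{F}_q^n$,
\[
  C \star C' = \left\langle (c_1 c'_1, \dots, c_n c'_n) \;:\; c \in C,\ c' \in C' \right\rangle_{\mathbb{F}_q},
\]
and GRS codes have abnormally small squares while random codes do not:
\[
  \dim \big(\mathrm{GRS}_k\big)^{\star 2} = 2k - 1 \quad (\text{for } 2k - 1 \le n),
  \qquad
  \dim \big(C_{\mathrm{rand}}\big)^{\star 2} = \min\!\Big(n, \tbinom{k+1}{2}\Big) \text{ with high probability.}
\]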
Last updated:  2025-01-23
Verification-efficient Homomorphic Signatures for Verifiable Computation over Data Streams
Gaspard Anthoine, Daniele Cozzo, and Dario Fiore
Homomorphic signatures for NP (HSNP) allow proving that a signed value is the result of a non-deterministic computation on signed inputs. At CCS'22, Fiore and Tucker introduced HSNP, showed how to use them for verifying arbitrary computations on data streams, and proposed a generic HSNP construction obtained by efficiently combining zkSNARKs with linearly homomorphic signatures (LHS), namely those supporting linear functions. Their proposed LHS, however, suffered from a high verification cost. In this work we propose an efficient LHS that significantly improves on previous work in terms of verification time. Using the modular approach of Fiore and Tucker, this yields a verifier-efficient HSNP. We show that the HSNP instantiated with our LHS is particularly suited to the case when the data is taken from consecutive samples, which captures important use cases including sliding-window statistics such as variances, histograms, and stock market predictions.
Last updated:  2025-01-23
A Formal Treatment of Homomorphic Encryption Based Outsourced Computation in the Universal Composability Framework
Wasilij Beskorovajnov, Sarai Eilebrecht, Yufan Jiang, and Jörn Mueller-Quade
The adoption of Homomorphic Encryption (HE) and Secure Function Evaluation (SFE) applications in the real world remains limited, even nearly 50 years after the introduction of HE. This is particularly unfortunate given the strong privacy and confidentiality guarantees these tools can offer to modern digital life. While attempting to incorporate a simple straw-man PSI protocol into a web service for matching individuals based on their profiles, we encountered several shortcomings in current outsourcing frameworks. Existing outsourced protocols either require clients to perform tasks beyond merely contributing their inputs or rely on a non-collusion assumption between a server and a client, which appears implausible in standard web service scenarios. To address these issues, we present, to the best of our knowledge, the first general construction for non-interactive outsourced computation based on black-box homomorphic encryption. This approach relies on a non-collusion assumption between two dedicated servers, which we consider more realistic in a web-service setting. Furthermore, we provide a proof of our construction within the Universal Composability (UC) framework, assuming semi-honest (i.e., passive) adversaries. Unlike general one-sided two-party SFE protocols, our construction additionally requires sender privacy. Specifically, the sender must contribute its inputs solely in encrypted form. This ensures stronger privacy guarantees and broadens the applicability of the protocol. Overall, the range of applications for our construction includes all one-sided two-party sender-private SFE protocols as well as server-based arithmetic computations on encrypted inputs. Finally, we demonstrate the practical applicability of our general outsourced computation framework by applying it to the specific use case of Outsourced Private Set Intersection (OPSI) in a real-world scenario, accompanied by a detailed evaluation of its efficiency.
Last updated:  2025-01-23
Subset sum, a new insight
Samir Bouftass
In this paper, we show that the subset sum problem consists in finding a solution over $\mathbb{N}_2$ of the equation $n = \mathbf{A}X \bullet U$, where $\mathbf{A}$ and $n$ are a given matrix and integer, and $U = [2^0\; 2^1\; \cdots\; 2^{d-2}\; 2^{d-1}]$. We show that it can be subdivided into two solvable subproblems.
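One natural reading of this equation (our gloss; the abstract does not spell it out): let the $i$-th column of $\mathbf{A} \in \{0,1\}^{d \times m}$ hold the binary digits of $a_i$, and let $X \in \{0,1\}^m$ select a subset. Then $\mathbf{A}X$ is the carry-free digit-wise sum, and dotting with $U$ reassembles the integer:
\[
  \mathbf{A}X \bullet U
    = \sum_{j=0}^{d-1} 2^j \sum_{i=1}^{m} A_{j,i}\, x_i
    = \sum_{i=1}^{m} x_i \sum_{j=0}^{d-1} A_{j,i}\, 2^j
    = \sum_{i=1}^{m} x_i\, a_i ,
\]
so $\mathbf{A}X \bullet U = n$ is exactly the subset sum condition.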
Last updated:  2025-01-23
dCTIDH: Fast & Deterministic CTIDH
Fabio Campos, Andreas Hellenbrand, Michael Meyer, and Krijn Reijnders
This paper presents dCTIDH, a CSIDH implementation that combines two recent developments into a novel state-of-the-art deterministic implementation. We combine the approach of deterministic variants of CSIDH with the batching strategy of CTIDH, which shows that the full potential of this key space has not yet been explored. This high-level adjustment in itself leads to a significant speed-up. To achieve an effective deterministic evaluation in constant time, we introduce Wombats, a new approach to performing isogenies in batches, specifically tailored to the behavior required for deterministic CSIDH using CTIDH batching. Furthermore, we explore the two-dimensional space of optimal primes for dCTIDH, with regard to both the performance of dCTIDH in terms of finite-field operations per prime and the efficiency of finite-field operations, determined by the prime shape, in terms of cycles. This allows us to optimize both for choice of prime and scheme parameters simultaneously. Lastly, we implement and benchmark constant-time, deterministic dCTIDH. Our results show that dCTIDH not only outperforms state-of-the-art deterministic CSIDH, but even non-deterministic CTIDH: dCTIDH-2048 is faster than CTIDH-2048 by 17 percent, and is almost five times faster than dCSIDH-2048.
Last updated:  2025-01-22
Additive Randomized Encodings from Public Key Encryption
Nir Bitansky, Saroja Erabelli, and Rachit Garg
Additive randomized encodings (ARE), introduced by Halevi, Ishai, Kushilevitz, and Rabin (CRYPTO 2023), reduce the computation of a $k$-party function $f(x_1,\dots,x_k)$ to locally computing encodings $\hat x_i$ of each input $x_i$ and then adding them together over some Abelian group into an output encoding $\hat y = \sum \hat x_i$, which reveals nothing but the result. The appeal of ARE comes from the simplicity of the non-local computation, which involves only addition. This gives rise, for instance, to non-interactive secure function evaluation in the shuffle model, where messages from different parties are anonymously shuffled before reaching their destination. Halevi, Ishai, Kushilevitz, and Rabin constructed ARE based on Diffie-Hellman-type assumptions in bilinear groups. We construct ARE assuming public-key encryption. The key insight behind our construction is that one-sided ARE, which only guarantees privacy for one of the parties, is relatively easy to construct, and yet can be lifted to full-fledged ARE. We also give a more efficient black-box construction from the CDH assumption.
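In symbols (notation ours, following the abstract's description), an ARE consists of local encoders and a decoder such that
\[
  \hat{x}_i \gets \mathrm{Enc}_i(x_i; r_i), \qquad
  \mathrm{Dec}\Big(\sum_{i=1}^{k} \hat{x}_i\Big) = f(x_1, \dots, x_k),
\]
with privacy requiring that the output encoding $\hat y = \sum_i \hat x_i$ can be simulated given only $f(x_1, \dots, x_k)$.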
Last updated:  2025-01-22
Technology-Dependent Synthesis and Optimization of Circuits for Small S-boxes
Zihao Wei, Siwei Sun, Fengmei Liu, Lei Hu, and Zhiyu Zhang
Boolean formula minimization is a notoriously hard problem that is known to be $\varSigma_2^P$-complete. Circuit minimization, typically studied in the context of a much broader subject known as synthesis and optimization of circuits, introduces another layer of complexity, since ultimately those technology-independent representations (e.g., Boolean formulas and truth tables) have to be transformed into a netlist of cells of the target technology library. To manage this complexity, the industrial community typically separates the synthesis process into two steps: technology-independent optimization and technology mapping. In each step, this approach only tries to find a locally optimal solution and relies heavily on heuristics rather than a systematic search. However, for small S-boxes, a more systematic exploration of the design space is possible. Aiming at the global optimum, we propose a method which can synthesize a truth table for a small S-box directly into a netlist of the cells of a given technology library. Compared with existing technology-dependent synthesis tools like LIGHTER and PEIGEN, our method produces improved results for many S-boxes with respect to circuit area. In particular, by applying our method to the $\mathbb{F}_{2^4}$-inverter involved in the tower field implementation of the AES S-box, we obtain the currently known lightest implementation of the AES S-box. The search framework can be tweaked to take circuit delay into account. As a result, we find implementations for certain S-boxes with both latency and area improved.
Last updated:  2025-01-22
An Introduction to Protein Cryptography
Hayder Tirmazi and Tien Phuoc Tran
We introduce protein cryptography, a recently proposed method that encodes data into the amino acid sequences of proteins. Unlike traditional digital encryption, this approach relies on the inherent diversity, complexity, and replication resistance of biological macromolecules, making them highly secure against duplication or tampering. The experimental realization of protein cryptography remains an open problem. To accelerate experimental progress in this area, we provide an accessible and self-contained introduction to the fundamentals of cryptography for biologists with limited mathematical and computational backgrounds. Furthermore, we outline a framework for encoding, synthesizing, and decoding information using proteins. By enabling biologists to actively engage in the development of protein cryptography, this work bridges disciplinary boundaries and paves the way for applications in secure data storage.
Last updated:  2025-01-22
A practical distinguisher on the full Skyscraper permutation
Antoine Bak
Skyscraper is a cryptographic permutation published in TCHES 2025, optimized for use in proof systems such as PlonK. This primitive is based on a 10-round Feistel network combining $x^2$ monomials and lookup-based functions to achieve competitive plain performance and efficiency in proof systems supporting lookups. In terms of security, the $x^2$ monomials are supposed to provide security against statistical attacks, while the lookups are supposed to provide security against algebraic attacks. In this note, we show that this primitive has a much lower security margin than expected. Using a rebound attack, we find practical truncated differentials on the full permutation. As a corollary, we also find a practical collision attack on the compression function based on a 9-round Skyscraper permutation, which significantly reduces the security margin of the primitive. All of these attacks have been implemented and work in practice.
Last updated:  2025-01-22
Towards Optimally Small Smoothness Bounds for Cryptographic-Sized Twin Smooth Integers and their Isogeny-based Applications
Bruno Sterner
We give a new approach for finding large smooth twins. Twins whose sum is a prime are of interest in the parameter setup of certain isogeny-based cryptosystems such as SQIsign. The approach is to find two polynomials in $\mathbb{Q}[x]$ that split into a product of small-degree factors and differ by $1$, and then evaluate them at a particular smooth integer. This was first explored by Costello, Meyer and Naehrig at EUROCRYPT'21 using polynomials that split completely into linear factors, which were found using Diophantine number theory. The polynomials used in this work split into mostly linear factors, with the exception of a few quadratic factors. Some of these linear factors are repeated, and so the overall smoothness probability is either better than or comparable to that of the prior polynomials. We use these polynomials to search for large smooth twins whose sum is prime. In particular, the smoothness bounds of the $384$- and $512$-bit twins that we find are significantly smaller than those found at EUROCRYPT'21.
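A classic toy instance of this strategy (well known from prior work; the paper's polynomials are larger and include quadratic factors): $p_1(x) = x^2 - 1 = (x-1)(x+1)$ and $p_2(x) = x^2$ differ by $1$, so if $t - 1$, $t$, and $t + 1$ are all $B$-smooth, then $(t^2 - 1,\; t^2)$ is a pair of $B$-smooth twins of roughly twice the bit length.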
Last updated:  2025-01-22
Beyond Optimal Fault-Tolerance
Andrew Lewis-Pye and Tim Roughgarden
One of the most basic properties of a consensus protocol is its fault-tolerance--the maximum fraction of faulty participants that the protocol can tolerate without losing fundamental guarantees such as safety and liveness. Because of its importance, the optimal fault-tolerance achievable by any protocol has been characterized in a wide range of settings. For example, for state machine replication (SMR) protocols operating in the partially synchronous setting, it is possible to simultaneously guarantee consistency against $\alpha$-bounded adversaries (i.e., adversaries that control less than an $\alpha$ fraction of the participants) and liveness against $\beta$-bounded adversaries if and only if $\alpha + 2\beta \leq 1$. This paper characterizes to what extent ``better-than-optimal'' fault-tolerance guarantees are possible for SMR protocols when the standard consistency requirement is relaxed to allow a bounded number $r$ of consistency violations, each potentially leading to the rollback of recently finalized transactions. We prove that bounding rollback is impossible without additional timing assumptions and investigate protocols that tolerate and recover from consistency violations whenever message delays around the time of an attack are bounded by a parameter $\Delta^*$ (which may be arbitrarily larger than the parameter $\Delta$ that bounds post-GST message delays in the partially synchronous model). Here, a protocol's fault-tolerance can be a non-constant function of $r$, and we prove, for each $r$, matching upper and lower bounds on the optimal ``recoverable fault-tolerance'' achievable by any SMR protocol. For example, for protocols that guarantee liveness against 1/3-bounded adversaries in the partially synchronous setting, a 5/9-bounded adversary can always cause one consistency violation but not two, and a 2/3-bounded adversary can always cause two consistency violations but not three. Our positive results are achieved through a generic ``recovery procedure'' that can be grafted on to any accountable SMR protocol and restores consistency following a violation while rolling back only transactions that were finalized in the previous $2\Delta^*$ timesteps.
Last updated:  2025-01-22
Shifting our knowledge of MQ-Sign security
Lars Ran and Monika Trimoska
Unbalanced Oil and Vinegar (UOV) is one of the oldest, simplest, and most studied ad-hoc multivariate signature schemes. UOV signature schemes are attractive because they have very small signatures and fast verification. On the downside, they have large public and secret keys. As a result, variations of the traditional UOV scheme are usually developed with the goal of reducing the key sizes. Seven variants of UOV were submitted to the additional call for digital signatures by NIST, prior to which a variant named MQ-Sign was submitted to the (South) Korean post-quantum cryptography competition (KpqC). MQ-Sign is currently competing in the second round of KpqC with two variants. One of the variants corresponds to the classic description of UOV with certain implementation and parameter choices. In the other variant, called MQ-Sign-LR, a part of the central map is constructed from row shifts of a single matrix. This design makes for smaller secret keys, and in the case where the equivalent-keys optimization is used, it also leads to smaller public keys. However, we show in this work that the polynomial systems arising from an algebraic attack have a specific structure that can be exploited. Specifically, we are able to find preimages for $d$-periodic targets under the public map with a probability of $63\%$ for all security levels. The complexity of finding these preimages, as well as the fraction of $d$-periodic targets, increases with $d$ and hence provides a trade-off. We show that for all security levels one can choose $d=\frac{v}{2}$, for $v$ the number of vinegar variables, and reduce the security claim. Our experiments show practical running times for lower $d$, ranging from 6 seconds to 14 hours.
Last updated:  2025-01-22
Unveiling Privacy Risks in Quantum Optimization Services
Mateusz Leśniak, Michał Wroński, Ewa Syta, and Mirosław Kutyłowski
As cloud-based quantum computing services, such as those offered by D-Wave, become more popular for practical applications, privacy-preserving methods (such as obfuscation) are essential to address data security, privacy, and legal compliance concerns. Several efficient obfuscation methods for quantum optimization problems have been proposed that do not increase the time complexity of solving the obfuscated problem. These include {\em sign reversing}, {\em variable permutation}, and the combination of both methods, assumed to provide greater protection. Unfortunately, sign reversing has already been shown to be insecure. We present two attacks, on variable permutation and on the combined method, where it is possible to efficiently recover the deobfuscated problem, particularly when given access to the obfuscated problem and its obfuscated solution, as a cloud-based quantum provider would have. Our attacks target an optimization problem arising in cryptanalysis of the Trivium cipher family, but our approach generalizes to other similarly structured problems. Our attacks are efficient and practical. Deobfuscating an optimization problem with \( n \) variables obfuscated with the combined method has a complexity of $O(n^2)$, compared to the complexity $O\left( n \cdot n! \cdot 2^n \right)$ of the brute-force attack. We provide an implementation of our attack; using a commodity laptop, our optimized attack on the full Trivium cipher takes less than two minutes. We also present possible countermeasures to mitigate our attacks and bring attention to the need for further development in this area.
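The effect exploited here is easy to see in miniature (toy rendering ours, illustrating the obfuscation rather than the attack): permuting the variables of a QUBO matrix permutes its optima, so a provider holding both the obfuscated problem and its obfuscated solution holds exactly the correlated pair the attack exploits.

```python
import itertools, random

def energy(Q, x):
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_min(Q):
    n = len(Q)
    return min(itertools.product([0, 1], repeat=n), key=lambda x: energy(Q, x))

random.seed(1)
n = 4
Q = [[random.randint(-3, 3) for _ in range(n)] for _ in range(n)]
pi = list(range(n)); random.shuffle(pi)         # secret variable permutation

# Obfuscated instance: Q'[a][b] = Q[pi[a]][pi[b]].
Qp = [[Q[pi[a]][pi[b]] for b in range(n)] for a in range(n)]

x, xp = brute_min(Q), brute_min(Qp)
inv = [pi.index(i) for i in range(n)]
x_back = tuple(xp[inv[i]] for i in range(n))    # undo the permutation

assert energy(Qp, xp) == energy(Q, x)           # same optimal value
assert energy(Q, x_back) == energy(Q, x)        # maps back to a true optimum
```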
Last updated:  2025-01-22
Meet-in-the-Middle Attack on Primitives with Binary Matrix Linear Layer
Qingliang Hou, Kuntong Li, Guoyan Zhang, Yanzhao Shen, Qidi You, and Xiaoyang Dong
Meet-in-the-middle (MitM) is a powerful approach for the cryptanalysis of symmetric primitives. In recent years, MitM has led to many improved records for key recovery, preimage, and collision attacks with the help of automated tools. However, most previous works target $\texttt{AES}$-like hashing, where the linear layer is an MDS matrix, and we observe that their automatic model for MDS matrices is not suitable for primitives using a binary matrix as their linear layer. In this paper, we propose the $\texttt{n-XOR}$ model to describe the $\texttt{XOR}$ operation with an arbitrary number of inputs; it can be applied to primitives with a binary matrix of arbitrary size. We then propose a check model to eliminate the possible inaccuracies caused by $\texttt{n-XOR}$, although the check model is limited to input sizes of at most 4. Combining the two new models, we find a MitM key recovery attack on 11-round $\texttt{Midori64}$. When the whitening keys are excluded, a MitM key recovery attack can be mounted on 12-round $\texttt{Midori64}$. Compared with the previous best work, both of the above results have distinct advantages in terms of reduced memory and data complexity. Finally, we apply the $\texttt{n-XOR}$ model to the hashing modes of primitives with large binary matrices. The preimage attacks on weakened $\texttt{camellia}-{\tt MMO}$ (without $FL/FL^{-1}$ and whitening layers) and on $\texttt{Aria}-{\tt DM}$ are both improved by 1 round.
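To make the modeled operation concrete, the sketch below applies a binary matrix as a linear layer: each output cell is the XOR of the input cells selected by one matrix row, i.e., exactly the multi-input XOR that the $\texttt{n-XOR}$ model describes. The example matrix is the involutory almost-MDS matrix circ(0,1,1,1) used in Midori's MixColumn.

    # Minimal sketch of a binary-matrix linear layer: output cell j is the
    # XOR of the input cells selected by row j (an "n-XOR" whose number of
    # inputs varies per row).
    from functools import reduce
    from operator import xor

    def apply_binary_matrix(M, cells):
        return [reduce(xor, (c for c, bit in zip(cells, row) if bit), 0)
                for row in M]

    # Midori's MixColumn matrix circ(0,1,1,1); it is an involution over
    # GF(2), so applying it twice returns the input column.
    M = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
    col = [0x1, 0x2, 0x4, 0x8]
    assert apply_binary_matrix(M, apply_binary_matrix(M, col)) == col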
Last updated:  2025-01-22
On Gaussian Sampling for $q$-ary Lattices and Linear Codes with Lee Weight
Maiara F. Bollauf, Maja Lie, and Cong Ling
We show that discrete Gaussian sampling for a $q$-ary lattice is equivalent to codeword sampling for a linear code over $\mathbb{Z}_q$ with the Lee weight. This insight allows us to derive the theta series of a $q$-ary lattice from the Lee weight distribution of the associated code. We design a novel Gaussian sampler for $q$-ary lattices, assuming an oracle that computes the symmetrized weight enumerator of the associated code. We apply this sampler to well-known lattices, such as $E_8$, Barnes-Wall lattices, and the Leech lattice, highlighting both its advantages and limitations, which depend on the properties of the underlying code. For certain root lattices, we show that the sampler is indeed efficient, forgoing the need to assume an oracle. We also discuss applications of our results to digital signature schemes and the Lattice Isomorphism Problem. In many cases, our sampler achieves a significant speed-up compared to state-of-the-art sampling algorithms in cryptographic applications.
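For orientation, the Lee weight driving this equivalence measures each coordinate by its distance to $0$ around the cycle $\mathbb{Z}_q$; a minimal sketch:

    # The Lee weight of x in Z_q is min(x mod q, q - x mod q); for a
    # codeword it is summed coordinate-wise. For q = 2 and q = 3 it
    # coincides with the Hamming weight.
    def lee_weight(codeword, q):
        return sum(min(x % q, q - (x % q)) for x in codeword)

    assert lee_weight([1, 4, 0], 5) == 2   # 1 and 4 both have Lee weight 1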
Last updated:  2025-01-22
Non-Interactive Distributed Point Functions
Elette Boyle, Lalita Devadas, and Sacha Servan-Schreiber
Distributed Point Functions (DPFs) are a useful cryptographic primitive enabling a dealer to distribute short keys to two parties, such that the keys encode additive secret shares of a secret point function. However, in many applications of DPFs, no single dealer entity has full knowledge of the secret point function, necessitating the parties to run an interactive protocol to emulate the setup. Prior works have aimed to minimize complexity metrics of such distributed setup protocols, e.g., round complexity, while remaining black-box in the underlying cryptography. We construct Non-Interactive DPFs (NIDPF), which have a one-round (simultaneous-message, semi-honest) setup protocol, removing the need for a trusted dealer. Specifically, our construction allows each party to publish a special "public key" to a public channel or bulletin board, where the public key encodes the party's secret function parameters. Using the public key of another party, any pair of parties can locally derive a DPF key for the point function parameterized by the two parties' joint inputs. We realize NIDPF from an array of standard assumptions, including DCR, SXDH, QR, and LWE. Each party's public key is of size $O(N^{2/3})$, for point functions with a domain of size $N$, which leads to a sublinear communication setup protocol. The only prior approach to realizing such a non-interactive setup required using multi-key fully-homomorphic encryption or indistinguishability obfuscation. As immediate applications of our construction, we get "public-key setups" for several existing constructions of pseudorandom correlation generators and round-efficient protocols for secure comparisons.
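The correctness property that the non-interactive setup must produce can be illustrated with a deliberately insecure toy (names and parameters below are hypothetical): the two keys are additive shares of the entire truth table of the point function, which is correct but not compact. A real DPF compresses each key to roughly logarithmic size, and the contribution above is deriving such keys from the parties' published public keys without interaction.

    # Toy (insecure) illustration of DPF correctness only: keys k0, k1 are
    # additive shares mod p of the truth table of the point function
    # f_{alpha,beta}(x) = beta if x == alpha else 0. Real DPF keys are
    # short and pseudorandom; these are neither.
    import random

    def toy_gen(alpha, beta, N, p):
        k0 = [random.randrange(p) for _ in range(N)]
        k1 = [((beta if x == alpha else 0) - k0[x]) % p for x in range(N)]
        return k0, k1

    k0, k1 = toy_gen(alpha=3, beta=5, N=8, p=97)
    assert all((k0[x] + k1[x]) % 97 == (5 if x == 3 else 0) for x in range(8))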
Last updated:  2025-01-22
Multi-Key Homomorphic Secret Sharing
Geoffroy Couteau, Lalita Devadas, Aditya Hegde, Abhishek Jain, and Sacha Servan-Schreiber
Homomorphic secret sharing (HSS) is a distributed analogue of fully-homomorphic encryption (FHE), where subsequent to an input-sharing phase, parties can locally compute a function over their shares to obtain shares of the function output. Over the last decade, HSS schemes have been constructed from an array of different assumptions. However, all schemes require a public-key or correlated-randomness setup. This limitation carries over to the applications of HSS. In this work, we construct multi-key homomorphic secret sharing (MKHSS), where given only a common reference string (CRS), two parties can secret share their inputs to each other and then perform local computations as in HSS. We present the first MKHSS schemes supporting all NC1 computations from either the Decisional Diffie-Hellman (DDH), Decisional Composite Residuosity (DCR), or class group assumptions. Our constructions imply the following applications in the CRS model: - Succinct two-round secure computation. Under the same assumptions as our MKHSS schemes, we construct succinct, two-round secure two-party computation for NC1 circuits. Previously, such a result was only known from the learning with errors assumption. - Attribute-based NIKE. Under DCR or class group assumptions, we construct non-interactive key exchange (NIKE) protocols where two parties agree on a key if and only if their secret attributes satisfy a public NC1 predicate. This significantly generalizes the existing notion of password-based NIKE. - Public-key PCFs. Under DCR or class group assumptions, we construct public-key pseudorandom correlation functions (PCFs) for any NC1 correlation. This yields the first public-key PCFs for Beaver triples (and more) from non-lattice assumptions. - Silent MPC. Under DCR or class group assumptions, we construct a $p$-party secure computation protocol in the silent preprocessing model where the preprocessing phase has communication $O(p)$, ignoring polynomial factors. All prior protocols that do not rely on spooky encryption require $\Omega(p^2)$ communication.
Last updated:  2025-01-22
Zero-Knowledge Proofs of Quantumness
Duong Hieu Phan, Weiqiang Wen, Xingyu Yan, and Jinwei Zheng
With the rapid development of quantum computers, proofs of quantumness have recently become an interesting and intriguing research direction. However, in all current schemes for proofs of quantumness, quantum provers almost invariably face the risk of being maliciously exploited by classical verifiers. In fact, through malicious strategies in the interaction with quantum provers, classical verifiers could solve some instances of hard problems that arise from the specific scheme in use. In other words, malicious verifiers can break some schemes (without the quantum provers being aware of it) through interaction with quantum provers. All this is due to a lack of formalization that would prevent malicious verifiers from extracting useful information in proofs of quantumness. To address this issue, we formalize zero-knowledge proofs of quantumness. Intuitively, the zero-knowledge property requires that the information gained by the classical verifier from interactions with the quantum prover should not surpass what can be simulated using a simulated classical prover interacting with the same verifier. As a result, the new zero-knowledge notion can prevent any malicious verifier from exploiting the quantum advantage. Interestingly, we find that classical zero-knowledge proofs suffice to compile some existing proofs of quantumness schemes into zero-knowledge proofs of quantumness schemes. For technical reasons, it appears to be more general to require the zero-knowledge proof on the verifier side instead of the prover side. Intuitively, this helps to regulate the verifier's behavior from malicious to honest-but-curious. As a result, both parties play not only their role in the proof of quantumness but also the dual role in the classical zero-knowledge proof. Specifically, the two principal proofs of quantumness schemes, Shor's factoring-based scheme and the learning-with-errors-based scheme of [Brakerski et al., FOCS 2018], can be transformed into zero-knowledge proofs of quantumness by requiring an extractable non-interactive zero-knowledge argument on the verifier side. Notably, zero-knowledge proofs of quantumness can be viewed as an enhanced security notion for proofs of quantumness. To prevent malicious verifiers from exploiting the quantum device's capabilities or knowledge, it is advisable to transition existing proofs of quantumness schemes to this framework whenever feasible.
Last updated:  2025-01-22
Simultaneous-Message and Succinct Secure Computation
Elette Boyle, Abhishek Jain, Sacha Servan-Schreiber, and Akshayaram Srinivasan
We put forth and instantiate a new primitive we call simultaneous-message and succinct (SMS) secure computation. An SMS scheme enables a minimal communication pattern for secure computation in the following scenario: Alice has a large private input $X$, Bob has a small private input $y$, and Charlie wants to learn $f(X, y)$ for some public function $f$. Given a common reference string (CRS) setup phase, an SMS scheme for a function $f$ is instantiated with two parties holding inputs $X$ and $y$, and has the following structure: - The parties simultaneously exchange a single message. - Communication is succinct, scaling sublinearly in the size of $X$ and the output $f(X, y)$. - Without further interaction, the parties can locally derive additive secret shares of $f(X, y)$. In this way, an SMS scheme incurs a communication cost that is only twice the function output length. Importantly, the size of Alice's message does not grow with the size of her input $X$, and both Alice's and Bob's first-round messages grow sublinearly in the size of the output. Additionally, Alice's or Bob's view provides no information about the other party's input besides the output of $f(X, y)$, even when colluding with Charlie. We obtain the following results: - Assuming Learning With Errors (LWE), we build an SMS scheme supporting evaluation of depth-$d$ circuits, where Alice's message is of size $|f(X, y)|^{2/3} \cdot \mathrm{poly}(\lambda, d)$, Bob's message is of size $(|y| + |f(X, y)|^{2/3}) \cdot \mathrm{poly}(\lambda, d)$, and $\lambda$ is the security parameter. We can further extend this to support all functions by assuming the circular security of LWE. - Assuming sub-exponentially secure indistinguishability obfuscation, in conjunction with other standard assumptions, we build an SMS scheme supporting arbitrary polynomial-sized batch functions of the form $(f(x_1, y), \ldots, f(x_L, y))$, for $X = (x_1, \ldots, x_L)$. The size of Alice's and Bob's messages in this construction is $\mathrm{poly}(\lambda)$ and $\mathrm{poly}(\lambda, |f|, \log L)$, respectively. We show that SMS schemes have several immediate applications. An SMS scheme gives: - A direct construction of trapdoor hash functions (TDH) (Döttling et al., Crypto'19) for the same class of functions as the one supported by the SMS scheme. - A simple and generic compiler for obtaining compact, rate-1 fully homomorphic encryption (FHE) from any non-compact FHE scheme. - A simple and generic compiler for obtaining correlation-intractable (CI) hash functions that are secure against all efficiently-searchable relations. In turn, under the LWE assumption, we obtain the first construction of TDH for all functions and generic approaches for obtaining rate-1 FHE and CI hashing. We also show that our iO-based construction gives an alternative approach for two-round secure computation with communication succinctness in the output length (Hubáček and Wichs, ITCS'15).
Last updated:  2025-01-22
Adaptive Hardcore Bit and Quantum Key Leasing over Classical Channel from LWE with Polynomial Modulus
Duong Hieu Phan, Weiqiang Wen, Xingyu Yan, and Jinwei Zheng
Quantum key leasing, also known as public key encryption with secure key leasing (PKE-SKL), allows a user to lease a (quantum) secret key to a server for decryption purposes, with the capability of revoking the key afterwards. In the pioneering work by Chardouvelis et al. (arXiv:2310.14328), a PKE-SKL scheme utilizing classical channels was successfully built upon the noisy trapdoor claw-free (NTCF) family. This approach, however, relies on the superpolynomial hardness of the learning with errors (LWE) problem, which could affect both the efficiency and the security of the scheme. In our work, we demonstrate that the reliance on superpolynomial hardness is unnecessary, and that LWE with a polynomial-size modulus is sufficient to achieve the same goal. Our approach enhances both efficiency and security, thereby improving the practical feasibility of the scheme on near-term quantum devices. To accomplish this, we first construct a \textit{noticeable} NTCF (NNTCF) family with the adaptive hardcore bit property, based on LWE with a polynomial-size modulus. To the best of our knowledge, this is the first demonstration of the adaptive hardcore bit property based on LWE with a polynomial-size modulus, which may be of independent interest. Building on this foundation, we address additional challenges in prior work to construct the first PKE-SKL scheme satisfying the following properties: (\textit{i}) the entire protocol uses only classical communication and can be lifted to support homomorphism; (\textit{ii}) the security is based solely on the LWE assumption with a polynomial-size modulus. As a demonstration of the versatility of our noticeable NTCF, we show that an efficient proof of quantumness protocol can be built upon it. Specifically, our protocol enables a classical verifier to test quantumness while relying exclusively on the LWE assumption with a polynomial-size modulus.
Last updated:  2025-01-22
Fast, private and regulated payments in asynchronous networks
Maxence Brugeres, Victor Languille, Petr Kuznetsov, and Hamza Zarfaoui
We propose a decentralized asset-transfer system that enjoys full privacy: no party can learn the details of a transaction, except for its issuer and its recipient. Furthermore, the recipient is only aware of the amount of the transaction. Our system does not rely on consensus or synchrony assumptions, and therefore, it is responsive, since it runs at the actual network speed. Under the hood, every transaction creates a consumable coin equipped with a non-interactive zero-knowledge proof (NIZK) that confirms that the issuer has sufficient funds without revealing any information about her identity, the recipient's identity, or the payment amount. Moreover, we equip our system with a regulatory enforcement mechanism that can be used to regulate transfer limits or restrict specific addresses from sending or receiving funds, while preserving the system's privacy guarantees. Finally, we report on Paxpay, our implementation of Fully Private Asset Transfer (FPAT) that uses the Gnark library for the NIZKs. In our benchmark, Paxpay exhibits better performance than earlier proposals that either ensure only partial privacy, require some kind of network synchrony or do not implement regulation features. Our system thus reconciles privacy, responsiveness, regulation enforcement and performance.
Last updated:  2025-01-22
Non-Interactive Threshold BBS+ From Pseudorandom Correlations
Sebastian Faust, Carmit Hazay, David Kretzler, Leandro Rometsch, and Benjamin Schlosser
The BBS+ signature scheme is one of the most prominent solutions for realizing anonymous credentials. Its prominence is due to properties like selective disclosure and efficient protocols for creating and showing possession of credentials. Traditionally, a single credential issuer produces BBS+ signatures, which poses significant risks due to a single point of failure. In this work, we address this threat via a novel $t$-out-of-$n$ threshold BBS+ protocol. Our protocol supports an arbitrary security threshold $t \leq n$ and works in the so-called preprocessing setting. In this setting, we achieve non-interactive signing in the online phase and sublinear communication complexity in the number of signatures in the offline phase, which, as we show in this work, are important features from a practical point of view. As it stands today, none of the widely studied signature schemes, such as threshold ECDSA and threshold Schnorr, achieves both properties simultaneously. In this work, we make the observation that presignatures can be computed directly from pseudorandom correlations, which allows servers to create signature shares without additional cross-server communication. Both our offline and online protocols are actively secure in the Universal Composability model. Finally, we evaluate the concrete efficiency of our protocol, including an implementation of the online phase and of the expansion algorithm of the pseudorandom correlation generator (PCG) used during the offline phase. The online protocol without network latency takes less than $14$ ms for $t \leq 30$ and credential sizes up to $10$. Further, our results indicate that the influence of $t$ on online signing is insignificant, $\leq 6\%$ for $t \leq 30$, and that the overhead of the thresholdization occurs almost exclusively in the offline phase. Our implementation of the PCG expansion shows that even for a committee size of $10$ servers, each server can expand a correlation of up to $2^{17}$ presignatures in less than $100$ ms per presignature.
Last updated:  2025-01-22
Available Attestation: Towards a Reorg-Resilient Solution for Ethereum Proof-of-Stake
Mingfei Zhang, Rujia Li, Xueqian Lu, and Sisi Duan
Ethereum transitioned from Proof-of-Work consensus to Proof-of-Stake (PoS) consensus in September 2022. While this upgrade brings significant improvements (e.g., lower energy costs and higher throughput), it also introduces new vulnerabilities. One notable example is the so-called malicious \textit{reorganization attack}. A malicious reorganization is an attack in which Byzantine faulty validators intentionally manipulate the canonical chain so that blocks produced by honest validators are discarded. By doing so, the faulty validators can gain benefits such as higher rewards or lower chain quality, or even pose a liveness threat to the system. In this work, we show that the majority of known attacks on Ethereum PoS are some form of reorganization attack. In practice, most of these attacks can be launched even if the network is synchronous (i.e., there exists a known upper bound on message transmission and processing). Unlike existing studies that mitigate these attacks in an ad-hoc way, we take a systematic approach and provide an elegant yet efficient solution to reorganization attacks. Our solution is provably secure, in the sense that no reorganization attacks can be launched in a synchronous network. In a partially synchronous network, our approach achieves the conventional safety and liveness properties of the consensus protocol. Our evaluation results show that our solution is resilient to five types of reorganization attacks and is also highly efficient.
Last updated:  2025-01-21
A Survey on Transciphering and Symmetric Ciphers for Homomorphic Encryption
Indranil Thakur, Angshuman Karmakar, Chaoyun Li, and Bart Preneel
Data privacy concerns are rising sharply in the current digital era, driven by cloud computing, big data analytics, and the Internet of Things. Homomorphic Encryption (HE) has emerged as an ideal technique for computing on encrypted data, but current schemes suffer from slow encryption speed and large ciphertext expansion. Practical deployment is hindered, especially when the client has limited bandwidth, memory, and computing power. In 2011, Naehrig et al. proposed transciphering, which reduces the computational and communication overhead on the client side. It relies on symmetric ciphers with minimized multiplicative complexity, referred to as HE-Friendly Ciphers (HEFCs). In this work, we present a detailed study of transciphering for HE, systematizing existing knowledge and crystallizing research challenges. In particular, we conduct a comprehensive study of state-of-the-art HEFC constructions. Our work highlights gaps, open problems, and directions for future research.
Last updated:  2025-01-21
PARScoin: A Privacy-preserving, Auditable, and Regulation-friendly Stablecoin
Amirreza Sarencheh, Aggelos Kiayias, and Markulf Kohlweiss
Stablecoins are digital assets designed to maintain a consistent value relative to a reference point, serving as a vital component in blockchain and Decentralized Finance (DeFi) ecosystems. Typical implementations of stablecoins via smart contracts come with important downsides such as a questionable level of privacy, potentially high fees, and lack of scalability. We put forth a new design, PARScoin, for a Privacy-preserving, Auditable, and Regulation-friendly Stablecoin that mitigates these issues while enabling high performance, both in terms of speed of settlement and of scalability. PARScoin enables privacy-preserving transactions in which user identities, transaction values, and links to past transactions are hidden. Our system ensures auditability and regulation-friendliness without compromising privacy. Transfers do not need to be settled on blockchains, allowing for blockchain-independent fees. The system is blockchain-agnostic and interoperable with multiple blockchains via standard smart contract-based withdrawals. Our construction is distributed using threshold cryptography, avoiding a single point of trust and failure, and is analyzed in the Universal Composability (UC) framework for secure integration with other systems. PARScoin supports self-custody, avoids restrictions like address blacklisting or coin destruction, and demonstrates real-world efficiency based on our performance analysis.
Last updated:  2025-01-21
poqeth: Efficient, post-quantum signature verification on Ethereum
Ruslan Kysil, István András Seres, Péter Kutas, and Nándor Kelecsényi
This work explores the application and efficient deployment of (standardized) post-quantum (PQ) digital signature algorithms in the blockchain environment. Specifically, we implement and evaluate four PQ signatures in the Ethereum Virtual Machine: W-OTS$^{+}$, XMSS, SPHINCS+, and MAYO. We focus on optimizing the gas costs of the verification algorithms, as verification is the only signature-scheme algorithm executed on-chain and thus the one incurring financial costs (transaction fees) for users. Hence, verification is the signature scheme's main bottleneck for decentralized applications. We examine two methods to verify post-quantum digital signatures on-chain. Our practical performance evaluation shows that full on-chain verification is often prohibitively costly. Naysayer proofs (FC'24) enable a novel optimistic verification mode. We observe that the naysayer verification mode is generally the cheapest, at the cost of additional trust assumptions. We release our implementation, called poqeth, as an open-source library.
Last updated:  2025-01-21
Scalable Two-Round $n$-out-of-$n$ and Multi-Signatures from Lattices in the Quantum Random Oracle Model
Qiqi Lai, Feng-Hao Liu, Yang Lu, Haiyang Xue, and Yong Yu
In this paper, we construct the first asymptotically efficient two-round $n$-out-of-$n$ and multi-signature schemes from lattices in the quantum random oracle model (QROM), using the Fiat-Shamir with Aborts (FSwA) paradigm. Our protocols can be viewed as the QROM variants of the two-round protocols by Damgård et al. (JoC 2022). A notable feature of our protocols, compared to other counterparts in the classical random oracle model, is that each party performs an independent abort and still outputs a signature in exactly two rounds, making our schemes significantly more scalable. From a technical perspective, the simulation of the QROM and an efficient reduction from breaking the underlying assumption to forging signatures are the essential challenges in achieving efficient QROM security in the previous related works. To overcome the former, we adopt a quantum-accessible pseudorandom function (QPRF) to simulate the QROM. In particular, we show that there exists a QPRF that can be programmed and inverted, even against a quantum adversary. For the latter challenge, we tweak and apply the online extractability technique by Unruh (Eurocrypt 2015).
Last updated:  2025-01-21
AGATE: Augmented Global Attested Trusted Execution in the Universal Composability framework
Lorenzo Martinico and Markulf Kohlweiss
A Trusted Execution Environment (TEE) is a security technology, implemented by CPU manufacturers, that guarantees integrity and confidentiality of a restricted execution environment to any remote verifier through attestation. TEEs are deployed on various consumer and commercial hardware platforms, and have been widely adopted as a component in the design of cryptographic protocols, both theoretical and practical. Within the provable security community, the use of TEEs as a setup assumption has converged to a standard ideal definition in the Universal Composability setting ($G_{\mathsf{att}}$, defined by Pass et al., Eurocrypt '17). However, it is unclear whether any real TEE design can actually realise such a level of security, or whether the diverse capabilities of today's TEE implementations will in fact converge to a single standard. It is therefore necessary for cryptographers and protocol designers to specify what assumptions the TEE they are using must satisfy to support the correctness and security of their protocol. To this end, this paper provides a more careful treatment of trusted execution than the existing literature, focusing on the capabilities of enclaves and adversaries. Our goal is to provide meaningful patterns for comparing different classes of TEEs, particularly how a weaker TEE functionality can implement a stronger one given an appropriate mechanism to bridge the two. We introduce a new, "modular" definition of TEEs that captures a broad range of pre-existing functionalities defined in the literature while maintaining their high level of abstraction. While our goal is not to directly model implementations of specific commercial TEE providers, our modular definition provides a way to capture more meaningful and realistic hardware capabilities. We propose to characterise TEE capabilities along the following dimensions: - the set of trusted features available to the enclave; - the set of possible attacks on an enclave; - the content of attestation signatures. We then define various possible ideal modular $G_{\mathsf{att}}$ functionality instantiations that capture existing variants in the literature. Finally, we conclude the paper by constructing a protocol template to realise stronger $G_{\mathsf{att}}$ setups from weaker ones, and provide an example of removing an attack.
Last updated:  2025-01-21
Friendly primes for efficient modular arithmetic using the Polynomial Modular Number System
Fangan Yssouf Dosso, Nadia El Mrabet, Nicolas Méloni, François Palma, and Pascal Véron
The Polynomial Modular Number System (PMNS) is a non-positional number system designed for modular arithmetic. Its efficiency, both in software and hardware, has been demonstrated for integers commonly used in Elliptic Curve Cryptography. In recent papers, some authors introduced specific prime forms that are particularly well-suited for PMNS arithmetic. In this work, we extend their results to a broader class of prime numbers. In practice, our approach yields performance that is competitive with, and in some cases superior to, Pseudo-Mersenne arithmetic. As a result, we expand the set of prime numbers that are well-suited for modular arithmetic. Furthermore, we contribute a database of proof-of-concept elliptic curves constructed with those primes that comply with the Brainpool standard.
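For reference, the sketch below shows the classical Pseudo-Mersenne reduction that the PMNS results above are benchmarked against: for $p = 2^k - c$ with small $c$, the high bits are folded back using $2^k \equiv c \pmod{p}$. This is the baseline technique only, not the PMNS algorithm itself.

    # Minimal sketch of Pseudo-Mersenne reduction modulo p = 2^k - c (the
    # baseline PMNS is compared against, not PMNS itself).
    def reduce_pseudo_mersenne(x, k, c):
        p = (1 << k) - c
        while x >> k:                       # fold high bits: 2^k = c mod p
            x = (x & ((1 << k) - 1)) + (x >> k) * c
        return x - p if x >= p else x

    # Example with the Curve25519 prime p = 2^255 - 19:
    p = (1 << 255) - 19
    assert reduce_pseudo_mersenne(1234567 * p + 42, 255, 19) == 42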
Last updated:  2025-01-21
Efficient Multi-party Private Set Union Resistant to Maximum Collusion Attacks
Qiang Liu and Joon-Woo Lee
Multi-party Private Set Union (MPSU) enables multiple participants to jointly compute the union of their private sets without leaking any additional information beyond the resulting union. Liu et al. (ASIACRYPT 2023) proposed the first scalable MPSU protocol fully based on symmetric-key encryption (SKE), which designates one participant as the "leader" responsible for obtaining the final union. However, the protocol assumes that the leader does not collude with other participants, which weakens its practicality. In this work, we design a scalable MPSU protocol, $\Pi_\text{MPSU}^\text{one-leader}$, that tolerates maximum collusion. The protocol relies primarily on SKE, supplemented with additive homomorphic encryption (AHE), with the designated leader obtaining the union result. Furthermore, to address the issue of fairness in scenarios where obtaining the result early provides an advantage, we extend $\Pi_\text{MPSU}^\text{one-leader}$ and propose a protocol that allows all participants to receive the union result simultaneously, called $\Pi_\text{MPSU}^\text{leaderless}$. We implement our proposed schemes and conduct a comprehensive comparison with the state-of-the-art solution that resists maximum collusion (Gao et al., PETS 2024). The results demonstrate that, while our protocol offers only a moderate improvement in runtime, it significantly reduces communication cost, by approximately $4$ to $5$ times.
Last updated:  2025-01-21
Cauchyproofs: Batch-Updatable Vector Commitment with Easy Aggregation and Application to Stateless Blockchains
Zhongtang Luo, Yanxue Jia, Alejandra Victoria Ospina Gracia, and Aniket Kate
Stateless blockchain designs have emerged to address the challenge of growing blockchain size using succinct global states. Previous works have developed vector commitments that support proof updates and aggregation to be used as such states. However, maintaining proofs for multiple users still demands significant computational resources, particularly to update proofs with every transaction. This paper introduces Cauchyproofs, a batch-updatable vector commitment that enables proof-serving nodes to update proofs efficiently, in quasi-linear time in the number of users and transactions, utilizing an optimized KZG scheme to achieve complexity $O\left((|\vec{\alpha}| + |\vec{\beta}|) \log^2 (|\vec{\alpha}| + |\vec{\beta}|)\right)$ for $|\vec{\alpha}|$ users and $|\vec{\beta}|$ transactions, compared to the previous $O\left(|\vec{\alpha}|\cdot|\vec{\beta}|\right)$ approaches. This advancement reduces the computational burden on proof-serving nodes, allowing efficient proof maintenance across large user groups. We demonstrate that our approach is approximately five times faster than the naive approach at the Ethereum-level block size. Additionally, we present a novel matrix representation for KZG proofs utilizing Cauchy matrices, enabling faster all-proof computations with fewer elliptic curve operations. Finally, we propose an algorithm for history proof queries, supporting retrospective proof generation with high efficiency. Our contributions substantially enhance the scalability and practicality of proof-serving nodes in stateless blockchain frameworks.
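The Cauchy structure underlying the matrix representation above is simple to state: a Cauchy matrix has entries $1/(x_i - y_j)$ for two disjoint point sets, and this regularity is what enables fast all-proof computation. A minimal sketch over the rationals (in the KZG setting the entries would live in a prime field, with division replaced by a modular inverse):

    # Minimal sketch of a Cauchy matrix: C[i][j] = 1 / (x_i - y_j) for
    # disjoint point sets {x_i} and {y_j}; rationals are used for clarity.
    from fractions import Fraction

    def cauchy_matrix(xs, ys):
        assert not set(xs) & set(ys), "point sets must be disjoint"
        return [[Fraction(1, x - y) for y in ys] for x in xs]

    C = cauchy_matrix([1, 2, 3], [4, 5, 6])   # a 3x3 example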
Last updated:  2025-01-20
Learning With Quantization: A Ciphertext Efficient Lattice Problem with Tight Security Reduction from LWE
Shanxiang Lyu, Ling Liu, and Cong Ling
In this paper, we introduce Learning With Quantization (LWQ), a new problem related to the Learning With Errors (LWE) and Learning With Rounding (LWR) problems. LWQ provides a tight security reduction from LWE while enabling efficient ciphertext compression comparable to that of LWR. We adopt polar lattices to instantiate the quantizer of LWQ. Polar lattices are a specific instance of the classical Construction D, which utilizes a set of nested polar codes as component codes. Due to the polarization phenomenon of polar codes, the distribution of the quantization error converges to a discrete Gaussian. Moreover, the quantization algorithm can be executed in polynomial time. Our main result establishes a security reduction from LWE to LWQ, ensuring that the LWQ distribution remains computationally indistinguishable from the uniform distribution. The technical novelty lies in bypassing the noise merging principle often seen in the security reduction of LWR, instead employing a more efficient noise matching principle. We show that the compression rate is ultimately determined by the capacity of the ``LWE channel'', which can be adjusted flexibly. Additionally, we propose a high-information-rate encryption framework based on LWQ, demonstrating its advantage over constructions based on LWE and quantized LWE.
Last updated:  2025-01-20
ICT: Insured Cryptocurrency Transactions
Aydin Abadi, Amirreza Sarencheh, Henry Skeoch, and Thomas Zacharias
Cryptocurrencies have emerged as a critical medium for digital financial transactions, driving widespread adoption while simultaneously exposing users to escalating fraud risks. The irreversible nature of cryptocurrency transactions, combined with the absence of consumer protection mechanisms, leaves users vulnerable to substantial financial losses and emotional distress. To address these vulnerabilities, we introduce Insured Cryptocurrency Transactions (ICT), a novel decentralized insurance framework designed to ensure financial recovery for honest users affected by fraudulent cryptocurrency transactions. We rigorously formalize the ICT framework, establishing strong security guarantees to protect against malicious adversaries. Furthermore, we present Insured Cryptocurrency Exchange (ICE), a concrete instantiation of ICT tailored for centralized cryptocurrency exchanges. ICE relies primarily on a standard smart contract and provides a robust mechanism to compensate users in cases of security breaches, insolvency, or fraudulent activities affecting the exchange. We have implemented ICE’s smart contract and evaluated its on-chain costs. The evaluation results demonstrate ICE’s low operational overhead. To our knowledge, ICT and ICE represent the first formal approaches to decentralized insurance frameworks in the cryptocurrency domain.
Last updated:  2025-01-20
Non Linearizable Entropic Operator
Daniel Nager
In [Pan21], a linearization attack is proposed in order to break the cryptosystem proposed in [Gli21]. We propose here a non-linearizable operator that disables this attack, as this operator does not give rise to a quasigroup and does not obey the Latin square property.
Last updated:  2025-01-20
An Extended Hierarchy of Security Notions for Threshold Signature Schemes and Automated Analysis of Protocols That Use Them
Cas Cremers, Aleksi Peltonen, and Mang Zhao
Despite decades of work on threshold signature schemes, there is still limited agreement on their desired properties and threat models. In this work we significantly extend and repair previous work to give a unified syntax for threshold signature schemes and a new hierarchy of security notions for them. Moreover, our new hierarchy allows us to develop an automated analysis approach for protocols that use threshold signatures, which can discover attacks on protocols that exploit the details of the security notion offered by the scheme in use; this can help choose the security notion (and a scheme that fulfills it) that a specific protocol requires. Unlike prior work, our syntax for threshold signatures covers both non-interactive and interactive signature schemes with any number of key generation and signing rounds, and our hierarchy of security notions additionally includes elements such as various types of corruption and malicious key generation. We show the applicability of our hierarchy by selecting representative threshold signature schemes from the literature, extracting their core security features, and categorizing them according to our hierarchy. As a side effect of our work, we show through a counterexample that a previous attempt at building a unified hierarchy of unforgeability notions does not meet its claimed ordering, and we show how to repair it without further restricting the scope of the definitions. Based on our syntax and hierarchy, we develop the first systematic, automated analysis method for higher-level protocols that use threshold signatures. We use a symbolic analysis framework to abstractly model threshold signature schemes that meet security notions in our hierarchy, and implement this in the Tamarin prover. Given a higher-level protocol that uses threshold signatures, and a security notion from our hierarchy, our automated approach can find attacks on such protocols that exploit the subtle differences between elements of our hierarchy. Our approach can be used to formally analyze the security implications of implementing different threshold signature schemes in higher-level protocols.
Last updated:  2025-01-20
Artificial Results From Hardware Synthesis
Ahmed Alharbi and Charles Bouillaguet
In this paper, we revisit venerable lower bounds on the $AT$ or $AT^2$ performance metric of hardware circuits. A series of works started in the late 1970s established that if a hardware circuit of area $A$ computes a function $f : \{0, 1\}^n \rightarrow \{0, 1\}^m$ in $T$ clock cycles, then $AT^2$ is asymptotically larger than (a form of) the communication complexity of $f$. These lower bounds ignore the active components of the circuit, such as the logic gates, and only take into account the area of the wiring. However, it seems to be common practice to report the performance characteristics of hardware designs after synthesis, namely after having ``compiled'' the design into the topological description of a hardware circuit made of standard cells. The area of the cells can be determined with certainty, whereas the area occupied by the wires cannot. This may lead to optimistic performance figures that may even violate the old lower bounds. In this paper, we take the Möbius transform as a case study, following the work of Banik and Regazzoni in TCHES, 2024(2), who presented hardware designs that implement it. We first determine the communication complexity of the Möbius transform. Then, following the old methodology, we derive lower bounds on the area (in $\mu m^2$) of any circuit that implements the operation, using several open Process Design Kits for ASIC production. For large enough instances, the wires provably occupy more area than the logic gates themselves. This invalidates previous theoretical claims about the performance of circuits implementing the Möbius transform. Fundamentally, the root cause of the contradiction between ``VLSI-era'' lower bounds and current performance claims is that the lower bounds apply to a geometric description of the circuit where the length of the wiring is known, while it is common to report performance results on the basis of hardware synthesis alone, where a topological description of the circuit has been obtained but the actual length of the wires is unknown.
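For concreteness, the Möbius transform in this case study maps the truth table of an $n$-variable Boolean function to its algebraic normal form (ANF) coefficients; the standard in-place butterfly implementation below costs $n \cdot 2^{n-1}$ XORs and is an involution over GF(2).

    # Standard in-place Moebius transform: converts the 2^n-bit truth table
    # of an n-variable Boolean function into its ANF coefficients using a
    # butterfly network of n * 2^(n-1) XORs. Applying it twice recovers
    # the input.
    def mobius_transform(t):
        n = len(t).bit_length() - 1
        for i in range(n):
            step = 1 << i
            for j in range(len(t)):
                if j & step:
                    t[j] ^= t[j ^ step]
        return t

    tt = [0, 1, 1, 0, 1, 0, 0, 1]          # truth table of x0 XOR x1 XOR x2
    assert mobius_transform(mobius_transform(list(tt))) == tt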
Last updated:  2025-01-20
An Improved Algorithm for Code Equivalence
Julian Nowakowski
We study the linear code equivalence problem (LEP) for linear $[n,k]$-codes over finite fields $\mathbb{F}_q$. Recently, Chou, Persichetti and Santini gave an elegant algorithm that solves LEP over large finite fields (with $q = \Omega(n)$) in time $2^{\frac{1}{2}\operatorname{H}\left(\frac{k}{n}\right)n}$, where $\operatorname{H}(\cdot)$ denotes the binary entropy function. However, for small finite fields, their algorithm can be significantly slower. In particular, for fields of constant size $q = \mathcal{O}(1)$, its runtime increases by an exponential factor in $n$. We present an improved version of their algorithm, which achieves the desired runtime of $2^{\frac{1}{2}\operatorname{H}\left(\frac{k}{n}\right)n}$ for all finite fields of size $q \geq 7$. For a wide range of parameters, this improves over the runtime of all previously known algorithms by an exponential factor.
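For reference, the binary entropy function appearing in the runtime exponent is $\operatorname{H}(x) = -x\log_2 x - (1-x)\log_2(1-x)$; for instance, at rate $k/n = \frac{1}{2}$ one has $\operatorname{H}(\frac{1}{2}) = 1$, so the claimed runtime $2^{\frac{1}{2}\operatorname{H}\left(\frac{k}{n}\right)n}$ evaluates to $2^{n/2}$.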
Last updated:  2025-01-20
Secret-Sharing Schemes for High Slices
Amos Beimel, Oriol Farràs, Or Lasri, and Oded Nir
In a secret-sharing scheme, a secret is shared among $n$ parties such that the secret can be recovered by authorized coalitions, while it should be kept hidden from unauthorized coalitions. In this work we study secret-sharing for $k$-slice access structures, in which coalitions of size $k$ are either authorized or not, larger coalitions are authorized, and smaller ones are unauthorized. Known schemes for these access structures had smaller shares for small $k$'s than for large ones; hence our focus is on "high" $(n-k)$-slices where $k$ is small. Our work is inspired by several motivations: 1) obtaining efficient schemes (with perfect or computational security) for natural families of access structures; 2) making progress in the search for better schemes for general access structures, which are often based on schemes for slice access structures; 3) proving or disproving the conjecture by Csirmaz (J. Math. Cryptol., 2020) that an access structure and its dual can be realized by secret-sharing schemes with the same share size. The main results of this work are: - Perfect schemes for high slices. We present a scheme for $(n-k)$-slices with information-theoretic security and share size $kn\cdot 2^{\tilde{O}(\sqrt{k \log n})}$. Using a different scheme with slightly larger shares, we prove that the ratio between the optimal share size of $k$-slices and that of their dual $(n-k)$-slices is bounded by $n$. - Computational schemes for high slices. We present a scheme for $(n-k)$-slices with computational security and share size $O(k^2 \lambda \log n)$, based on the existence of one-way functions. Our scheme makes use of a non-standard viewpoint on Shamir secret-sharing that allows sharing many secrets with different thresholds at low cost (see the sketch after this abstract). - Multislice access structures. $(a:b)$-multislices are access structures that behave similarly to slices, but are unconstrained on coalitions in a wider range of cardinalities between $a$ and $b$. We use our new schemes for high slices to realize multislices with the same share sizes that their duals have today. This solves an open question raised by Applebaum and Nir (Crypto, 2021), and allows realizing hypergraph access structures that are chosen uniformly at random under a natural set of distributions with share size $2^{0.491n+o(n)}$, compared to the previous result of $2^{0.5n+o(n)}$.
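Since the computational construction reinterprets Shamir secret-sharing, a minimal standard Shamir sketch is given below for orientation (the prime and parameters are illustrative; the paper's non-standard viewpoint, sharing many secrets with different thresholds at low cost, is not reproduced here):

    # Minimal standard Shamir sharing over GF(P): a random degree-(k-1)
    # polynomial with constant term `secret` is evaluated at points 1..n;
    # any k shares recover the secret by Lagrange interpolation at 0.
    import random

    P = 2**61 - 1   # a Mersenne prime, chosen here only for illustration

    def share(secret, k, n):
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
                for i in range(1, n + 1)]

    def reconstruct(shares):
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    assert reconstruct(share(12345, k=3, n=5)[:3]) == 12345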
Last updated:  2025-01-20
Conditional Constant Function Problem and Its Quantum Solutions: Attacking Feistel Ciphers
Zhenqiang Li, Shuqin Fan, Fei Gao, Yonglin Hao, Xichao Hu, Linchun Wan, Hongwei Sun, and Qi Su
In this paper, we define the conditional constant function problem (CCFP) and, for a special case of CCFP, propose a quantum algorithm that solves it efficiently. Such an algorithm enables us to make new evaluations of the quantum security of Feistel block ciphers in the case where a quantum adversary only has the ability to make online queries in a classical manner, which is relatively realistic. Specifically, we significantly improve the chosen-plaintext key recovery attacks on the two Feistel block cipher variants known as Feistel-KF and Feistel-FK. For Feistel-KF, we construct a 3-round distinguisher based on the special case of CCFP and mount key recovery attacks on $r>3$ rounds. For Feistel-FK, our CCFP-based distinguisher covers 4 rounds, and the key recovery attacks are applicable to $r>4$ rounds. Utilizing our CCFP-solving algorithm, we are able to reduce the classical memory complexity of our key recovery attacks from the previous exponential $O(2^{cn})$ to $O(1)$. The query complexity of our key recovery attacks on Feistel-KF is also significantly reduced, from $O(2^{cn})$ to $O(1)$, where the $c$'s are constants. Our key recovery results achieve the currently optimal complexities. They also indicate that quantum algorithms solving CCFP could be more promising than those solving the period finding problem.
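To fix notation for the two variants (following the common convention that Feistel-KF XORs the round key into the input of the public round function $F$, while Feistel-FK XORs it into the output; the round function and word size below are placeholder assumptions):

    # Minimal sketch of the two Feistel variants, assuming the usual
    # convention: KF computes F(x XOR k), FK computes F(x) XOR k, for a
    # public round function F. F and the word size are placeholders.
    def feistel_kf(L, R, round_keys, F):
        for k in round_keys:
            L, R = R, L ^ F(R ^ k)
        return L, R

    def feistel_fk(L, R, round_keys, F):
        for k in round_keys:
            L, R = R, L ^ F(R) ^ k
        return L, R

    # Example with a toy 16-bit round function:
    F = lambda x: (x * 0x9E37 + 0x79B9) & 0xFFFF
    print(feistel_kf(1, 2, [3, 4, 5], F))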
Last updated:  2025-01-20
Enhancing Threshold Group Action Signature Schemes: Adaptive Security and Scalability Improvements
Michele Battagliola, Giacomo Borin, Giovanni Di Crescenzo, Alessio Meneghetti, and Edoardo Persichetti
Designing post-quantum digital signatures is a very active research area at present, with several protocols being developed based on a variety of mathematical assumptions. Many of these signature schemes can be used as a basis to define more advanced schemes, such as ring or threshold signatures, where multiple parties are involved in the signing process. Unfortunately, the majority of these protocols only consider a static adversary, which must declare which parties to corrupt at the beginning of the execution. However, a stronger security notion can be achieved, namely security against adaptive adversaries, which can corrupt parties at any time. In this paper we tackle the challenge of designing a post-quantum adaptively secure threshold signature scheme: starting from the GRASS signature scheme, which is only statically secure, we show that it is possible to turn it into an adaptively secure threshold signature that we call GRASS+. In particular, we introduce two variants of the classical GAIP problem and discuss their security. We prove that our protocol is adaptively secure in the Random Oracle Model if the adversary corrupts only $t/2$ parties. We are also able to prove that GRASS+ achieves full adaptive security, with a corruption threshold of $t$, in the Black Box Group Action Model with Random Oracle. Finally, we improve the performance of the scheme by exploiting a better secret sharing, inspired by the work of Desmedt, Di Crescenzo, and Burmester from ASIACRYPT '94.
Last updated:  2025-01-20
New Quantum Cryptanalysis of Binary Elliptic Curves (Extended Version)
Kyungbae Jang, Vikas Srivastava, Anubhab Baksi, Santanu Sarkar, and Hwajeong Seo
This paper improves upon the quantum circuits required for Shor's attack on binary elliptic curves. We present two types of quantum point addition, taking both qubit count and circuit depth into consideration. In summary, we propose an in-place point addition that improves upon the work of Banegas et al. from CHES'21, reducing the qubit count–depth product by more than $73\%$–$81\%$, depending on the variant. Furthermore, we develop an out-of-place point addition by using additional qubits. This method achieves the lowest circuit depth and offers an improvement of over $92\%$ in the qubit count–depth product (for a single step). To the best of our knowledge, our work improves upon all previous works (including the CHES'21 paper by Banegas et al., the IEEE Access'22 paper by Putranto et al., and the CT-RSA'23 paper by Taguchi and Takayasu) in terms of circuit depth and the qubit count–depth product. Equipped with these implementations, we discuss the post-quantum security of binary elliptic curve cryptography. Under the MAXDEPTH metric (proposed by the US government's NIST), the quantum circuit with the highest depth in our work is $2^{24}$, which is significantly lower than the MAXDEPTH limit of $2^{40}$. For the gate count–full depth product, a metric for estimating quantum attack cost (proposed by NIST), the highest complexity in our work is $2^{60}$ for the curve of degree 571 (which is comparable to AES-256 in terms of classical security), considerably below the post-quantum security level 1 threshold (of the order of $2^{156}$).
Last updated:  2025-01-20
Arbitrary-Threshold Fully Homomorphic Encryption with Lower Complexity
Yijia Chang and Songze Li
Threshold fully homomorphic encryption (ThFHE) enables multiple parties to compute functions over their sensitive data without leaking data privacy. Most existing ThFHE schemes are restricted to full threshold and require the participation of all parties to output computing results. Compared with these full-threshold schemes, arbitrary-threshold (ATh)-FHE schemes are robust to non-participants and can be a promising solution for many real-world applications. However, existing AThFHE schemes are either inefficient when applied with a large number of parties $N$ and a large data size $K$, or insufficient to tolerate all types of non-participants. In this paper, we propose an AThFHE scheme that handles all types of non-participants with lower complexity than existing schemes. At the core of our scheme is a reduction from AThFHE construction to the design of a new primitive called approximate secret sharing (ApproxSS). In particular, we formulate ApproxSS and prove the correctness and security of AThFHE on top of the properties of arbitrary-threshold (ATh)-ApproxSS. This reduction reveals that existing AThFHE schemes implicitly design ATh-ApproxSS following a similar idea, called ``noisy share''. Nonetheless, their ATh-ApproxSS designs have high complexity and become the performance bottleneck. By developing ATASSES, an ATh-ApproxSS scheme based on a novel ``encrypted share'' idea, we reduce the computation (resp. communication) complexity from $\mathcal{O}(N^2K)$ to $\mathcal{O}(N^2+K)$ (resp. from $\mathcal{O}(NK)$ to $\mathcal{O}(N+K)$). We not only theoretically prove the (approximate) correctness and security of ATASSES, but also empirically evaluate its efficiency against existing baselines. In particular, when applied to a system with one thousand parties, ATASSES achieves a speedup of $3.83\times$ -- $15.4\times$ over the baselines.
Last updated:  2025-01-20
Recover from Excessive Faults in Partially-Synchronous BFT SMR
Tiantian Gong, Gustavo Franco Camilo, Kartik Nayak, Andrew Lewis-Pye, and Aniket Kate
Byzantine fault-tolerant (BFT) state machine replication (SMR) protocols form the basis of modern blockchains, as they maintain a consistent state across all blockchain nodes while tolerating a bounded number of Byzantine faults. We analyze BFT SMR in the excessive-fault setting, where the actual number of Byzantine faults surpasses a protocol's tolerance. We start by devising the first repair algorithm for linearly chained, quorum-based, partially synchronous SMR to recover from faulty states caused by excessive faults. Such a procedure can be realized using any commission fault detection module -- an algorithm that identifies the faulty replicas without falsely locating any correct replica. We achieve this with a slightly weaker liveness guarantee, as the original security notion is impossible to satisfy given excessive faults. We implement recoverable HotStuff in Rust. The throughput resumes to the normal level (without excessive faults) after the recovery routines terminate for $7$ replicas, and is slightly reduced, by $\leq 4.3\%$, for $30$ replicas. On average, recovery increases the latency by $12.87\%$ for $7$ replicas and $8.85\%$ for $30$ replicas. Aside from adopting existing detection modules, we also establish a sufficient condition for a general BFT SMR protocol to allow for complete and sound fault detection when up to $(n-2)$ Byzantine replicas (out of $n$ total replicas) attack safety. We begin by providing the first closed-box fault detection algorithm for any SMR protocol, without any extra rounds of communication. We then describe open-box instantiations of our fault detection routines in Tendermint and HotStuff, further reducing the overhead, both asymptotically and concretely.
Last updated:  2025-01-19
DSKE: Digital Signatures with Key Extraction
Zhipeng Wang, Orestis Alpos, Alireza Kavousi, Harry W. H. Wong, Sze Yiu Chau, Duc V. Le, and Christian Cachin
This work introduces DSKE, digital signatures with key extraction. In a DSKE scheme, the private key can be extracted if more than a threshold of signatures on different messages are ever created while, within the threshold, each signature continues to authenticate the signed message. We give a formal definition of DSKE, as well as two provably secure constructions, one from hash-based digital signatures and one from polynomial commitments. We demonstrate that DSKE is useful for various applications, such as spam prevention and deniability. First, we introduce the GroupForge signature scheme, leveraging DSKE constructions to achieve deniability in digital communication. GroupForge integrates DSKE with a Merkle tree and timestamps to produce a ``short-lived'' signature equipped with extractable sets, ensuring deniability under a fixed public key. We illustrate that GroupForge can serve as a viable alternative to Keyforge in the non-attributable email protocol of Specter, Park, and Green (USENIX Sec '21), thereby eliminating the need for continuous disclosure of outdated private keys. Second, we leverage the inherent extraction property of DSKE to develop a Rate-Limiting Nullifier (RLN) scheme. RLN efficiently identifies and expels spammers once they exceed a predetermined action threshold, thereby jeopardizing their private keys. Moreover, we implement both variants of the DSKE scheme to demonstrate their performance and show it is comparable to existing signature schemes. We also implement GroupForge from the polynomial commitment-based DSKE and illustrate the practicality of our proposed method.
Last updated:  2025-01-19
Integer Commitments, Old and New Tools
Iftach Haitner, Yehuda Lindell, and Nikolaos Makriyannis
This self-contained and detailed tutorial covers RSA-based integer commitments and related protocols. It also presents a new, highly efficient setup protocol for sampling commitment parameters.
Last updated:  2025-01-19
Attestation Proof of Association – provability that attestation keys are bound to the same hardware and person
Eric Verheul
We propose a wallet-provider-issued attestation called Wallet Trust Evidence (WTE) and three related specific instructions for the European Digital Identity (EUDI) Wallet cryptographic hardware, most notably the generation of a Proof of Association (PoA). These allow the EUDI Wallet to provide verifiable assurance to third parties (issuers, relying parties) that attestation private keys are not only bound to conformant cryptographic hardware, but are also bound to the same such hardware. This allows the EUDI Wallet to meet eIDAS Level of Assurance ``high'' and to operate in a privacy-friendly manner. The instructions specified in this document cater for convenient implementation in all envisioned EUDI Wallet architectures, including those based on a GlobalPlatform-based Secure Element such as an eID card or an embedded SIM (eSIM). By their simplicity, the three instructions also allow for convenient Common Criteria certification. This document is a further refinement and cryptographic concretization of the WTE/PoA logic specified in the wallet Architecture and Reference Framework (ARF), which is based on the EPIC-09 result developed in a cooperation between the NI-Scy consortium and the eIDAS expert group. However, the present draft document is meant for discussion only and is not approved by the NI-Scy consortium, the eIDAS expert group, or the Dutch government. This paper concentrates on irrefutable PoAs, but also indicates how refutable PoAs can be formed, providing plausible deniability, which can be beneficial in some use cases. As a side note, this paper introduces in an annex the construction of Self-Generated Verifiable Pseudonyms (SGVPs). These allow a wallet/user to generate pseudonyms based on information agreed with a relying party, e.g., a URL, and to prove that these are correctly formed. Together with the Proof of Association, this allows cryptographically binding (disclosed parts of) attestations to these pseudonyms. This enables various use cases, such as an employee representing an organisation in a privacy-friendly way using a chamber of commerce attestation cryptographically bound to a separate SGVP pseudonym. Such functionality currently forms the privacy basis of the Dutch eRecognition scheme (eherkenning.nl).