TY - GEN
T1 - Worst and Average Case Hardness of Decoding via Smoothing Bounds
AU - Debris-Alazard, Thomas
AU - Resch, Nicolas
N1 - Publisher Copyright:
© International Association for Cryptologic Research 2025.
PY - 2025/1/1
Y1 - 2025/1/1
N2 - In this work, we consider worst and average case hardness of decoding problems that are the basis for code-based cryptography. By a decoding problem, we refer to problems that take inputs of the form (G, mG + t) for a matrix G (which generates a code) and a noise vector t, and the goal is to recover m. We consider a natural strategy for creating a reduction to an average-case problem: from our input we simulate a Learning Parity with Noise (LPN) oracle, where we recall that LPN is essentially an average-case decoding problem where there is no a priori lower bound on the rate of the code. More formally, the oracle Ox outputs independent samples of the form ⟨x, a⟩ + e, where a is a uniformly random vector and e is a noise bit. Such an approach is implicit in the previous worst-case to average-case reductions for coding problems (Brakerski et al. Eurocrypt 2019, Yu and Zhang CRYPTO 2021). To analyze the effectiveness of this reduction, we use a smoothing bound derived recently by Debris-Alazard et al. (IEEE IT 2023), which quantifies the simulation error of this reduction. It is worth noting that this latter work crucially uses a bound, known as the second linear programming bound, on the weight distribution of the code generated here by G. Our approach, which is Fourier-analytic in nature, applies to any smoothing distribution (so long as it is radial); for our purposes, the best choice appears to be Bernoulli (although for the analysis it is most effective to study the uniform distribution over a sphere, and subsequently translate the bound back to the Bernoulli distribution by applying a truncation trick). Our approach works naturally when reducing from a worst-case instance, as well as from an average-case instance. While we are unable to improve the parameters of the worst-case to average-case reductions of Brakerski et al. or Yu and Zhang, we think that our work highlights two important points. Firstly, in analyzing the average-case to average-case reduction we run into inherent limitations of this reduction template. Essentially, it appears hopeless to reduce to an LPN instance for which the noise rate is more than inverse-polynomially biased away from uniform. We furthermore uncover a surprising weakness in the second linear programming bound: we observe that it is essentially useless for the regime of parameters where the rate of the code is inverse polynomial in the block-length. By highlighting these shortcomings, we hope to stimulate the development of new techniques for reductions between cryptographic decoding problems.
AB - In this work, we consider worst and average case hardness of decoding problems that are the basis for code-based cryptography. By a decoding problem, we refer to problems that take inputs of the form (G, mG + t) for a matrix G (which generates a code) and a noise vector t, and the goal is to recover m. We consider a natural strategy for creating a reduction to an average-case problem: from our input we simulate a Learning Parity with Noise (LPN) oracle, where we recall that LPN is essentially an average-case decoding problem where there is no a priori lower bound on the rate of the code. More formally, the oracle Ox outputs independent samples of the form ⟨x, a⟩ + e, where a is a uniformly random vector and e is a noise bit. Such an approach is implicit in the previous worst-case to average-case reductions for coding problems (Brakerski et al. Eurocrypt 2019, Yu and Zhang CRYPTO 2021). To analyze the effectiveness of this reduction, we use a smoothing bound derived recently by Debris-Alazard et al. (IEEE IT 2023), which quantifies the simulation error of this reduction. It is worth noting that this latter work crucially uses a bound, known as the second linear programming bound, on the weight distribution of the code generated here by G. Our approach, which is Fourier-analytic in nature, applies to any smoothing distribution (so long as it is radial); for our purposes, the best choice appears to be Bernoulli (although for the analysis it is most effective to study the uniform distribution over a sphere, and subsequently translate the bound back to the Bernoulli distribution by applying a truncation trick). Our approach works naturally when reducing from a worst-case instance, as well as from an average-case instance. While we are unable to improve the parameters of the worst-case to average-case reductions of Brakerski et al. or Yu and Zhang, we think that our work highlights two important points. Firstly, in analyzing the average-case to average-case reduction we run into inherent limitations of this reduction template. Essentially, it appears hopeless to reduce to an LPN instance for which the noise rate is more than inverse-polynomially biased away from uniform. We furthermore uncover a surprising weakness in the second linear programming bound: we observe that it is essentially useless for the regime of parameters where the rate of the code is inverse polynomial in the block-length. By highlighting these shortcomings, we hope to stimulate the development of new techniques for reductions between cryptographic decoding problems.
UR - https://www.scopus.com/pages/publications/105005943537
U2 - 10.1007/978-3-031-91823-0_12
DO - 10.1007/978-3-031-91823-0_12
M3 - Conference contribution
AN - SCOPUS:105005943537
SN - 9783031918223
T3 - Lecture Notes in Computer Science
SP - 363
EP - 392
BT - Public-Key Cryptography – PKC 2025 – 28th IACR International Conference on Practice and Theory of Public-Key Cryptography, 2025, Proceedings
A2 - Jager, Tibor
A2 - Pan, Jiaxin
PB - Springer Science and Business Media Deutschland GmbH
T2 - 28th IACR International Conference on Practice and Theory of Public Key Cryptography, PKC 2025
Y2 - 12 May 2025 through 15 May 2025
ER -