Post-Quantum Cryptography: Lattice-Based Cryptography and Mathematical Foundations
A comprehensive deep-dive into lattice-based cryptography, the mathematical backbone of post-quantum security. Explore the geometry of lattices, the Learning With Errors problem, and how ML-KEM and ML-DSA achieve quantum resistance.

- The Geometry of Lattices
- Definition and Basic Properties
- Successive Minima and the Shortest Vector
- The Gram-Schmidt Orthogonalization
- Computational Lattice Problems
- The Shortest Vector Problem (SVP)
- The Closest Vector Problem (CVP)
- The Shortest Independent Vectors Problem (SIVP)
- The Learning With Errors Problem
- Problem Definition
- Matrix Formulation
- Regev's Security Reduction
- Parameter Trade-offs
- Structured Lattices: Ring-LWE and Module-LWE
- Ring-LWE
- Module-LWE: The Best of Both Worlds
- Worst-Case to Average-Case Reductions
- Ajtai's Breakthrough (1996)
- Regev's Reduction for LWE (2005)
- Classical Reduction via Peikert
- What the Reductions Mean
- The Number Theoretic Transform
- From FFT to NTT
- NTT Definition
- NTT-Based Polynomial Multiplication
- NTT Domain Representation
- ML-KEM (CRYSTALS-Kyber): In-Depth
- The IND-CPA Encryption Scheme
- The Fujisaki-Okamoto Transform
- ML-KEM Parameter Sets
- ML-DSA (CRYSTALS-Dilithium): In-Depth
- The Fiat-Shamir with Aborts Paradigm
- Dilithium Construction
- The Rejection Sampling Analysis
- ML-DSA Parameter Sets
- Lattice Basis Reduction Algorithms
- The LLL Algorithm
- The BKZ Algorithm
- Known Attack Strategies
- Security Analysis and Parameter Selection
- Security Estimation
- Conservative vs. Aggressive Parameters
- Side-Channel Considerations
- The Falcon Signature Scheme
- NTRU Lattices
- GPV Framework
- Fast Fourier Sampling
- Comparison with ML-DSA
- Conclusion and Series Overview
- Coming in Part 3: Alternative Mathematical Frameworks
- References and Further Reading
- Foundational Papers
- NIST Standards and Reports
- Surveys and Textbooks
- Implementation Resources
- Appendix: Mathematical Reference
- Key Inequalities
- Polynomial Ring Arithmetic
- Discrete Gaussian Distribution
- Complexity Summary
Series: Post-Quantum Cryptography
- Part 1: Post-Quantum Cryptography: The Quantum Threat and Cryptographic Foundations
- Part 2: Post-Quantum Cryptography: Lattice-Based Cryptography and Mathematical Foundations (this article)
- Part 3: Post-Quantum Cryptography: Alternative Mathematical Frameworks and Cryptanalytic Lessons
Lattice-based cryptography represents the most mature and widely deployed approach to post-quantum security. NIST's primary standardized algorithms, ML-KEM (FIPS 203) and ML-DSA (FIPS 204), are both built on the mathematical hardness of lattice problems, specifically the Module Learning With Errors (MLWE) problem.
In Part 1 of this series, we established the quantum threat to classical cryptography: Shor's algorithm breaks RSA and ECC in polynomial time, while Grover's algorithm halves security margins for symmetric schemes. We identified that post-quantum security requires problems fundamentally different from factoring and discrete logarithms: problems without the periodic structure that quantum algorithms exploit.
Lattice problems provide exactly this. Unlike the algebraic structures underlying RSA (the multiplicative group modulo $N$) or ECC (elliptic curve point groups), lattices are geometric objects where the relevant hard problems, finding short vectors or solving noisy linear equations, have no known efficient quantum algorithms.
This article provides an exhaustive mathematical treatment of lattice-based cryptography, from the foundational geometry through the modern constructions standardized by NIST.
The Geometry of Lattices
Definition and Basic Properties
A lattice $\mathcal{L}$ is a discrete additive subgroup of $\mathbb{R}^n$. Equivalently, given a set of linearly independent vectors $\mathbf{b}_1, \dots, \mathbf{b}_m \in \mathbb{R}^n$ where $m \le n$, the lattice generated by them is:

$$\mathcal{L}(\mathbf{b}_1, \dots, \mathbf{b}_m) = \left\{ \sum_{i=1}^{m} x_i \mathbf{b}_i : x_i \in \mathbb{Z} \right\}$$

The vectors $\mathbf{b}_1, \dots, \mathbf{b}_m$ form a basis of the lattice. When $m = n$, we call the lattice full-rank.
Key observations:
- Non-unique basis: Every lattice has infinitely many bases. For any unimodular matrix $U \in \mathbb{Z}^{n \times n}$ (integer matrix with determinant $\pm 1$), the basis $UB$ generates the same lattice as $B$.
- Fundamental parallelepiped: The region $\mathcal{P}(B) = \{ \sum_i x_i \mathbf{b}_i : x_i \in [0, 1) \}$ tiles $\mathbb{R}^n$ and has volume equal to $|\det(B)|$.
- Determinant invariance: While bases vary, the lattice determinant $\det(\mathcal{L}) = |\det(B)|$ is invariant across all bases.
Successive Minima and the Shortest Vector
For a lattice $\mathcal{L}$, the $i$-th successive minimum $\lambda_i(\mathcal{L})$ is the smallest radius of a ball centered at the origin containing $i$ linearly independent lattice vectors:

$$\lambda_i(\mathcal{L}) = \min \left\{ r : \dim\big(\operatorname{span}(\mathcal{L} \cap B(0, r))\big) \ge i \right\}$$

where $B(0, r)$ is the ball of radius $r$.

The first successive minimum $\lambda_1(\mathcal{L})$ is the length of the shortest non-zero lattice vector:

$$\lambda_1(\mathcal{L}) = \min_{\mathbf{v} \in \mathcal{L} \setminus \{\mathbf{0}\}} \|\mathbf{v}\|$$

Minkowski's Theorem provides a fundamental bound:

$$\lambda_1(\mathcal{L}) \le \sqrt{n} \cdot \det(\mathcal{L})^{1/n}$$

This bound is tight up to constants for "random" lattices, but finding a vector achieving it is computationally hard.
The Gram-Schmidt Orthogonalization
Given a basis $\mathbf{b}_1, \dots, \mathbf{b}_n$, the Gram-Schmidt orthogonalization (GSO) produces orthogonal vectors $\tilde{\mathbf{b}}_1, \dots, \tilde{\mathbf{b}}_n$:

$$\tilde{\mathbf{b}}_i = \mathbf{b}_i - \sum_{j < i} \mu_{i,j} \tilde{\mathbf{b}}_j, \qquad \mu_{i,j} = \frac{\langle \mathbf{b}_i, \tilde{\mathbf{b}}_j \rangle}{\|\tilde{\mathbf{b}}_j\|^2}$$

The GSO vectors do not form a lattice basis (they typically have non-integer coefficients), but they characterize basis quality:
- Good basis: GSO vectors have similar lengths
- Bad basis: GSO lengths decay rapidly (last vectors much shorter)
The quality of a basis directly impacts the difficulty of lattice problems: a more orthogonal basis makes finding short vectors easier.
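The GSO computation can be sketched in a few lines of plain Python. The two bases below are hypothetical toy examples chosen to generate the same 2D lattice, so the contrast in GSO lengths is visible:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return dot(v, v) ** 0.5

def gram_schmidt(basis):
    """Gram-Schmidt orthogonalization (no normalization) of a list of vectors."""
    ortho = []
    for b in basis:
        b_star = [float(x) for x in b]
        for g in ortho:
            mu = dot(b, g) / dot(g, g)              # projection coefficient mu_{i,j}
            b_star = [x - mu * y for x, y in zip(b_star, g)]
        ortho.append(b_star)
    return ortho

# Two bases for the same lattice (bad = U * good for a unimodular U):
good = [[1, 0], [0, 5]]
bad = [[1, 5], [2, 15]]
for name, B in (("good", good), ("bad", bad)):
    print(name, [round(norm(v), 3) for v in gram_schmidt(B)])
# good: similar GSO lengths [1.0, 5.0]; bad: decaying lengths [5.099, 0.981]
```

Note that the product of GSO lengths equals the lattice determinant (here 5) for both bases, illustrating determinant invariance.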
Computational Lattice Problems
The Shortest Vector Problem (SVP)
Exact SVP: Given a basis for lattice $\mathcal{L}$, find a non-zero vector $\mathbf{v} \in \mathcal{L}$ with $\|\mathbf{v}\| = \lambda_1(\mathcal{L})$.
Approximate SVP ($\gamma$-SVP): Find a non-zero vector $\mathbf{v} \in \mathcal{L}$ with $\|\mathbf{v}\| \le \gamma \cdot \lambda_1(\mathcal{L})$ for approximation factor $\gamma \ge 1$.
| Approximation Factor | Known Complexity | Relevance |
|---|---|---|
| $\gamma = 1$ (exact) | NP-hard under randomized reductions | Theoretical |
| $\gamma = \mathrm{poly}(n)$ | No known polynomial algorithm | Cryptographic |
| $\gamma = 2^{O(n)}$ | Polynomial time (LLL) | Insufficient |
| $\gamma = O(1)$ | Exponential time (enumeration) | Attack baseline |
Cryptographic security relies on the gap between what's achievable in polynomial time (exponential approximation factors) and what's needed to break schemes (polynomial approximation factors). This gap appears robust against quantum algorithms.
The Closest Vector Problem (CVP)
Exact CVP: Given a basis for lattice $\mathcal{L}$ and a target vector $\mathbf{t} \in \mathbb{R}^n$, find the lattice vector closest to $\mathbf{t}$:

$$\mathrm{CVP}(\mathcal{L}, \mathbf{t}) = \operatorname*{arg\,min}_{\mathbf{v} \in \mathcal{L}} \|\mathbf{v} - \mathbf{t}\|$$
CVP and SVP are intimately related:
- CVP is at least as hard as SVP: There's an efficient reduction from SVP to CVP
- SVP is a special case: SVP asks for the closest vector to the origin, excluding the origin itself
- Both are NP-hard: Under appropriate reductions for exact and approximate versions
The Shortest Independent Vectors Problem (SIVP)
$\gamma$-SIVP: Given a basis for an $n$-dimensional lattice $\mathcal{L}$, find $n$ linearly independent lattice vectors $\mathbf{v}_1, \dots, \mathbf{v}_n$ such that:

$$\max_i \|\mathbf{v}_i\| \le \gamma \cdot \lambda_n(\mathcal{L})$$
SIVP is crucial for security reductions: Ajtai's and Regev's reductions connect average-case cryptographic problems to worst-case SIVP.
The Learning With Errors Problem
The Learning With Errors (LWE) problem, introduced by Oded Regev in 2005, is the cryptographic foundation of modern lattice-based schemes. It provides a clean, algebraic formulation with provable connections to worst-case lattice problems.
Problem Definition
Let $q$ be a modulus (typically prime), and let $\chi$ be an error distribution over $\mathbb{Z}$ (typically a discrete Gaussian).

Search-LWE: Given polynomially many samples of the form:

$$\left( \mathbf{a}_i, \; b_i = \langle \mathbf{a}_i, \mathbf{s} \rangle + e_i \bmod q \right)$$

where $\mathbf{a}_i \in \mathbb{Z}_q^n$ is uniformly random, $\mathbf{s} \in \mathbb{Z}_q^n$ is a fixed secret, and $e_i \leftarrow \chi$ is a "small" error, find $\mathbf{s}$.

Decision-LWE: Distinguish LWE samples from uniform random pairs $(\mathbf{a}_i, u_i)$ where $u_i \in \mathbb{Z}_q$ is uniform.
Matrix Formulation
In matrix form, LWE becomes: given

$$\left( A, \; \mathbf{b} = A\mathbf{s} + \mathbf{e} \bmod q \right)$$

where $A \in \mathbb{Z}_q^{m \times n}$, $\mathbf{s} \in \mathbb{Z}_q^n$, $\mathbf{e} \leftarrow \chi^m$, find $\mathbf{s}$ or distinguish $\mathbf{b}$ from random.

This is essentially a system of $m$ linear equations in $n$ unknowns, but each equation is perturbed by noise. Without the error terms $e_i$, Gaussian elimination would solve it in polynomial time. The errors, though small, leave the system information-theoretically solvable but computationally intractable.
The noise prevents algebraic attacks (Gaussian elimination fails). The "small noise hiding in large modular reductions" creates computational hardness that resists known quantum algorithms. No quantum Fourier transform technique extracts the periodic structure needed for a Shor-like attack.
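The failure of Gaussian elimination can be demonstrated concretely. The toy parameters below are illustrative only, nowhere near secure: with zero error, modular elimination recovers the secret exactly; with even tiny errors, it lands on the wrong answer.

```python
import random

def lwe_samples(n, m, q, s, err_bound):
    """m samples (A, b = A s + e mod q) with errors drawn from [-err_bound, err_bound]."""
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.randint(-err_bound, err_bound) for _ in range(m)]
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return A, b, e

def solve_mod(A, b, q):
    """Gaussian elimination over Z_q (q prime); None if the system is singular."""
    n = len(A[0])
    M = [A[i][:n] + [b[i]] for i in range(n)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] % q), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, q)
        M[col] = [x * inv % q for x in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(x - f * y) % q for x, y in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

random.seed(1)
q, n = 97, 3
s = [random.randrange(q) for _ in range(n)]

# Noiseless system: plain linear algebra recovers the secret.
while True:
    A0, b0, _ = lwe_samples(n, n, q, s, err_bound=0)
    sol = solve_mod(A0, b0, q)
    if sol is not None:
        break
print(sol == s)   # True

# Noisy system: the same elimination lands on a wrong "secret".
while True:
    A1, b1, e1 = lwe_samples(n, n, q, s, err_bound=2)
    if any(e1) and solve_mod(A1, b1, q) is not None:
        break
print(solve_mod(A1, b1, q) == s)   # False
```

With a non-zero error vector and an invertible $A$, elimination returns $\mathbf{s} + A^{-1}\mathbf{e} \ne \mathbf{s}$: the small noise is amplified into a uniformly wrong answer by the modular inverse.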
Regev's Security Reduction
Theorem (Regev 2005): For appropriate parameter choices, solving LWE on average is at least as hard as solving $\tilde{O}(n/\alpha)$-SIVP in the worst case, where $\alpha \in (0, 1)$ is the relative Gaussian error width (subject to $\alpha q > 2\sqrt{n}$).
This is a quantum reduction: an efficient LWE algorithm implies an efficient quantum algorithm for approximate SIVP. Peikert later provided a classical reduction from GapSVP to LWE.
The reduction works by showing that an LWE oracle can be used to iteratively sample from discrete Gaussian distributions on lattices, which in turn solves SIVP.
Implications:
- If $\tilde{O}(n/\alpha)$-SIVP is hard (no efficient quantum algorithm known), then LWE is hard
- Breaking any LWE-based cryptosystem with suitable parameters would break a fundamental lattice problem
- This is worst-case hardness: breaking average-case LWE would solve even the hardest SIVP instances
Parameter Trade-offs
LWE security depends on the interplay of:
| Parameter | Symbol | Effect on Security | Effect on Efficiency |
|---|---|---|---|
| Dimension | $n$ | ↑ Higher → harder | ↓ Larger keys/operations |
| Modulus | $q$ | ↓ Larger → easier attacks | ↑ Larger values, more bits |
| Error width | $\sigma$ | ↑ Larger → harder | ↓ Less margin for correctness |
| Samples | $m$ | Initially ↑, then saturates | ↓ More computation |
The error-to-modulus ratio $\alpha = \sigma / q$ is critical:
- Too small → not enough noise, algebraic attacks succeed
- Too large → decryption fails (errors overwhelm signal)
- Sweet spot → cryptographic security with high probability correctness
Structured Lattices: Ring-LWE and Module-LWE
Standard LWE requires $O(n^2)$ storage for the public matrix and $O(n^2)$ operations for matrix-vector products. For practical cryptosystems, this is prohibitive.
Structured variants exploit algebraic structure to achieve efficiency while (hopefully) preserving security.
Ring-LWE
Ring-LWE works over the polynomial ring $R_q = \mathbb{Z}_q[X]/(f(X))$ where $f(X)$ is typically $X^n + 1$ with $n$ a power of 2.
Elements: Polynomials of degree less than $n$ with coefficients in $\mathbb{Z}_q$:

$$a(X) = a_0 + a_1 X + \dots + a_{n-1} X^{n-1}$$

Operations: Addition is coefficient-wise. Multiplication is polynomial multiplication modulo $X^n + 1$ (and modulo $q$).
Ring-LWE Problem: Given samples $(a_i, \; b_i = a_i \cdot s + e_i) \in R_q \times R_q$ with secret $s \in R_q$ and small errors $e_i$, find $s$ or distinguish the samples from random.
Advantages:
- Compact keys: The public key is a single ring element ($n$ coefficients), not an $n \times n$ matrix
- Fast operations: Polynomial multiplication can be done in $O(n \log n)$ using the Number Theoretic Transform (NTT)
Security Considerations:
Ring-LWE's security reduces to worst-case problems on ideal lattices, a special class of lattices with additional algebraic structure. There's ongoing debate about whether this structure could be exploited by specialized attacks.
Module-LWE: The Best of Both Worlds
Module-LWE generalizes Ring-LWE by working with vectors and matrices of ring elements.
Setup: Let $R_q = \mathbb{Z}_q[X]/(X^n + 1)$. A Module-LWE instance with module rank $k$ uses:
- A public matrix $A \in R_q^{k \times k}$
- A secret vector $\mathbf{s} \in R_q^k$
- An error vector $\mathbf{e} \in R_q^k$ with small coefficients
Relation to other LWE variants:
| Variant | Matrix Size | Ring Dimension | Total Dimension |
|---|---|---|---|
| Plain LWE | $n \times n$ | 1 | $n$ |
| Ring-LWE | $1 \times 1$ | $n$ | $n$ |
| Module-LWE | $k \times k$ | $n$ | $k \cdot n$ |
Module-LWE with rank $k$ provides a tunable security-efficiency trade-off:
- $k = 1$: Equivalent to Ring-LWE (most efficient, potentially more structure to exploit)
- Large $k$ with small ring dimension: Approaches plain LWE (less efficient, fewer structural concerns)
- $k \in \{2, 3, 4\}$ with $n = 256$: Practical middle ground (used by Kyber/Dilithium)
Module-LWE offers a Goldilocks balance: efficient enough for practical deployment (keys in kilobytes, not megabytes), with security based on problems with less exploitable structure than Ring-LWE, yet still benefiting from algebraic efficiency gains.
Worst-Case to Average-Case Reductions
The crown jewel of lattice-based cryptography is its security proofs: breaking a cryptographic scheme implies breaking a worst-case lattice problem.
Ajtai's Breakthrough (1996)
Miklós Ajtai proved the first such reduction for the Short Integer Solution (SIS) problem.
SIS Problem: Given a random matrix $A \in \mathbb{Z}_q^{n \times m}$, find a short non-zero vector $\mathbf{z} \in \mathbb{Z}^m$ such that:

$$A\mathbf{z} \equiv \mathbf{0} \pmod{q}, \qquad \|\mathbf{z}\| \le \beta$$

Theorem (Ajtai 1996): Solving average-case SIS with $m = \mathrm{poly}(n)$ columns implies solving worst-case $\gamma$-SIVP for some $\gamma = \mathrm{poly}(n)$.
This means:
- If SIS can be efficiently solved for a random $A$, worst-case SIVP can be solved
- Security of SIS-based hash functions rests on foundational lattice hardness
Regev's Reduction for LWE (2005)
Regev extended Ajtai's approach to LWE via a more intricate quantum reduction.
High-level idea:
- Start with a worst-case SIVP instance (a lattice in which we want short independent vectors)
- Use quantum sampling to generate points from the lattice's "smoothed" distribution
- Transform these samples into LWE samples
- Use the LWE oracle to recover information about the lattice
- Iterate to find progressively shorter vectors, solving SIVP
Key insight: Discrete Gaussian distributions on lattices have beautiful mathematical properties. Sampling from them (which quantum computers can help with) provides information about short lattice vectors.
Classical Reduction via Peikert
Peikert (2009) showed a classical reduction from GapSVP to LWE, removing the quantum requirement but with somewhat worse parameters.
GapSVP$_\gamma$: Given a basis, determine whether $\lambda_1(\mathcal{L}) \le 1$ or $\lambda_1(\mathcal{L}) > \gamma$ (promised that one of the two holds).
Theorem (Peikert 2009): For an exponentially large modulus $q$, solving decision-LWE classically implies solving GapSVP classically.
What the Reductions Mean
These reductions provide fundamentally different security guarantees than pre-quantum cryptography:
| Cryptosystem | Security Basis | Type of Hardness |
|---|---|---|
| RSA | Factoring | Average-case only (random moduli) |
| ECC | ECDLP | Average-case only (random curves) |
| LWE-based | SVP/SIVP | Worst-case → average-case reduction |
If someone breaks an LWE-based scheme, they've solved a problem that the entire mathematical community believes is fundamentally hard. This is a qualitatively stronger security foundation.
The Number Theoretic Transform
Efficient lattice cryptography relies on fast polynomial multiplication. The Number Theoretic Transform (NTT) is the modular arithmetic analog of the Fast Fourier Transform.
From FFT to NTT
The FFT computes the Discrete Fourier Transform using complex roots of unity. The NTT instead uses roots of unity in the finite field $\mathbb{Z}_q$.
Requirement: $q$ must be a prime with $q \equiv 1 \pmod{2n}$ (or at least $q \equiv 1 \pmod{n}$ for a cyclic transform), ensuring the needed primitive roots of unity exist in $\mathbb{Z}_q$.
For Kyber, $q = 3329$ and $q - 1 = 3328 = 2^8 \cdot 13$, so primitive $256$-th roots of unity exist.
NTT Definition
Let $\omega$ be a primitive $n$-th root of unity in $\mathbb{Z}_q$ ($\omega^n = 1$, but $\omega^k \ne 1$ for $0 < k < n$).
The NTT of a polynomial $a(X) = \sum_{i=0}^{n-1} a_i X^i$ is:

$$\hat{a}_j = \sum_{i=0}^{n-1} a_i \, \omega^{ij} \bmod q, \qquad j = 0, \dots, n-1$$

The inverse NTT recovers the original coefficients:

$$a_i = n^{-1} \sum_{j=0}^{n-1} \hat{a}_j \, \omega^{-ij} \bmod q$$

where $n^{-1}$ is the modular inverse of $n$ modulo $q$.
NTT-Based Polynomial Multiplication
To multiply polynomials $a, b \in R_q$:
- Forward NTT: Compute $\hat{a} = \mathrm{NTT}(a)$ and $\hat{b} = \mathrm{NTT}(b)$
- Pointwise multiplication: $\hat{c}_j = \hat{a}_j \cdot \hat{b}_j \bmod q$
- Inverse NTT: $c = \mathrm{NTT}^{-1}(\hat{c})$
Complexity:
- Naive polynomial multiplication: $O(n^2)$
- NTT-based multiplication: $O(n \log n)$
For $n = 256$ (Kyber's ring dimension), this cuts the multiplication count by roughly a factor of $n / \log_2 n = 32$.
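The three-step pipeline can be demonstrated end to end. The sketch below is didactic, not constant-time, and uses naive $O(n^2)$ transforms rather than the butterfly structure of production code; it multiplies in $\mathbb{Z}_q[X]/(X^8 + 1)$ with Kyber's modulus $q = 3329$ and a small $n = 8$ for readability, checking the result against schoolbook negacyclic multiplication.

```python
q = 3329
n = 8

# Find a primitive 2n-th root of unity psi mod q (exists since 2n divides q - 1).
# Weighting by powers of psi turns negacyclic convolution (mod X^n + 1)
# into plain cyclic convolution, which the NTT diagonalizes.
psi = next(x for x in range(2, q)
           if pow(x, 2 * n, q) == 1 and pow(x, n, q) != 1)
omega = pow(psi, 2, q)  # primitive n-th root for the cyclic transform

def ntt(a):
    aw = [a[i] * pow(psi, i, q) % q for i in range(n)]      # weight by psi^i
    return [sum(aw[i] * pow(omega, i * j, q) for i in range(n)) % q
            for j in range(n)]

def intt(A):
    n_inv = pow(n, -1, q)
    aw = [n_inv * sum(A[j] * pow(omega, -i * j, q) for j in range(n)) % q
          for i in range(n)]
    return [aw[i] * pow(psi, -i, q) % q for i in range(n)]  # unweight

def ntt_mul(a, b):
    # Forward NTT, pointwise multiply, inverse NTT.
    return intt([x * y % q for x, y in zip(ntt(a), ntt(b))])

def schoolbook_negacyclic(a, b):
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k, sign = (i + j) % n, (-1) ** ((i + j) // n)   # X^n = -1
            c[k] = (c[k] + sign * a[i] * b[j]) % q
    return c

a = [3, 1, 4, 1, 5, 9, 2, 6]
b = [2, 7, 1, 8, 2, 8, 1, 8]
print(ntt_mul(a, b) == schoolbook_negacyclic(a, b))  # True
```

The weighting trick ($\psi^2 = \omega$, $\psi^n = -1$) is exactly why $q \equiv 1 \pmod{2n}$ is required for a full negacyclic NTT.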
NTT Domain Representation
Kyber and Dilithium store polynomials in NTT domain whenever possible:
- Key generation produces NTT-domain keys
- Encryption/signing work in NTT domain
- Only convert back for operations requiring coefficient access
This optimization eliminates redundant NTT/inverse-NTT pairs.
ML-KEM (CRYSTALS-Kyber): In-Depth
ML-KEM (Module-Lattice Key Encapsulation Mechanism), standardized as FIPS 203, is the primary post-quantum key encapsulation mechanism. We now examine its cryptographic construction in detail.
The IND-CPA Encryption Scheme
At its core, ML-KEM builds on an IND-CPA (indistinguishable under chosen-plaintext attack) public-key encryption scheme.
Key Generation
- Generate random matrix $A \in R_q^{k \times k}$ (derived deterministically from a seed for compactness)
- Sample secret $\mathbf{s} \in R_q^k$ and error $\mathbf{e} \in R_q^k$ with small coefficients
- Compute $\mathbf{t} = A\mathbf{s} + \mathbf{e}$
Public key: $(A, \mathbf{t})$ — in practice, $A$ is a seed, $\mathbf{t}$ is explicit
Secret key: $\mathbf{s}$
Encryption
To encrypt message $m \in \{0, 1\}^{256}$:
- Sample ephemeral secret $\mathbf{r} \in R_q^k$ and errors $\mathbf{e}_1 \in R_q^k$, $e_2 \in R_q$ with small coefficients
- Compute:

$$\mathbf{u} = A^T \mathbf{r} + \mathbf{e}_1, \qquad v = \mathbf{t}^T \mathbf{r} + e_2 + \left\lceil \tfrac{q}{2} \right\rfloor \cdot m$$

Ciphertext: $(\mathbf{u}, v)$
The message is encoded by scaling: each bit becomes either $0$ or $\lceil q/2 \rfloor$ in the ciphertext.
Decryption
To decrypt $(\mathbf{u}, v)$:
- Compute $d = v - \mathbf{s}^T \mathbf{u}$
- For each coefficient, check whether it is closer to $0$ or to $q/2$
- Output the corresponding bit
Why it works:
Substituting $\mathbf{t} = A\mathbf{s} + \mathbf{e}$:

$$v - \mathbf{s}^T \mathbf{u} = \mathbf{e}^T \mathbf{r} + e_2 - \mathbf{s}^T \mathbf{e}_1 + \left\lceil \tfrac{q}{2} \right\rfloor \cdot m$$

The error terms are small (bounded by the parameters). The message term dominates, allowing correct recovery.
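The correctness argument can be checked with a toy single-bit scheme in the style of Regev's original plain-LWE encryption (not the actual Module-LWE construction, and with parameters far too small for security):

```python
import random

q, n, m = 3329, 16, 64     # toy parameters, nowhere near secure

def small():
    return random.randint(-2, 2)   # stand-in for the small error distribution

def keygen():
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    s = [small() for _ in range(n)]
    e = [small() for _ in range(m)]
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return (A, b), s

def encrypt(pk, bit):
    A, b = pk
    r = [random.randint(0, 1) for _ in range(m)]   # random subset of samples
    u = [sum(A[i][j] * r[i] for i in range(m)) % q for j in range(n)]
    v = (sum(b[i] * r[i] for i in range(m)) + bit * (q // 2)) % q
    return u, v

def decrypt(s, ct):
    u, v = ct
    d = (v - sum(s[j] * u[j] for j in range(n))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0     # closer to q/2 than to 0?

random.seed(0)
pk, sk = keygen()
print([decrypt(sk, encrypt(pk, bit)) for bit in (0, 1, 1, 0)])  # [0, 1, 1, 0]
```

Here the accumulated noise is at most $64 \cdot 2 = 128$, comfortably below $q/4 = 832$, so decryption is always correct; that margin is exactly the correctness condition the parameters must enforce.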
The Fujisaki-Okamoto Transform
The IND-CPA scheme above is vulnerable to chosen-ciphertext attacks. The Fujisaki-Okamoto (FO) transform upgrades it to IND-CCA2 security.
Encapsulation:
- Generate a random message $m$
- Derive encryption randomness $r$ from $m$ (and a hash of the public key)
- Compute ciphertext $c = \mathrm{Enc}(pk, m; r)$ — deterministic given $(m, r)$
- Compute shared secret $K = \mathrm{KDF}(m, c)$
Decapsulation:
- Decrypt $m' = \mathrm{Dec}(sk, c)$
- Re-encrypt to get $c' = \mathrm{Enc}(pk, m'; r')$ with $r'$ re-derived from $m'$
- Verify: If $c' = c$, return $K = \mathrm{KDF}(m', c)$; otherwise return a pseudorandom value (implicit rejection)
The re-encryption check is crucial: it defeats attacks that submit malformed ciphertexts to extract information about the secret key.
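The control flow of the transform, with its re-encryption check and implicit rejection, can be sketched as follows. The inner "PKE" here is a hypothetical stand-in (a hash-based XOR pad where the same key acts as both $pk$ and $sk$, which is not ML-KEM's encryption and not secure); only the encaps/decaps skeleton reflects the FO structure:

```python
import hashlib, os

def H(*parts):
    h = hashlib.sha3_256()
    for p in parts:
        h.update(p)
    return h.digest()

# Toy deterministic "PKE": Enc(pk, m; r) must be deterministic in (m, r)
# so that re-encryption can be compared bit-for-bit against the ciphertext.
def pke_enc(pk, m, r):
    pad = H(b"enc", pk, r)
    return r + bytes(a ^ b for a, b in zip(m, pad))

def pke_dec(sk, ct):
    r, body = ct[:32], ct[32:]
    pad = H(b"enc", sk, r)           # toy: pk == sk, for structure only
    return bytes(a ^ b for a, b in zip(body, pad))

def encapsulate(pk):
    m = os.urandom(32)
    r = H(b"rand", m, pk)            # randomness derived from m: Enc becomes deterministic
    ct = pke_enc(pk, m, r)
    return ct, H(b"key", m, ct)      # shared secret

def decapsulate(sk, pk, ct, z):
    m = pke_dec(sk, ct)
    if pke_enc(pk, m, H(b"rand", m, pk)) == ct:   # re-encryption check
        return H(b"key", m, ct)
    return H(b"key", z, ct)          # implicit rejection: pseudorandom, never an error

key = os.urandom(32)                 # toy: same key plays pk and sk
ct, K1 = encapsulate(key)
K2 = decapsulate(key, key, ct, os.urandom(32))
print(K1 == K2)  # True
```

A tampered ciphertext decrypts to some $m'$, fails the re-encryption comparison, and yields an unrelated pseudorandom key rather than an explicit failure, so the attacker learns nothing from the decapsulation outcome.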
ML-KEM Parameter Sets
| Parameter | ML-KEM-512 | ML-KEM-768 | ML-KEM-1024 |
|---|---|---|---|
| NIST Level | 1 | 3 | 5 |
| Module rank | 2 | 3 | 4 |
| Ring dimension | 256 | 256 | 256 |
| Modulus | 3329 | 3329 | 3329 |
| Public key (bytes) | 800 | 1,184 | 1,568 |
| Ciphertext (bytes) | 768 | 1,088 | 1,568 |
| Shared secret (bytes) | 32 | 32 | 32 |
NIST security levels correspond to specific attack costs: Level 1 ≈ AES-128, Level 3 ≈ AES-192, Level 5 ≈ AES-256. The module rank $k$ increases with security level, scaling the underlying lattice dimension $k \cdot n$ from $512$ to $768$ to $1024$.
ML-DSA (CRYSTALS-Dilithium): In-Depth
ML-DSA (Module-Lattice Digital Signature Algorithm), standardized as FIPS 204, provides post-quantum digital signatures. It uses a fundamentally different construction from ML-KEM.
The Fiat-Shamir with Aborts Paradigm
Dilithium is based on the Fiat-Shamir transform applied to an identification scheme, with a crucial modification: rejection sampling.
Basic Fiat-Shamir intuition:
- Prover commits to a random value
- Verifier sends a challenge
- Prover responds, proving knowledge without revealing secret
- To sign, the "challenge" becomes a hash of the message
The problem: In lattice-based schemes, the response directly depends on the secret key. Repeated signatures could leak information.
Solution: Rejection sampling. The signer checks if the response would leak information; if so, they restart with fresh randomness.
Dilithium Construction
Key Generation
- Sample random matrix $A \in R_q^{k \times \ell}$ (from a seed)
- Sample secret vectors $\mathbf{s}_1 \in R_q^\ell$ and $\mathbf{s}_2 \in R_q^k$ with "small" coefficients
- Compute $\mathbf{t} = A\mathbf{s}_1 + \mathbf{s}_2$
Public key: $(A, \mathbf{t})$ (or seed + $\mathbf{t}$)
Secret key: $(\mathbf{s}_1, \mathbf{s}_2)$
Signing
To sign message $M$:
- Sample a random "masking" vector $\mathbf{y}$ with coefficients in $[-\gamma_1, \gamma_1]$
- Compute $\mathbf{w} = A\mathbf{y}$ and extract its high-order bits $\mathbf{w}_1$
- Compute challenge $c = H(M \,\|\, \mathbf{w}_1)$, where $c$ is a polynomial with few non-zero ($\pm 1$) coefficients
- Compute response $\mathbf{z} = \mathbf{y} + c\,\mathbf{s}_1$
- Rejection check: If $\mathbf{z}$ has any coefficient of magnitude $\ge \gamma_1 - \beta$, or if the low-order part of $\mathbf{w} - c\,\mathbf{s}_2$ is too large, restart from step 1
- Output signature $(\mathbf{z}, c)$
Signature: $(\mathbf{z}, c, \mathbf{h})$, where $\mathbf{h}$ is a "hint" for efficient verification
Verification
To verify $(\mathbf{z}, c, \mathbf{h})$ with public key $(A, \mathbf{t})$:
- Recover $\mathbf{w}_1' = \mathrm{HighBits}(A\mathbf{z} - c\,\mathbf{t})$ (approximately, using hint $\mathbf{h}$)
- Check $c = H(M \,\|\, \mathbf{w}_1')$
- Check that $\mathbf{z}$ has coefficients in the expected range
The Rejection Sampling Analysis
The rejection step ensures that the distribution of $\mathbf{z} = \mathbf{y} + c\,\mathbf{s}_1$ is statistically independent of the secret $\mathbf{s}_1$.
Without rejection, $\mathbf{z}$ correlates with $c\,\mathbf{s}_1$. After many signatures, an adversary could extract $\mathbf{s}_1$.
With rejection sampling: only values of $\mathbf{z}$ that could have come from many different secrets are output. The distribution is "masked" by the large random vector $\mathbf{y}$.
Expected number of attempts: Roughly 4 to 5 signing attempts per signature, depending on the parameter set. This is the cost of provable security.
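A one-dimensional numerical experiment makes the leak, and the fix, visible. This is heavily simplified: real ML-DSA works with vectors of polynomials and more elaborate bounds, and the parameters here are hypothetical; only the statistical core survives.

```python
import random

random.seed(7)
GAMMA, BETA, s = 100, 3, 2     # mask range, rejection margin, small secret (|c*s| <= BETA)

def sign_no_reject():
    y = random.randint(-GAMMA, GAMMA)      # masking value
    c = random.choice((-1, 0, 1))          # challenge
    return c, y + c * s                    # response z = y + c*s

def sign_with_reject():
    while True:
        c, z = sign_no_reject()
        if abs(z) <= GAMMA - BETA:         # keep only z consistent with *every* small secret
            return c, z

def mean_z_given_c1(signer, trials=200_000):
    zs = [z for c, z in (signer() for _ in range(trials)) if c == 1]
    return sum(zs) / len(zs)

print(round(mean_z_given_c1(sign_no_reject), 2))    # drifts toward the secret s = 2
print(round(mean_z_given_c1(sign_with_reject), 2))  # near 0: independent of s
```

Without rejection, $z$ conditioned on $c = 1$ is uniform on $[-\gamma_1 + s, \gamma_1 + s]$, so its empirical mean converges to the secret. With rejection, the accepted $z$ is uniform on $[-(\gamma_1 - \beta), \gamma_1 - \beta]$ for every admissible secret, which is precisely the independence the security proof needs.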
ML-DSA Parameter Sets
| Parameter | ML-DSA-44 | ML-DSA-65 | ML-DSA-87 |
|---|---|---|---|
| NIST Level | 2 | 3 | 5 |
| Matrix dimensions | 4 × 4 | 6 × 5 | 8 × 7 |
| Public key (bytes) | 1,312 | 1,952 | 2,592 |
| Signature (bytes) | 2,420 | 3,293 | 4,595 |
| Ring dimension | 256 | 256 | 256 |
| Modulus | 8,380,417 | 8,380,417 | 8,380,417 |
Note the larger modulus ($q = 8{,}380{,}417 \approx 2^{23}$) compared to Kyber ($q = 3329 \approx 2^{12}$). This reflects the different algebraic requirements of signatures vs. encryption.
Lattice Basis Reduction Algorithms
Understanding the attacker's tools is essential for security analysis. Lattice reduction algorithms attempt to find short vectors and are the primary attack vector against lattice cryptography.
The LLL Algorithm
The Lenstra-Lenstra-Lovász (LLL) algorithm (1982) is the first polynomial-time lattice reduction algorithm.
Definition: A basis is LLL-reduced with parameter $\delta \in (1/4, 1)$ if:
- Size condition: $|\mu_{i,j}| \le 1/2$ for all $j < i$
- Lovász condition: $\delta \|\tilde{\mathbf{b}}_i\|^2 \le \|\tilde{\mathbf{b}}_{i+1}\|^2 + \mu_{i+1,i}^2 \|\tilde{\mathbf{b}}_i\|^2$
Output guarantee: The first vector of an LLL-reduced basis (with $\delta = 3/4$) satisfies:

$$\|\mathbf{b}_1\| \le 2^{(n-1)/2} \cdot \lambda_1(\mathcal{L})$$

This is an exponential approximation to SVP: good for low dimensions, useless for high dimensions.
Complexity: Polynomial in the dimension and in $\log B$, where $B$ is a bound on the input basis norms.
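The size-reduce-and-swap pattern at the heart of LLL is easiest to see in dimension two, where it becomes the classical Lagrange (Gauss) reduction and finds a shortest vector exactly:

```python
def norm2(v):
    return v[0] * v[0] + v[1] * v[1]

def lagrange_reduce(b1, b2):
    """2D lattice reduction: the n = 2 ancestor of LLL."""
    if norm2(b1) > norm2(b2):
        b1, b2 = b2, b1
    while True:
        # Size-reduce b2 against b1 using the *rounded* Gram-Schmidt coefficient.
        mu = round((b1[0] * b2[0] + b1[1] * b2[1]) / norm2(b1))
        b2 = (b2[0] - mu * b1[0], b2[1] - mu * b1[1])
        if norm2(b2) >= norm2(b1):    # ordered: done
            return b1, b2
        b1, b2 = b2, b1               # swap and repeat

# A deliberately skewed basis for the lattice Z x 5Z:
v1, v2 = lagrange_reduce((1, 5), (2, 15))
print(v1, v2)   # (-1, 0) (0, 5): an orthogonal basis of shortest vectors
```

LLL generalizes exactly this loop to $n$ dimensions, applying size reduction against all earlier vectors and swapping adjacent pairs whenever the Lovász condition fails.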
The BKZ Algorithm
Block Korkine-Zolotarev (BKZ) improves on LLL by solving SVP in projected sublattices of block size $\beta$.
Algorithm outline:
- Apply LLL to get an initial reduced basis
- For each block of $\beta$ consecutive vectors, solve (approximate) SVP in the projected sublattice
- Insert the found short vector, re-reduce, and repeat
- Continue until no improvement
Output quality: With block size $\beta$, BKZ finds a vector with:

$$\|\mathbf{b}_1\| \approx \delta_0^{\,n} \cdot \det(\mathcal{L})^{1/n}$$

where $\delta_0 \approx \left( \frac{\beta}{2\pi e} \right)^{1/(2(\beta - 1))}$ is the root Hermite factor.
Example security implications:
| Block Size | Approximate Factor | Time Complexity | Security Impact |
|---|---|---|---|
| 2 | Exponential (≈ LLL) | Polynomial | None |
| 20 | Moderate | Hours-days | Low dimensions broken |
| 50 | Small | Months-years | Medium security |
| 300+ | Near-optimal | Infeasible | Target for PQC |
Security analysis for NIST PQC uses the "Core-SVP" model: the cost to break a scheme is estimated as $2^{0.292\beta}$ classical operations for the BKZ block size $\beta$ needed to recover short vectors. Parameters are chosen so this cost exceeds that of key search on AES-128 (Level 1) or AES-256 (Level 5).
Known Attack Strategies
Primal attack: Construct a lattice where the LWE secret/error is a short vector. Use BKZ to find it.
Dual attack: Construct a short vector in the dual lattice, use it to distinguish LWE from random.
Hybrid attack: Guess part of the secret (reduces dimension), use lattice reduction on the rest.
All known attacks are exponential in the lattice dimension. No quantum algorithm provides more than polynomial speedup for lattice problems.
Security Analysis and Parameter Selection
Security Estimation
The security of ML-KEM and ML-DSA parameters is estimated via:
- Core-SVP model: Estimate the BKZ block size $\beta$ needed
- Gate count: Translate to classical/quantum gate operations
- Comparison to AES: Match to corresponding NIST security levels
Current estimates (approximate classical/quantum bit security):
| Scheme | Classical Security | Quantum Security |
|---|---|---|
| ML-KEM-512 | 118 bits | ~107 bits |
| ML-KEM-768 | 183 bits | ~164 bits |
| ML-KEM-1024 | 256 bits | ~230 bits |
| ML-DSA-44 | 128 bits | ~115 bits |
| ML-DSA-65 | 192 bits | ~172 bits |
| ML-DSA-87 | 256 bits | ~230 bits |
These estimates assume the best known algorithms. A breakthrough in lattice reduction could change the picture.
Conservative vs. Aggressive Parameters
Conservative approach (recommended):
- Use ML-KEM-768/1024 and ML-DSA-65/87 for systems requiring decades of security
- Accounts for potential algorithmic advances
- Larger keys/signatures, but manageable
Aggressive approach (performance-critical systems):
- ML-KEM-512 and ML-DSA-44 for current security needs
- May require algorithm agility for future updates
- Smaller overhead
Side-Channel Considerations
Lattice schemes have specific side-channel vulnerabilities:
- Timing attacks: NTT operations, rejection sampling timing can leak information
- Power analysis: Polynomial operations have distinguishable power traces
- Fault attacks: Corrupting intermediate values can reveal secrets
Mitigations:
- Constant-time implementations (no secret-dependent branches)
- Masking: split sensitive values into random shares
- Careful randomness generation
NIST standards include guidance on side-channel resistant implementation.
The Falcon Signature Scheme
While ML-DSA is NIST's primary lattice-based signature, Falcon offers an alternative with smaller signatures.
NTRU Lattices
Falcon builds on NTRU-type lattices:

$$\mathcal{L}_h = \left\{ (u, v) \in R^2 : u h \equiv v \pmod{q} \right\}$$

where $h = g \cdot f^{-1} \bmod q$ is a public polynomial and the pair $(f, g)$ is a secret key with short coefficients.
GPV Framework
Falcon implements the Gentry-Peikert-Vaikuntanathan (GPV) hash-and-sign paradigm:
- Hash the message to a target point $c$
- Use a "trapdoor" (the secret key) to sample a short preimage $(s_1, s_2)$ with $s_1 + s_2 h \equiv c \pmod{q}$
The trapdoor enables Gaussian sampling from the lattice solution space.
Fast Fourier Sampling
Falcon's efficiency comes from its sampling algorithm:
- Represent the lattice basis in FFT form
- Sample Gaussian values using tree-structured computation
- Convert back to get signature polynomials
This achieves $O(n \log n)$ signature generation with minimal signature size.
Comparison with ML-DSA
| Property | ML-DSA-65 | Falcon-512 |
|---|---|---|
| NIST Level | 3 | 1 |
| Public key (bytes) | 1,952 | 897 |
| Signature (bytes) | 3,293 | 666 |
| Sign speed | Very fast | Fast |
| Verify speed | Very fast | Fast |
| Implementation complexity | Lower | Higher |
Falcon's smaller signatures come at the cost of more complex implementation, particularly the Gaussian sampler which must avoid side channels.
Conclusion and Series Overview
In this second part, we've provided a comprehensive treatment of lattice-based cryptography:
- Lattice geometry: Bases, successive minima, Gram-Schmidt, and the structure of lattice problems
- Computational problems: SVP, CVP, SIVP, and their approximation variants
- Learning With Errors: The algebraic foundation with worst-case security reductions
- Structured lattices: Ring-LWE and Module-LWE for practical efficiency
- NIST standards: Deep dives into ML-KEM (Kyber) and ML-DSA (Dilithium) constructions
- Security analysis: Lattice reduction, parameter selection, and attack cost estimation
Coming in Part 3: Alternative Mathematical Frameworks
The final part of this series explores non-lattice approaches to post-quantum cryptography:
- Hash-based signatures (SLH-DSA/SPHINCS+): Security from Merkle trees and hash functions only
- Code-based cryptography (Classic McEliece): Goppa codes and syndrome decoding
- Isogeny-based cryptography: Supersingular elliptic curves and the SIKE mathematical attack
- Multivariate cryptography: System of polynomial equations and the MQ problem
Lattice-based cryptography achieves the gold standard of cryptographic security proofs: breaking practical schemes implies solving problems that mathematicians have studied for decades with no efficient algorithms found. The Module-LWE foundation of ML-KEM and ML-DSA combines theoretical soundness with practical efficiency, which is why these are NIST's primary standardized algorithms.
References and Further Reading
Foundational Papers
- Regev, O. (2005). "On Lattices, Learning with Errors, Random Linear Codes, and Cryptography" — The LWE paper
- Ajtai, M. (1996). "Generating Hard Instances of Lattice Problems" — First worst-case to average-case reduction
- Lyubashevsky, V., Peikert, C., Regev, O. (2010). "On Ideal Lattices and Learning with Errors over Rings" — Ring-LWE
- Peikert, C. (2009). "Public-Key Cryptosystems from the Worst-Case Shortest Vector Problem" — Classical LWE reduction
NIST Standards and Reports
- FIPS 203 (2024). "Module-Lattice-Based Key-Encapsulation Mechanism Standard" (ML-KEM)
- FIPS 204 (2024). "Module-Lattice-Based Digital Signature Standard" (ML-DSA)
- NIST IR 8413 (2022). "Status Report on the Third Round of the NIST Post-Quantum Cryptography Standardization Process"
- NIST IR 8545 (2024). "Status Report on the Fourth Round"
Surveys and Textbooks
- Micciancio, D. & Regev, O. (2008). "Lattice-based Cryptography" — Comprehensive survey
- Peikert, C. (2016). "A Decade of Lattice Cryptography" — Modern developments
- Galbraith, S.D. (2012). "Mathematics of Public Key Cryptography" — Textbook with lattice chapters
Implementation Resources
- CRYSTALS-Kyber specification: pq-crystals.org/kyber
- CRYSTALS-Dilithium specification: pq-crystals.org/dilithium
- Falcon specification: falcon-sign.info
Appendix: Mathematical Reference
Key Inequalities
Minkowski's bound on the shortest vector:

$$\lambda_1(\mathcal{L}) \le \sqrt{n} \cdot \det(\mathcal{L})^{1/n}$$

Hermite's constant (asymptotic):

$$\gamma_n \approx \frac{n}{2\pi e} \quad \text{for large } n \text{ (up to a small constant factor)}$$

LWE noise bound for correctness (simplified): decryption succeeds when the accumulated error stays below a quarter of the modulus:

$$\left| \mathbf{e}^T \mathbf{r} + e_2 - \mathbf{s}^T \mathbf{e}_1 \right| < \frac{q}{4}$$
Polynomial Ring Arithmetic
For $R_q = \mathbb{Z}_q[X]/(X^n + 1)$:
- Addition: Coefficient-wise mod $q$
- Multiplication: Polynomial product mod $X^n + 1$ and mod $q$
- Reduction by $X^n + 1$: Since $X^n \equiv -1$, terms $a\,X^{n+i}$ become $-a\,X^i$
Discrete Gaussian Distribution
The discrete Gaussian over $\mathbb{Z}$ with width $\sigma$:

$$D_{\mathbb{Z}, \sigma}(x) \propto \exp\!\left( -\frac{x^2}{2\sigma^2} \right)$$

Samples concentrate near zero with standard deviation approximately $\sigma$.
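A minimal rejection sampler for this distribution (illustrative only; deployed implementations use constant-time methods such as cumulative-distribution tables, since plain rejection sampling leaks timing):

```python
import math, random

def sample_discrete_gaussian(sigma, tail=12):
    """Sample the centered discrete Gaussian on Z by rejection from a uniform window."""
    bound = math.ceil(tail * sigma)          # truncate the negligible far tails
    while True:
        x = random.randint(-bound, bound)
        # Accept x with probability proportional to the Gaussian density at x.
        if random.random() < math.exp(-x * x / (2 * sigma * sigma)):
            return x

random.seed(3)
samples = [sample_discrete_gaussian(3.2) for _ in range(20_000)]
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples)
print(round(mean, 2), round(var, 1))   # mean near 0, variance near sigma^2 = 10.24
```

The acceptance rate is roughly $\sigma\sqrt{2\pi} / (2 \cdot \mathrm{bound})$, which is why practical samplers replace the rejection loop with table lookups.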
Complexity Summary
| Operation | Complexity | Used In |
|---|---|---|
| Matrix-vector multiply (standard) | $O(n^2)$ | Plain LWE |
| NTT-based polynomial multiply | $O(n \log n)$ | Ring/Module-LWE |
| LLL reduction | $\mathrm{poly}(n)$ | Attack foundation |
| BKZ with block size $\beta$ | $2^{O(\beta)}$ | Security estimation |