Time-space Tradeoffs and Optimizations in BKW Algorithm


Time-space tradeoffs and optimizations play a crucial role in the BKW algorithm for learning parity with noise (LPN). This presentation takes a non-heuristic approach to these tradeoffs, covering the hardness of the LPN problem, the main idea of the BKW algorithm for recovering the coefficients of the secret, and the c-sum BKW approach with its time-memory trade-offs and possible optimizations. It analyzes the complexities and trade-offs between space, sample, and time, offering insights on improving efficiency and reducing complexities.


Uploaded on Oct 10, 2024



Presentation Transcript


  1. A Non-heuristic Approach to Time-space Tradeoffs and Optimizations for BKW (Hanlin Liu, Yu Yu)

  2. Learning Parity with Noise (LPN). Sample a matrix A <- Z_2^{m x n} and a secret s <- Z_2^n uniformly at random, and a noise vector e <- Ber_mu^m with noise rate Pr[e_i = 1] = mu; set b = A*s + e (mod 2). Search LPN: given (A, b = A*s + e), find s. Decisional LPN: distinguish (A, A*s + e) from (A, u), where u <- Z_2^m is uniform.
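To make the sampling process concrete, here is a minimal sketch of an LPN oracle in Python (the function name `lpn_samples` and the fractional encoding of the noise rate are illustrative choices, not from the talk):

```python
import secrets

def lpn_samples(s, m, mu_num, mu_den):
    """Draw m LPN samples (a_i, b_i) with b_i = <a_i, s> + e_i (mod 2).

    s              : the secret, a list of n bits (known only to the oracle)
    mu_num/mu_den  : the noise rate mu as a fraction, Pr[e_i = 1] = mu
    """
    n = len(s)
    out = []
    for _ in range(m):
        a = [secrets.randbelow(2) for _ in range(n)]        # a_i <- Z_2^n
        e = 1 if secrets.randbelow(mu_den) < mu_num else 0  # e_i <- Ber_mu
        b = (sum(x * y for x, y in zip(a, s)) + e) % 2
        out.append((a, b))
    return out
```

A decisional-LPN distinguisher would be handed either such pairs or pairs whose label is replaced by a uniformly random bit.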

  3. The hardness of LPN. Best known attacks:
- Constant-noise LPN (constant 0 < mu < 1/2): BKW, with time cost 2^{(1+o(1)) n / log n} and sample cost 2^{(1+o(1)) n / log n}; Lyubashevsky's tradeoff, with time cost 2^{(1+o(1)) n / loglog n} and sample cost n^{1+eps}.
- Low-noise LPN (mu = n^{-c} for constant 0 < c < 1): time cost 2^{O(mu n)} with sample cost O(n) (folklore).
- Extremely low-noise LPN (mu = log^2 n / n): time cost n^{O(log n)} with sample cost O(n) (folklore).
[BKW03] Blum, A., Kalai, A., & Wasserman, H. (2003). Noise-tolerant learning, the parity problem, and the statistical query model. Journal of the ACM, 50(4), 506-519.
[Lyu05] Lyubashevsky, V. (2005). The parity problem in the presence of noise, decoding random linear codes, and the subset sum problem. In RANDOM 2005, pages 378-389.

  4. (A single iteration of) the BKW reduction. Classify: sort the rows of (A, b) into buckets according to a chosen block of b coordinates. Merge: XOR the rows within each bucket so that the block becomes zero; each resulting row is again of the form (a, <a, s> + e'), but with roughly doubled noise (the bias delta = 1 - 2*mu drops to delta^2).
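The classify-and-merge step above can be sketched as follows (a toy implementation; bucketing by the last b coordinates and the name `bkw_reduce` are my own choices):

```python
def bkw_reduce(samples, b):
    """One BKW classify-and-merge step (toy sketch).

    Classify: bucket the samples (a, c) by their last b coordinates.
    Merge: XOR every sample with its bucket's first element, zeroing those
    b coordinates (which are then dropped); the label noise roughly doubles.
    """
    buckets = {}
    for a, c in samples:
        buckets.setdefault(tuple(a[-b:]), []).append((a, c))
    reduced = []
    for group in buckets.values():
        pivot_a, pivot_c = group[0]
        for a, c in group[1:]:
            new_a = [x ^ y for x, y in zip(a, pivot_a)]
            reduced.append((new_a[:-b], c ^ pivot_c))  # last b bits are now 0
    return reduced
```

With noiseless inputs the output labels still satisfy c = <a, s'> for the truncated secret s', which is what makes iterating the step sound.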

  5. BKW Algorithm. Main idea (for finding the 1st coefficient of s):
1. Find k = 2^{n/b} vectors with a_1 xor a_2 xor ... xor a_k = (1 0 ... 0); the XOR z_1 of the corresponding labels then satisfies Pr[z_1 = s_1] = 1/2 + delta^k / 2, where delta = 1 - 2*mu is the bias.
2. Repeat step 1 on the order of delta^{-2k} times and take a majority vote to determine s_1.
Complexities (for mu = 1/4): Space 2^b; Time/Sample 2^b * delta^{-2 * 2^{n/b}}, which is 2^{(1+eps) n / log n} when b = (1+eps) n / log n.
Questions:
1. Tradeoffs between space/sample & time: Esser et al. (Crypto 2018) gave a heuristic-based algorithm.
2. Time/sample optimization: remove the delta^{-2 * 2^{n/b}} repetition factor from step 2.
3. Reduce sample complexity.
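Step 2 is just a majority vote over many slightly-biased guesses; a minimal sketch (the function name `recover_bit` is hypothetical):

```python
from collections import Counter

def recover_bit(noisy_guesses):
    """Step 2 of BKW: each guess equals the target bit s_1 with probability
    1/2 + delta'/2 for some small residual bias delta', so a majority vote
    over roughly 1/delta'^2 independent guesses recovers s_1 with high
    probability (Chernoff bound)."""
    counts = Counter(noisy_guesses)
    return max(counts, key=counts.get)
```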

  6. The c-sum BKW: Time-Memory Trade-offs. Naive c-sum BKW (Esser et al., Crypto 2018): instead of XORing pairs, start with a row, pick c - 1 further rows, and XOR them all together so that the target block of coordinates is zeroed; concretely, binary-sort the samples, XOR two samples at a time, and loop the above steps to create enough samples. The original BKW can be seen as 2-sum BKW. The naive c-sum BKW can be optimized using Dinur et al.'s dissection technique and Grover's quantum search algorithm.
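A brute-force rendering of the c-sum step, purely for illustration (real implementations use sorting, dissection, or Grover search rather than enumerating all subsets, and the name `naive_c_sum` is my own):

```python
from itertools import combinations

def naive_c_sum(samples, c, b):
    """Brute-force c-sum step: find every size-c subset of samples whose
    XOR vanishes on the last b coordinates, and output the XORed sample
    with those b coordinates stripped.  The enumeration is exponential
    in c; the dissection and Grover optimizations exist to avoid it."""
    out = []
    for subset in combinations(samples, c):
        acc = [0] * len(subset[0][0])
        lab = 0
        for vec, bit in subset:
            acc = [x ^ y for x, y in zip(acc, vec)]
            lab ^= bit
        if all(x == 0 for x in acc[-b:]):
            out.append((acc[:-b], lab))
    return out
```

Setting c = 2 recovers the merge step of the original BKW.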

  7. A framework for optimizing BKW. Example with c = 3: mutually independent input lists v_{0,1}, ..., v_{0,9}, where each v_{0,j} contains 2-wise independent vectors. A c-sum+ step merges v_{0,1}, v_{0,2}, v_{0,3} into v_{1,1} (and likewise produces v_{1,2} and v_{1,3}); a final c-sum+ step produces v_{2,1}, on which a majority vote is taken. 2-wise independence suffices for a rigorous (non-heuristic) analysis and enables various sample-time/space tradeoffs. Taking the majority vote on the final v_{2,1}, instead of repeating the whole procedure, saves the large repetition factor from step 2 of the basic BKW.

  8. A framework for optimizing BKW (cont.). Example with c = 3 and mutually independent lists v_{0,1}, ..., v_{0,9}: each input list v_{0,j} and each merged list v_{1,i} has the same (exponential) size, and each v_{0,j} contains 2-wise independent vectors. Crucially, this 2-wise independence is preserved by every c-sum+ level: each v_{1,i}, and finally v_{2,1}, again contains 2-wise independent vectors, which is what the majority vote on v_{2,1} relies on.

  9. Result I: time-space tradeoff. In the classic setting, the framework yields the original BKW (space and time 2^{(1+eps) n / log n}), the naive c-sum+, the dissection c-sum+ (for c = 4, 7, 11, ...), and the tailored dissection c-sum+, with larger c trading lower space against higher time; in the quantum setting, the naive c-sum+ combined with Grover search. Same results as Esser et al. (Crypto 2018), but without using heuristics.

  10. Result II: time-sample optimization. Write N_1 = 2^{(1+eps) n / log n} for the space and N_2 = (1/delta^2)^{2^{n/b}} for the repetition factor, with noise rate mu = (1 - delta)/2. BKW (JACM 2003) uses space N_1 with time and sample cost N_1 * N_2; Devadas et al. (TCC 2017) trade between these costs differently; ours achieves space N_1 and time N_1 * N_2 with only N_1 samples, removing the N_2 factor from the sample cost. E.g., for mu = 1/4: N_1 = 2^{(1+eps) n / log n} and N_2 = 2^{2 n^{1/(1+eps)}}.

  11. Lyubashevsky's Sample/Time Tradeoff. Start from n^{1+eps} LPN samples (A, A*s + e). A sample amplifier repeatedly picks a random w-tuple of the initial samples, with w = O(log n), and sums (XORs) each tuple into one fresh sample, which is then fed to BKW. By the Leftover Hash Lemma, the amplified samples are statistically close to real LPN samples (at a higher noise rate).
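The amplifier can be sketched in a few lines (a toy version; the name `amplify` and the plain list representation are illustrative assumptions):

```python
import random

def amplify(samples, w, m_out, rng=random):
    """Sketch of a Lyubashevsky-style sample amplifier: each fresh sample
    is the XOR of a uniformly random w-subset of the initial pool.  With
    w = O(log n) the outputs are statistically close to genuine LPN
    samples (Leftover Hash Lemma), at the cost of the noise rate growing
    from mu to (1 - (1 - 2*mu)**w) / 2."""
    n = len(samples[0][0])
    out = []
    for _ in range(m_out):
        picked = rng.sample(samples, w)      # random w-tuple from the pool
        a, lab = [0] * n, 0
        for vec, bit in picked:
            a = [x ^ y for x, y in zip(a, vec)]
            lab ^= bit
        out.append((a, lab))
    return out
```

Since w is logarithmic, the noise of the amplified samples stays bounded away from 1/2 by an inverse-polynomial bias, which BKW can still exploit.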

  12. Our optimization. Split the n^{1+eps} initial samples into log n groups of n^{1+eps} / log n samples each, and set w = O(log n / loglog n). The sample amplifier picks out all w-tuples within each group and XORs each w-tuple together, producing mutually independent lists v_{0,1}, ..., v_{0,log n}, where each v_{0,j} contains 2-wise independent vectors. These lists are fed into our optimized 2-sum BKW: c-sum+ levels followed by a majority vote on the final v_{2,1}.

  13. Result III: sample/time tradeoff (both tables are for constant noise rate mu).
- With sample complexity 2^{eps n}: [Lyubashevsky05] needs time N_1 * N_2 * (N_1)^{log n}, while ours needs space N_1 and time N_1 * N_2, where N_1 = 2^{O(n / log n)} and N_2 = 2^{n^{1 - eps}}.
- With sample complexity n^{1+eps}: [Lyubashevsky05] needs time N_1 * N_2 * (N_1)^{loglog n}, while ours needs space N_1 and time N_1 * N_2, where N_1 = 2^{O(n / loglog n)} and N_2 = 2^{n^{1 - eps}}.

  14. Thanks for Listening!
