Advanced Security Concepts in SNARGs Using iO and Lossy Functions
Explore recent research on adaptive and selective soundness for Succinct Non-interactive ARGuments (SNARGs), with theorems built from subexponentially secure indistinguishability obfuscation, subexponentially secure one-way functions, and very lossy functions. The presentation covers adaptively sound SNARGs for NP under various assumptions, the limits of complexity leveraging via guessing, and how to avoid the security-parameter growth that would destroy succinctness.
Presentation Transcript
Adaptive Security in SNARGs via iO and Lossy Functions
Brent Waters (NTT Research, UT Austin) and Mark Zhandry (NTT Research)
What Are SNARGs? (Succinct Non-interactive Arguments)
A SNARG for an NP language consists of a common reference string (CRS), a prover that uses a witness w to produce a proof π for a statement x, and a verifier. An ordinary non-interactive argument has no succinctness requirement. Succinctness: the proof π (and ideally the verification time) must be much shorter than the statement and witness.
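For reference, the requirements at play here can be stated as follows (the exact succinctness bound varies across papers; this version is illustrative):

```latex
% SNARG syntax for an NP relation R:
%   crs <- Setup(1^\lambda),  \pi <- Prove(crs, x, w),  0/1 <- Verify(crs, x, \pi)
\textbf{Succinctness:}\quad |\pi| \le \mathrm{poly}(\lambda)\cdot(|x|+|w|)^{o(1)}.\\
\textbf{Selective soundness:}\quad \text{the adversary names the false statement } x^{*} \text{ before seeing } \mathsf{crs}.\\
\textbf{Adaptive soundness:}\quad \text{the adversary may choose } x^{*} \text{ after seeing } \mathsf{crs}.
```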
Theorem [Waters-Wu 24]: There exists an adaptively sound SNARG for NP, assuming all of the following:
- Subexponentially secure indistinguishability obfuscation (iO)
- Subexponentially secure one-way functions (OWFs)
- Polynomially secure perfectly re-randomizable OWFs
Perfectly re-randomizable OWFs are known from discrete logs, factoring, or perfect group actions, but are not known from LWE.
Theorem [THIS WORK]: There exists an adaptively sound SNARG for NP, assuming all of the following:
- Subexponentially secure indistinguishability obfuscation
- Subexponentially secure one-way functions
- Polynomially secure very lossy functions
Theorem [THIS WORK]: LWE implies very lossy functions.
Existing LWE-based lossy functions are not very lossy (e.g. [Peikert-Waters 08, Alwen-Krenn-Pietrzak-Wichs 13, Döttling-Garg-Ishai-Malavolta-Mour-Ostrovsky 19, Hofheinz-Hostáková-Kastner-Klein-Ünal 24]).
Subsequent Work
Theorem [Waters-Wu 24b]: There exists an adaptively sound SNARG for NP, assuming all of the following:
- Subexponentially secure indistinguishability obfuscation
- Subexponentially secure one-way functions (OWFs)
- Polynomially secure perfectly re-randomizable OWFs
On Complexity Leveraging
Naive reduction from selective to adaptive soundness: guess the false statement x* in advance; the reduction wins only if the guess is correct, losing a factor 2^n for statements of length n. Assume sub-exponential selective soundness and set the security parameter λ >> n, so that even exponentially small success probabilities against selective soundness are ruled out. Problem: the parameters grow with n, so the proofs are not succinct!
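Spelled out, the leveraging calculation looks as follows (ε denotes the sub-exponential security exponent; the symbols are ours, for illustration):

```latex
% Selective -> adaptive by guessing x^* among the 2^n statements of length n:
\mathrm{Adv}_{\mathrm{adaptive}} \;\le\; 2^{n}\cdot \mathrm{Adv}_{\mathrm{selective}}
  \;\le\; 2^{n}\cdot 2^{-\lambda^{\varepsilon}}.
% Negligible only if \lambda^{\varepsilon} \gg n, i.e. \lambda \gtrsim n^{1/\varepsilon},
% so |\pi| = \mathrm{poly}(\lambda) grows with n: not succinct.
```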
On Complexity Leveraging
Complexity leveraging is OK for SNARGs, but with a caveat: any security parameter that appears in the proof π can only absorb losses independent of the statement length n (though still potentially exponential losses). However, we can have separate security parameters that affect only the CRS, and those can absorb losses that depend on n.
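Concretely, one can split the parameters (the names λ_crs and λ_π are ours):

```latex
|\mathsf{crs}| = \mathrm{poly}(\lambda_{\mathrm{crs}}, n)
  \quad\text{with } \lambda_{\mathrm{crs}} \approx n^{1/\varepsilon}
  \text{ absorbing the } 2^{n} \text{ guessing loss},\qquad
|\pi| = \mathrm{poly}(\lambda_{\pi})
  \quad\text{with } \lambda_{\pi} \text{ independent of } n.
```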
Waters-Wu First Step: Many OWF Instances
One pseudorandom instance of the OWF per statement: the CRS contains obfuscated programs ("obfuscation magic") that bind each statement x to its own OWF instance, and an accepting proof for a false statement would require inverting that instance. Complexity leveraging based on the number of statements (2^n) is absorbed entirely by the obfuscated programs, so the CRS is large. But only the OWF challenges appear in the proof π, and the OWF has not been used yet in the security argument, so its parameters, and hence π, stay small.
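A toy schematic of the "one pseudorandom OWF instance per statement" idea, meant only to illustrate the structure. This is not the actual Waters-Wu construction: SHA-256 stands in for both the PRF and the one-way function f, and in the real scheme the two programs below are published as indistinguishability obfuscations with the PRF key hard-wired.

```python
# Toy schematic only -- NOT the Waters-Wu construction.
# Assumptions: SHA-256 models both a PRF and a one-way function f;
# in the real scheme prove_program / verify_program are iO-obfuscated
# and the PRF is puncturable.
import hashlib
import os

K = os.urandom(32)  # PRF key, hard-wired inside the (obfuscated) programs

def prf(k: bytes, x: bytes) -> bytes:
    return hashlib.sha256(k + x).digest()

def f(z: bytes) -> bytes:
    """Stand-in one-way function."""
    return hashlib.sha256(b"OWF" + z).digest()

def prove_program(x, w, R):
    """Releases the per-statement OWF preimage s_x = PRF(K, x),
    but only if w is a valid witness for x under the relation R."""
    if R(x, w):
        return prf(K, x)
    raise ValueError("invalid witness")

def verify_program(x, pi) -> bool:
    """Accepts iff pi maps to the per-statement instance y_x = f(PRF(K, x))."""
    return f(pi) == f(prf(K, x))

# Usage with a toy relation: "w is a SHA-256 preimage of x".
R = lambda x, w: hashlib.sha256(w).digest() == x
x = hashlib.sha256(b"some witness").digest()
pi = prove_program(x, b"some witness", R)
assert verify_program(x, pi)
```

Intuitively, a cheating prover who only sees the obfuscated programs would have to invert f on the instance attached to a statement with no witness, which is where the OWF security comes in.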
Very Lossy Functions
A strengthening of lossy functions [Peikert-Waters 08]: there is an injective mode and a lossy mode, and the two are computationally indistinguishable. Very lossy: in lossy mode the image size is bounded by a function of the security parameter alone, independent of the domain size.
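In definitional form (the bound L(λ) is whatever the application needs; the point is that it does not depend on the domain):

```latex
ek \leftarrow \mathsf{Gen}_{\mathrm{inj}}(1^{\lambda}):\quad f_{ek} \text{ is injective on the domain } D,\\
ek \leftarrow \mathsf{Gen}_{\mathrm{lossy}}(1^{\lambda}):\quad |\mathrm{Im}(f_{ek})| \le L(\lambda),
  \text{ with } L(\lambda) \text{ independent of } |D|,\\
\text{and the two modes are computationally indistinguishable.}
```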
Our Idea: Use Lossiness to Complete the Proof
Instead of one pseudorandom OWF instance per statement, apply a very lossy function first, so that many statements map to the same OWF instance; lossiness plus iO makes this switch undetectable. We still have not used the OWF challenges in the security proof, so π is still small. Now the reduction guesses the OWF instance rather than the statement, so the reduction loss is the number of instances, i.e. the lossy image size, and we can set the OWF's security parameter to absorb it. Succinctness follows as long as that image size is at most a small exponential but independent of n, which is exactly what very lossiness guarantees.
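The counting behind guessing the instance rather than the statement (L is the lossy image size, ε the OWF's sub-exponential security exponent; the symbols are ours):

```latex
\mathrm{loss} \;=\; L \;\;(\text{not } 2^{n}),\qquad
\mathrm{Adv} \;\le\; L \cdot 2^{-\lambda_{\mathrm{owf}}^{\varepsilon}},
% which is negligible once \lambda_{\mathrm{owf}}^{\varepsilon} \gg \log L.
% Since L is independent of n (very lossiness), \lambda_{\mathrm{owf}},
% and hence |\pi|, stays independent of n: succinctness is preserved.
```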
Constructing Lossy Functions from LWE [Alwen-Krenn-Pietrzak-Wichs 13]
The function is a rounded matrix-vector product, f_A(s) = ⌊A·s⌉_p, evaluated on a short vector s.
Injective mode: A is a uniformly random (full-rank) matrix, so the rounded output determines s.
Lossy mode: A is replaced by a low-rank product plus small noise, A = B·C + E, which is indistinguishable from random under LWE; then ⌊A·s⌉_p is (mostly) determined by C·s, which ranges over a much smaller set.
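A toy numpy sketch of this rounded-matrix-product structure (the parameters are illustrative and far too small to be secure, and the exact parameterization in [AKPW 13] differs):

```python
# Toy sketch of an LWR-style lossy function in the spirit of [AKPW 13].
import numpy as np

q, p = 2**24, 2**4          # big modulus q, small rounding modulus p
n, m, k = 64, 128, 4        # input dim, output dim, lossy rank k << n
rng = np.random.default_rng(0)

def round_p(v):
    """Round entries of Z_q to Z_p (scale by p/q, round to nearest)."""
    return np.rint(v.astype(float) * p / q).astype(int) % p

def keygen(lossy: bool):
    if not lossy:
        # Injective mode: a uniformly random (w.h.p. full-rank) matrix A.
        return rng.integers(0, q, size=(m, n))
    # Lossy mode: low-rank B*C plus small noise E (indistinguishable under LWE).
    B = rng.integers(0, q, size=(m, k))
    C = rng.integers(0, q, size=(k, n))
    E = rng.integers(-1, 2, size=(m, n))
    return (B @ C + E) % q

def evaluate(A, s):
    """f_A(s) = round_p(A s mod q) for a short (here: binary) input s."""
    return round_p((A @ s) % q)

s = rng.integers(0, 2, size=n)
y = evaluate(keygen(lossy=True), s)
# In lossy mode A s = B (C s) + E s with E s tiny, so away from rounding
# boundaries the output is determined by C s, which takes at most q**k values.
```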
Problem: Rounding Boundaries
The claim that the output depends only on C·s is only true far from the rounding boundaries. Near a rounding boundary, the small noise term can tip the rounding either way, so the output may statistically reveal information about s. The function is still lossy, but not very lossy.
Our Solution: Stay Away From Rounding Boundaries
Rather than rounding A·s directly, output the rounding of a slightly shifted value, together with the shift index i, choosing the smallest i such that the shifted value is far from every rounding boundary. With high probability such a small i exists, and including i in the output only blows up the image size by a polynomial factor, so the function becomes very lossy.
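One plausible way to realize this on top of the sketch above; the shift rule (adding i copies of a fixed public offset) and the margin are our illustrative choices, and the paper's actual transformation may differ.

```python
# Continuation of the sketch above (reuses q, p, m, keygen, round_p, s).
step = q // p                    # width of one rounding cell
threshold = step // (8 * m)      # "far from boundary" margin: a fraction <= 1/(4m)
                                 # of shifts is bad per coordinate, so by a union
                                 # bound most shifts i are good for all m coordinates
delta = np.ones(m, dtype=int)    # fixed public offset direction (illustrative)

def dist_to_boundary(v):
    """Distance of each Z_q entry to the nearest rounding boundary
    (boundaries sit at odd multiples of step/2)."""
    r = v % step
    return np.abs(r - step // 2)

def evaluate_very_lossy(A, s, max_shift=1000):
    t = (A @ s) % q
    for i in range(max_shift):   # w.h.p. a small i already works
        shifted = (t + i * delta) % q
        if dist_to_boundary(shifted).min() >= threshold:
            # Including i multiplies the image size by at most max_shift,
            # a polynomial factor, while the rounded value now depends
            # only on C s (the tiny noise E s can no longer flip the rounding).
            return i, round_p(shifted)
    raise RuntimeError("no good shift found (should be rare)")

i, y = evaluate_very_lossy(keygen(lossy=True), s)
```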