Understanding Provable Security Models in Cryptography

Cryptography and cryptology involve secure communication techniques that protect data from third-party adversaries. This article introduces provable security models and cryptographic goals such as confidentiality and authenticity, and contrasts security by trial and error with the provable security method. It covers defining security, the syntax of primitives, adversaries, and security conditions for secure encryption and signature schemes.




Download Presentation

Please find below an Image/Link to download the presentation.

The content on the website is provided AS IS for your information and personal use only. It may not be sold, licensed, or shared on other websites without obtaining consent from the author. Download presentation by click this link. If you encounter any issues during the download, it is possible that the publisher has removed the file from their server.

E N D

Presentation Transcript


  1. INTRODUCTION TO PROVABLE SECURITY Models, Adversaries, Reductions

  2. CRYPTOGRAPHY / CRYPTOLOGY From Greek kryptós, "hidden, secret", and graphein, "writing", or -logia, "study", respectively: the practice and study of techniques for secure communication in the presence of third parties (called adversaries). Source: www.wikipedia.org

  3. SOME CRYPTOGRAPHIC GOALS
  - Confidentiality: the content of the conversation remains hidden
  - Authenticity: the message was really sent by the claimed sender
  - Integrity: the message has not been modified
  - Privacy: sensitive (user) data remains hidden
  - Covertness: the fact that a conversation is taking place is hidden

  4. SECURITY BY TRIAL-AND-ERROR Identify the goal (e.g. confidentiality in P2P networks), then design a solution. The strategy:
  - Propose a protocol
  - Search for an attack
  - If an attack is found, fix it (go to the first step)
  - After many iterations or some time, halt
  - Output: the resulting scheme
  Problems: what is "many iterations" / "some time"? Some schemes take a long time to break: MD5, RC4.

  5. PROVABLE SECURITY Identify the goal, then define security:
  - Syntax of the primitive: e.g. algorithms (KGen, Sign, Vf)
  - Adversary (e.g. can get signatures for arbitrary messages)
  - Security conditions (e.g. the adversary can't sign a fresh message)
  Propose a scheme (instantiate the syntax). Define/choose security assumptions: properties of primitives / number-theoretical problems. Prove security, a 2-step process:
  - Assume we can break the security of the scheme (adversary A)
  - Then build a reduction (adversary R) breaking the assumption

  6. PART II THE PROVABLE SECURITY METHOD

  7. THE ESSENCE OF PROVABLE SECURITY Core question: what does "secure" mean? Secure encryption vs. a secure signature scheme. Say a scheme is secure against all known attacks: will it be secure against a yet unknown attack? Step 1: Define your primitive (syntax). Signature scheme: algorithms (KGen, Sign, Vf)
  - KGen(1^λ) outputs (sk, pk)
  - Sign(sk, m) outputs σ (probabilistic)
  - Vf(pk, m, σ) outputs 0 or 1 (deterministic)
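As a concrete (and deliberately insecure) illustration of this syntax, the sketch below instantiates (KGen, Sign, Vf) with textbook RSA over tiny fixed primes. The parameters and message encoding are ours, not part of the slides, and Sign here happens to be deterministic rather than probabilistic:

```python
# Toy instantiation of the (KGen, Sign, Vf) syntax: textbook RSA with
# fixed small primes. Illustrative only -- NOT a secure scheme.
def KGen(security_parameter=None):
    p, q = 61, 53                        # a real KGen samples primes from 1^lambda
    n = p * q
    e = 17
    d = pow(e, -1, (p - 1) * (q - 1))    # modular inverse (Python 3.8+)
    return (n, d), (n, e)                # (sk, pk)

def Sign(sk, m):
    n, d = sk
    return pow(m % n, d, n)              # sigma = m^d mod n

def Vf(pk, m, sigma):
    n, e = pk
    return 1 if pow(sigma, e, n) == m % n else 0

sk, pk = KGen()
sigma = Sign(sk, 42)
assert Vf(pk, 42, sigma) == 1            # correctness: valid signatures verify
assert Vf(pk, 43, sigma) == 0            # a signature does not verify for another message
```

Textbook RSA is trivially forgeable (e.g. σ = 1 signs m = 1), which is exactly why the security conditions of the next step are needed on top of the syntax.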

  8. THE ESSENCE OF PROVABLE SECURITY Step 2: Define your adversary. Adversaries A can:
  - know public information: λ, pk
  - get no message/signature pair, or
  - get a list of message/signature pairs
  - submit arbitrary messages to sign
  Step 3: Define the security condition. Adversary A can output a fresh pair (m, σ) which verifies, with non-negligible probability (as a function of λ).

  9. THE ESSENCE OF PROVABLE SECURITY Step 4: Propose a protocol. Instantiate the syntax given in Step 1, e.g. give specific algorithms for KGen, Sign, Vf. Step 5: Choose security assumptions. For each primitive in the protocol, choose assumptions:
  - Security assumptions (e.g. IND-CCA encryption)
  - Number-theoretical assumptions (e.g. DDH, RSA)

  10. THE ESSENCE OF PROVABLE SECURITY Step 6: Prove security. For each property you defined in steps 1-3: assume there exists an adversary A breaking that security property with some probability ε; construct a reduction R breaking the underlying assumption with probability f(ε).

  11. HOW REDUCTIONS WORK Security assumptions are the baseline. Reasoning: if our protocol/primitive is insecure, then the assumption is broken; but the assumption holds (by definition); conclusion: the protocol cannot be insecure. Caveat: say an assumption is broken (e.g. DDH turns out to be easy to solve). What does that say about our protocol? We don't know!

  12. PART III ASSUMPTIONS

  13. WE NEED COMPUTATIONAL ASSUMPTIONS Take our signature scheme (KGen, Sign, Vf). [Diagram: KGen(1^λ) outputs (sk, pk); Sign(sk, m) outputs σ; Vf(pk, m, σ) outputs 0/1.] Correctness: if parameters are well generated, well-signed signatures always verify.

  14. WE NEED COMPUTATIONAL ASSUMPTIONS Take our signature scheme (KGen, Sign, Vf). Unforgeability: no adversary can produce a signature for a fresh message m*. But any A can guess sk with probability 1/2^|sk|.

  15. WE NEED COMPUTATIONAL ASSUMPTIONS Take our signature scheme (KGen, Sign, Vf). Unforgeability: no adversary can produce a signature for a fresh message m*. And any A can guess a valid σ with probability 1/2^|σ|.

  16. SOME COMPUTATIONAL ASSUMPTIONS Of the type: it is hard to compute y starting from x. How hard? Usually there is no proof that the assumption holds; hardness is mostly measured with respect to the best known attack, sometimes average-case, sometimes worst-case. Relations to other assumptions:
  - A1 ⇒ A2 (A1 is the stronger assumption): breaking A2 implies breaking A1
  - A1 ⇐ A2 (A1 is the weaker assumption): breaking A1 implies breaking A2
  - A1 ⇔ A2 (equivalent): both conditions hold

  17. PART IV SECURITY MODELS

  18. IDEAL PROVABLE SECURITY Given a protocol π and assumptions A1, …, An, we would ideally prove, under A1, …, An: Ideal world ≈ Real world using π. But the real world is hard to describe mathematically.

  19. PROVABLE SECURITY Two-step process. First step: approximate the real world using π by a modelled world using π.

  20. PROVABLE SECURITY Second step: prove, under A1, …, An: Modelled world ≈ Ideal world.

  21. COMPONENTS OF SECURITY MODELS
  - Adversarial a-priori knowledge & computation: who is my adversary (outsider, malicious party, etc.)? What does my adversary learn?
  - Adversarial interactions (party-party, adversary-party, sometimes adversary-adversary): what can my adversary learn, and how can my adversary attack?
  - Adversarial goal (forge a signature, find a key, distinguish Alice from Bob): what does my adversary want to achieve?

  22. GAME-BASED SECURITY Participants: adversary A plays a game against a challenger C. Adversary = attacker(s), has all public information. Challenger = all honest parties, has public information and secret information. Attack:
  - Oracles: A makes oracle queries to C to learn information
  - Test: a special query by A to C, to which C responds, sometimes followed by more oracle queries
  - Win/Lose: a bit output at the end of the game

  23. MEASURING ADVERSARIAL SUCCESS Winning a game; the winning condition depends on a relation R on (x, <game>), with <game> = the full game input (of the honest parties and A). Finally, A outputs x and wins iff (x, <game>) ∈ R. Success probability: what is the probability that A wins the game, and what is this probability measured over (e.g. the randomness in <game>, sometimes the probability space for keys, etc.)? Advantage of the adversary: how much better is A than a trivial adversary?

  24. ADVERSARIAL ADVANTAGE Forgery-type games: A has to output a long string; best trivial attacks: guess the string or guess the key. Advantage: Adv(A) = Prob[A wins the game]. Distinguishability-type games: A must distinguish between 2 things: left/right, real/random; best trivial attack: guess the bit (probability 1/2). Advantage (different ways of writing it): Adv(A) = Prob[A wins the game] − 1/2, or Adv(A) = 2 · |Prob[A wins the game] − 1/2|.

  25. SECURITY MODELS CONCLUSIONS Requirements:
  - Realistic models: capture reality well, making proofs meaningful
  - Precise definitions: allow quantification/classification of attacks, performance comparisons for schemes, generic protocol-construction statements
  - Exact models: require subtlety and finesse in the definitions, in order to formalize slight relaxations of standard definitions
  Provable security is an art, balancing strong security requirements and security from minimal assumptions.

  26. EXAMPLE: PSEUDORANDOMNESS Perfect confidentiality exists: it is given by the One-Time Pad, c = m ⊕ k. The XOR operation hides the plaintext m entirely. Disadvantages: we need long keys (as long as the plaintext) and have to generate them at every encryption. Generating long randomness: use a pseudorandom generator!
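The One-Time Pad is short enough to sketch directly. This toy version XORs byte strings and assumes a fresh, truly random key of the same length as the message (the function names are ours):

```python
import os

def otp_encrypt(m: bytes, k: bytes) -> bytes:
    assert len(k) == len(m)            # the key must be as long as the plaintext
    return bytes(a ^ b for a, b in zip(m, k))

otp_decrypt = otp_encrypt              # XOR is its own inverse

m = b"attack at dawn"
k = os.urandom(len(m))                 # fresh truly random key for every message
c = otp_encrypt(m, k)
assert otp_decrypt(c, k) == m
```

The two listed disadvantages are visible here: the key is as long as the message, and reusing k for a second message would leak m1 ⊕ m2.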

  27. PRGS Principle: start from a small, truly random bitstring s and generate large pseudorandom strings: PRG: {0,1}^n → {0,1}^m, for m > n. Security (intuitive): the adversary gets to see many output strings. In practice: PRGs are used for randomness in encryption, signature schemes, and key exchange. Adversary's goals (examples): predict the next/former random number; cancel out the randomness.

  28. SECURE PRGS Ideally, PRG output should look random. Formally: allow A to see either truly random strings or PRG output; the adversary wins if it distinguishes them. Security game:
  - The challenger picks the seed s of the generator (A does not get it)
  - The challenger chooses a secret bit b
  - A can request random values: if b = 1 the challenger returns r ← ${0,1}^m; if b = 0 it returns r ← PRG(s)
  - A must output a guess bit d
  Winning condition: A wins iff d = b.

  29. THE SECURITY DEFINITION [Game: the challenger samples s ← ${0,1}^n and b ← ${0,1}; on each query to Gen_b, if b = 1 it returns r ← ${0,1}^m, else it returns r ← PRG(s); A outputs its guess d and wins iff d = b.] The success probability is at least 1/2. Why? (q, ε)-secure PRG: a pseudorandom generator PRG is (q, ε)-secure if, and only if, any adversary making at most q queries to Gen_b wins with probability at most 1/2 + ε.
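To make the game concrete, here is a minimal simulation with a deliberately weak PRG (it just repeats its seed) and a distinguisher that exploits the repetition. All names and parameters are illustrative, and this variant draws a fresh seed per query rather than fixing one seed:

```python
import os, random

N, M = 16, 32                          # seed length n, output length m (bytes; toy sizes)

def bad_prg(seed: bytes) -> bytes:
    return seed + seed                 # weak PRG: the two output halves are always equal

class Challenger:
    def __init__(self):
        self.b = random.randrange(2)   # secret bit b
    def gen(self) -> bytes:            # Gen_b oracle (fresh seed per query here)
        return os.urandom(M) if self.b == 1 else bad_prg(os.urandom(N))

def distinguisher(gen) -> int:
    x = gen()                          # a single query suffices
    return 0 if x[:M // 2] == x[M // 2:] else 1   # equal halves => guess "PRG"

wins = sum(1 for _ in range(200)
           for c in [Challenger()] if distinguisher(c.gen) == c.b)
assert wins / 200 > 0.95               # wins almost always: advantage close to 1/2
```

A truly random 2m-bit string has equal halves only with probability 2^-m, so this adversary wins with probability essentially 1, i.e. bad_prg is not (1, ε)-secure for any small ε.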

  30. PART V PROOFS OF SECURITY

  31. PROOFS BY REDUCTION Say we have a primitive P and we make assumptions A1, A2. Goal: prove that if A1 and A2 hold, then P is secure. Statement: if there exists an adversary A against P, then there exist adversaries R1, R2 against assumptions A1, A2, such that: Adv(A) ≤ f(Adv(R1), Adv(R2)). Idea: if Adv(A) is significant, then so is at least one of Adv(R1), Adv(R2), breaking at least one assumption.

  32. REDUCING SECURITY TO A HARD PROBLEM The designed primitive has some game-based definition: A gets to query a challenger C; C gets to set up the system; there is a test phase; A will eventually answer the test and win/lose. The hard problem is of the form: given Input, find Output. Strategy: use A to construct a solver R for the hard problem:
  - R gets Input
  - R uses Input to run A on some instance of A's game
  - Finally, R receives A's answer to its test
  - R processes A's response into some Output

  33. REDUCTIONS [Diagram: the reduction R sits between the hard problem and A. R receives Input; it sets up A's game and sends the setup info; it processes A's queries into responses; when A requests the test, R embeds the problem into the test; it turns A's final response into Output.]

  34. CONSTRUCTING A REDUCTION A acts as a black-box algorithm (we don't know how it works in order to win its game). R can send A whatever it wants. However: we want to lower-bound A's winning probability when it is run by R; but A is only guaranteed to win if the game input is coherent, so R must simulate coherent input/output for A's queries. Also, R must ultimately solve a hard problem: to produce the correct Output, A's test response must give R the correct Output with very high probability.

  35. REDUCTIONS [Diagram: as in the previous slide, but now each response R gives to A's queries is related to R's Input, and the problem is embedded into A's test.]

  36. REDUCTION TO SECURITY OF A COMPONENT The designed primitive has some game-based definition: A gets to query a challenger C; C gets to set up the system; there is a test phase; A will eventually answer the test and win/lose. The component also has a game-based definition. Strategy: use A to construct an adversary R against the component:
  - R gets setup info and can query its own challenger
  - R embeds its game in some instance of A's game
  - Finally, R receives A's answer to its test
  - R processes A's response into a test response of its own

  37. REDUCTIONS [Diagram: R plays its own game against a challenger while simulating A's game: R injects its setup into A's setup, forwards queries and responses between the two games, embeds its own challenge into A's test, and derives its final response from A's response.]

  38. EXAMPLE: BIGGER PRG Say we have a secure pseudorandom generator G_small: {0,1}^n → {0,1}^m. We want to construct a bigger PRG G_big: {0,1}^{2n} → {0,1}^{2m}. Instantiating G_big: Setup: choose s = s1 || s2 ← ${0,1}^{2n}. Evaluation: G_big(s) = G_small(s1) || G_small(s2). Claim: if G_small is secure, then so is G_big.
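A sketch of the construction, with SHA-256 standing in for a secure G_small (an assumption made for illustration; the slides fix no concrete G_small):

```python
import hashlib, os

N = 16                                   # half-seed length in bytes (toy parameter)

def g_small(seed: bytes) -> bytes:
    # stand-in PRG {0,1}^n -> {0,1}^m, here with m = 2n, built from SHA-256
    return hashlib.sha256(b"prg" + seed).digest()

def g_big(seed: bytes) -> bytes:
    # G_big(s1 || s2) = G_small(s1) || G_small(s2): twice the output length
    assert len(seed) == 2 * N
    return g_small(seed[:N]) + g_small(seed[N:])

out = g_big(os.urandom(2 * N))
assert len(out) == 2 * len(g_small(os.urandom(N)))
```

The two seed halves are independent, which is exactly what the reduction below exploits: each half of G_big's output is one honest G_small output on a fresh seed.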

  39. SECURITY OF OUR DESIGN Statement: for any (q, ε_big)-adversary A against the security of G_big, there exists a (2q, ε_small)-adversary R against the security of G_small such that: ε_big ≤ ε_small. Both adversaries play the same game: the challenger samples a secret bit b; on each query to Gen_b it returns r ← ${0,1}^m if b = 1, else r ← PRG(s) for a seed s ← ${0,1}^n; the adversary outputs a guess d and wins iff d = b.

  40. CONSTRUCTING THE REDUCTION [Diagram: A queries its Gen oracle; R answers by querying its own Gen oracle twice, obtaining y1 and y2, and returns y1 || y2 to A. When A outputs its guess bit d, R outputs the same d.]

  41. ANALYSIS OF THE REDUCTION Number of queries: for each query, A expects a 2m-bit response, whereas R only gets m bits per query from its challenger; thus R needs twice as many queries as A. Accuracy of R's simulation of A's game: in R's game, if b = 1, R gets truly random bits, so A sees 2m truly random bits; and if b = 0, A expects G_small(s1) || G_small(s2), and R's challenger indeed returns G_small outputs on fresh seeds, so A sees exactly the distribution it expects. The simulation is perfect: Pr[R wins] = Pr[A wins].

  42. EXERCISES Why does this proof fail if we have two secure PRGs G1, G2: {0,1}^n → {0,1}^m and we construct G: {0,1}^n → {0,1}^{2m} as follows: Setup: choose s ← ${0,1}^n. Evaluation: G(s) = G1(s) || G2(s). Will the proof work if G(s) = G1(s) ⊕ G2(s)?

  43. EXAMPLES: DLOG, CDH, DDH Background: a finite field F, e.g. Z*_p = {1, 2, …, p−1} for a prime p, with multiplication modulo p. An element g of prime order q | (p−1): g^q = 1 (mod p) AND g^i ≠ 1 (mod p) for 0 < i < q. Cyclic group G = <g> = {1, g, g^2, …, g^{q−1}}. DLog problem: pick x ← ${1, …, q}. Compute y = g^x (mod p). Given p, q, g, y, find x. Assumed hard.
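The sketch below builds a toy DLog instance in a prime-order subgroup and recovers x by brute force. The concrete numbers (p = 467, q = 233, g = 4, with q | p − 1 and g of order q) are ours and far too small to be hard:

```python
import random

# Toy DLog instance in a prime-order subgroup of Z*_p
p = 467                      # prime modulus
q = 233                      # prime with q | (p - 1), since 466 = 2 * 233
g = 4                        # element of order q modulo p

x = random.randrange(1, q)   # secret exponent
y = pow(g, x, p)             # public value y = g^x mod p

def dlog_bruteforce(g, y, p, q):
    """Try every exponent in turn: O(q) group operations."""
    acc = 1
    for i in range(q):
        if acc == y:
            return i
        acc = acc * g % p
    return None

assert dlog_bruteforce(g, y, p, q) == x
```

At cryptographic sizes (q around 2^256) this loop is hopeless, which is the point of the assumption; the next slide's algorithms only improve the exponent, not the exponential nature.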

  44. EXAMPLES: DLOG, CDH, DDH DLog problem: pick x ← ${1, …, q}. Compute y = g^x (mod p). Given p, q, g, y, find x. Assumed hard.

  45. EXAMPLES: DLOG, CDH, DDH DLog problem: pick x ← ${1, …, q}. Compute y = g^x (mod p). Given p, q, g, y, find x. Assumed hard. CDH problem: pick x, y ← ${1, …, q}. Compute X = g^x (mod p); Y = g^y (mod p). Given p, q, g, X, Y, find g^{xy}. Just to remind you: g^{xy} = X^y = Y^x, whereas X · Y = g^x · g^y = g^{x+y}. Solving DLog implies solving CDH; whether solving CDH implies solving DLog is not known in general.
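The exponent identities in the reminder can be checked numerically; the toy subgroup parameters (p = 467, q = 233, g = 4) are ours:

```python
import random

p, q, g = 467, 233, 4                # toy subgroup: q prime, q | p - 1, g of order q
x = random.randrange(1, q)
y = random.randrange(1, q)
X, Y = pow(g, x, p), pow(g, y, p)

# the CDH value g^(xy) is reachable from either secret exponent:
assert pow(X, y, p) == pow(Y, x, p) == pow(g, x * y, p)
# but multiplying the public values only yields g^(x+y), not g^(xy):
assert X * Y % p == pow(g, x + y, p)
```

This is why seeing X and Y does not obviously help compute g^{xy}: the group operation on public values only adds exponents.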

  46. EXAMPLES: DLOG, CDH, DDH DLog problem: pick x ← ${1, …, q}. Compute y = g^x (mod p). Given p, q, g, y, find x. CDH problem: pick x, y ← ${1, …, q}. Compute X = g^x (mod p); Y = g^y (mod p). Given p, q, g, X, Y, find g^{xy}. DDH problem: pick x, y, z ← ${1, …, q}. Compute X, Y as above. Given p, q, g, X, Y, distinguish g^{xy} from g^z.

  47. HOW TO SOLVE THE DLOG PROBLEM In finite fields mod p:
  - Brute force (guess x): O(q)
  - Baby-step giant-step: memory/computation tradeoff; O(√q)
  - Pohlig-Hellman: exploits small factors of q
  - Pollard's Rho (+PH): O(√p1) for the biggest factor p1 of q
  - Index Calculus (and NFS, Pollard Lambda, …): exp(O(ln(p)^{1/3} · ln(ln(p))^{2/3}))
  On elliptic curves:
  - Generic attacks: the best known are BSGS/Pollard's Rho
  - Some progress on index-calculus attacks recently
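A sketch of baby-step giant-step, one of the O(√q) algorithms listed above; the toy parameters (p = 467, q = 233, g = 4) are ours:

```python
import math, random

def bsgs(g, y, p, q):
    """Baby-step giant-step: find x in [0, q) with g^x = y (mod p).
    O(sqrt(q)) time and memory."""
    m = math.isqrt(q) + 1
    # baby steps: table of g^j for j in [0, m)
    table = {}
    acc = 1
    for j in range(m):
        table.setdefault(acc, j)
        acc = acc * g % p
    # giant steps: scan y * g^(-m*i), looking for a collision with the table
    factor = pow(g, -m, p)            # g^(-m) mod p (Python 3.8+)
    gamma = y
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * factor % p
    return None

p, q, g = 467, 233, 4                 # toy subgroup: q prime, q | p - 1, g of order q
x = random.randrange(1, q)
assert bsgs(g, pow(g, x, p), p, q) == x
```

The tradeoff is explicit: the table stores about √q group elements (memory) so that the scan also takes about √q steps (computation), versus O(q) steps and O(1) memory for brute force.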

  48. PARAMETER SIZE VS. SECURITY

  ANSSI:
  | Date  | Sym. | RSA modulus | DLog key | DLog group | EC GF(p) | Hash |
  | <2020 | 100  | 2048        | 200      | 2048       | 200      | 200  |
  | <2030 | 128  | 2048        | 200      | 2048       | 256      | 256  |
  | >2030 | 128  | 3072        | 200      | 3072       | 256      | 256  |

  BSI:
  | Date  | Sym. | RSA modulus | DLog key | DLog group | EC GF(p) | Hash     |
  | 2015  | 128  | 2048        | 224      | 2048       | 224      | SHA-224+ |
  | 2016  | 128  | 2048        | 256      | 2048       | 256      | SHA-256+ |
  | <2021 | 128  | 3072        | 256      | 3072       | 256      | SHA-256+ |
