Challenges in Computer Systems Security

Security in computer systems means achieving some goal in the presence of an adversary. This is hard because the policy must be guaranteed under realistic, open-ended threat models. Things go wrong when the policy itself is flawed, when threat-model assumptions do not hold, and when the enforcement mechanism has bugs; perfect security is not achievable. Strategies for coping include thinking carefully about policy implications, making threat models explicit, and weighing security risk against benefit.



Presentation Transcript


  1. Intro: CS 230 Computer Systems Security, Marco Canini

  2. What is security? Achieving some goal in the presence of an adversary. High-level plan for thinking about security: policy, threat model, mechanism. Result: there is no way for an adversary within the threat model to violate the policy. Note that the goal says nothing about the mechanism.

  3. Why is security hard? Security is a negative goal: we need to guarantee the policy under the assumed threat model, and it is difficult to think of every possible way an attacker might break in. Realistic threat models are open-ended (almost negative statements themselves). Contrast this with a positive goal, which is easy to check, e.g., that Alice can actually read file F. The weakest link matters, and security is an iterative process: design, update the threat model as necessary, and so on.
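
To make the positive/negative contrast concrete, here is a minimal hypothetical C sketch (not from the slides): checking the positive goal is a mechanical call, while the negative goal ("nobody else can ever read F by any means") has no comparably simple test. The path is a placeholder, and running the check as the current process only approximates "Alice can read F".

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Positive goal: can this process read F?  (Placeholder path;
       run as Alice to approximate "Alice can read F".) */
    if (access("/home/alice/F", R_OK) == 0)
        printf("F is readable\n");
    else
        printf("F is not readable\n");
    return 0;
}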

  4. What's the point if we can't achieve perfect security? It still matters to understand what a system can do and what it cannot, and to manage security risk versus benefit. Better security often makes new functionality practical and safe.

  5. What goes wrong #1: problems with the policy. Examples: Sarah Palin's email account; Mat Honan's accounts at Amazon, Apple, Google, etc.; Twitter's @N account hijacking. How to solve? Think hard about the implications of policy statements. Some policy-checking tools can help, but they need a way to specify what is bad. This is difficult in distributed systems: you don't know what everyone else is doing.

  6. What goes wrong #2: problems with the threat model / assumptions. Examples: human factors not accounted for; computational assumptions that change over time; assuming your hardware is trustworthy; assuming all SSL certificate CAs are fully trusted; assuming machines disconnected from the Internet are secure (see Stuxnet: Anatomy of a Computer Virus); assuming good randomness for cryptography; subverting military OS security; subverting firewalls.
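
To make the randomness assumption concrete, here is a small hypothetical C sketch (not from the slides): key material generated with a time-seeded rand() can be regenerated by anyone who can guess the timestamp, whereas reading the kernel's CSPRNG (e.g., /dev/urandom on Linux) removes that particular assumption.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    unsigned char key[16];
    size_t n = 0;

    /* Weak: rand() seeded with the current time has very little entropy,
       so an attacker who can guess the timestamp can recompute the "key". */
    srand((unsigned)time(NULL));
    for (size_t i = 0; i < sizeof key; i++)
        key[i] = (unsigned char)(rand() & 0xff);

    /* Better (POSIX systems): take the bytes from the kernel's CSPRNG. */
    FILE *f = fopen("/dev/urandom", "rb");
    if (f != NULL) {
        n = fread(key, 1, sizeof key, f);
        fclose(f);
    }

    printf("read %zu CSPRNG bytes\n", n);
    return 0;
}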

  7. What goes wrong #2: problems with the threat model / assumptions. How to solve? Make threat models more explicit, to understand possible weaknesses. Prefer simpler, more general threat models. Better designs may eliminate or lessen reliance on certain assumptions, e.g., alternative trust models that don't have fully trusted CAs, or authentication mechanisms that aren't susceptible to phishing.

  8. What goes wrong #3: problems with the mechanism (bugs). Examples: Apple's iCloud password-guessing rate limits; missing access-control checks in Citigroup's credit card web site; Android's Java SecureRandom weakness leading to Bitcoin theft; bugs in sandboxes (NaCl, JavaScript, Java runtime); Moxie Marlinspike's SSL certificate name-checking bug; buffer overflows. Let's see a case study.
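
As an illustration of a missing access-control check, in the spirit of the Citigroup example, here is a small hypothetical C sketch (not from the slides; all names and data are invented): the handler trusts the account number taken from the request and never ties it to the logged-in user.

#include <stdio.h>

struct account { int owner_id; long balance; };

static struct account accounts[] = {
    { .owner_id = 1, .balance = 500 },
    { .owner_id = 2, .balance = 9000 },
};

/* Handler for "show the balance of account account_no", on behalf of
   the authenticated user user_id. */
long show_balance(int user_id, int account_no)
{
    struct account *a = &accounts[account_no];

    (void)user_id;  /* BUG: the caller's identity is received but never
                       checked against a->owner_id, so any logged-in user
                       can read any balance just by changing the number.
                       The missing check:
                           if (a->owner_id != user_id) return -1;       */
    return a->balance;
}

int main(void)
{
    /* User 1 reads user 2's balance simply by asking for account 1. */
    printf("balance = %ld\n", show_balance(1, 1));
    return 0;
}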

  9. Case study: buffer overflows. Consider a web server. Oftentimes the web server's code is itself responsible for security, e.g., checking which URLs can be accessed, checking SSL client certificates, etc. Thus, bugs in the server's code can lead to security compromises.

  10. Case study: buffer overflows. What are the threat model and policy? Threat model: the adversary doesn't have access to the machine, but can connect to the web server and send any request it wants. The policy is a bit fuzzy: only perform operations intended by the programmer? E.g., we don't want the adversary to steal data, bypass checks, or install backdoors. Defining the policy as "whatever the code does" is overly specific: if there is a bug, the system does what the buggy code does. The web server software is the mechanism: it enforces the policy.

  11. Case study: buffer overflows. Consider the following simplified example code from a web server:

int read_req(void)
{
    char buf[128];
    int i;
    gets(buf);        /* read one line of the request into buf */
    i = atoi(buf);    /* parse it as an integer */
    return i;
}

What can go wrong?

  12. Recall the CPU, registers, and the x86 calling convention. %esp: stack pointer, points to the last thing pushed on the stack. %ebp: base frame pointer, holds the saved %esp from when the current frame was set up. %eip: instruction pointer, the address of the next instruction the CPU will execute. Running example:

int callee(int, int, int);

int caller(void)
{
    return callee(1, 2, 3) + 5;
}

  13. Recall CPU, registers, x86 calling convention. The code the compiler emits for caller (from the previous slide):

caller:
    push %ebp           ; save old call frame
    mov  %esp -> %ebp   ; initialize new call frame
    push 3              ; push arguments right to left
    push 2
    push 1
    call callee         ; call subroutine 'callee' (pushes return address)
    add  %esp, 12       ; remove arguments from frame
    add  %eax, 5        ; modify subroutine result (returned in %eax)
    mov  %ebp -> %esp   ; restore old stack pointer
    pop  %ebp           ; restore old call frame
    ret                 ; return to our own caller

  14. Recall CPU, registers, x86 calling convention. Stack layout just after callee's prologue; the stack grows down, toward lower addresses:

                      (higher memory addresses)
                     +----------------------+
    entry %ebp --->  |  ... caller frame ...|
                     +----------------------+
                     |        arg 3         |
                     +----------------------+
                     |        arg 2         |
                     +----------------------+
                     |        arg 1         |
                     +----------------------+
    entry %esp --->  |    return address    |
                     +----------------------+
    new %ebp ----->  |      saved %ebp      |
                     +----------------------+
                     | callee's local vars  |
                     +----------------------+
    new %esp ----->  |         ...          |
                     +----------------------+
                      (lower memory addresses)
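
As a rough way to connect this diagram to a running program, here is a hypothetical sketch (not part of the slides) that prints a few of these addresses using GCC/Clang builtins. The exact layout depends on the compiler, flags, and architecture; the slides assume 32-bit x86, where arguments live on the stack.

#include <stdio.h>

void callee(int a, int b, int c)
{
    char local[8];

    (void)b; (void)c;   /* only a's address is printed */
    printf("&a (argument)   %p\n", (void *)&a);
    printf("return address  %p\n", __builtin_return_address(0));
    printf("frame pointer   %p\n", __builtin_frame_address(0));
    printf("local buffer    %p\n", (void *)local);
}

int main(void)
{
    callee(1, 2, 3);
    return 0;
}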

  15. Case study: buffer overflows. How does the adversary take advantage of this code? How does the adversary know the address of the buffer? Can we do something more interesting than just overflowing the buffer? What can the adversary do once they are executing code? Why would programmers write such code? How do we avoid mechanism problems? Reduce the amount of security-critical code, and avoid bugs in security-critical code.
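
Answering the first question in miniature: gets() keeps writing past the end of buf, so a long enough input overwrites whatever lies above it on the stack, eventually the saved %ebp and the return address. The following hypothetical sketch (not from the slides, and deliberately buggy) shows a bounds-less copy clobbering the frame; whether it corrupts a neighboring variable, trips a stack canary, or crashes depends on the compiler's layout and protections.

#include <stdio.h>
#include <string.h>

/* Deliberately unsafe, in the style of read_req: no bounds check on the copy. */
static void handle(const char *input)
{
    char buf[16];
    long marker = 0x11111111;            /* a neighbor on the stack */

    strcpy(buf, input);                  /* writes past buf[15] if input is longer */
    printf("marker = %#lx\n", marker);   /* may print a corrupted value */
}   /* with stack protectors enabled, the runtime may abort here instead */

int main(void)
{
    handle("GET /index.html");                                /* fits            */
    handle("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"); /* too long: UB    */
    return 0;
}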

  16. What goes wrong #3: problems with the mechanism (bugs). How to solve? Reduce the amount of security-critical code, and avoid bugs in security-critical code.
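
For the buffer-overflow case study specifically, one hypothetical way to remove the bug (a sketch, not from the slides; the -1 error convention is invented for illustration) is to replace gets() with a bounded read and to parse the integer with error checking:

#include <stdio.h>
#include <stdlib.h>

/* A bounded replacement for the vulnerable read_req: fgets never writes
   past the end of buf, and strtol reports parse failures. */
int read_req_safe(void)
{
    char buf[128];
    char *end;
    long i;

    if (fgets(buf, sizeof buf, stdin) == NULL)
        return -1;                       /* no input */

    i = strtol(buf, &end, 10);
    if (end == buf)
        return -1;                       /* not a number */

    return (int)i;
}

int main(void)
{
    printf("%d\n", read_req_safe());
    return 0;
}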
