Challenges in Computer Systems Security
Security in computer systems means achieving a goal in the presence of an adversary. This is hard because policies must be guaranteed under realistic, open-ended threat models. Failures arise from problems with the policy itself, from flawed assumptions in the threat model, and from bugs in the enforcement mechanism; perfect security is not achievable. Strategies for coping include thinking carefully about policy implications, making threat models explicit, reducing and hardening security-critical code, and weighing security risk against benefit.
Presentation Transcript
1. Intro: CS 230 Computer Systems Security (Marco Canini)
What is security?
- Achieving some goal in the presence of an adversary
- High-level plan for thinking about security:
  - Policy
  - Threat model
  - Mechanism
  - Result: no way for an adversary within the threat model to violate the policy
- Note that the goal has nothing to say about the mechanism
Why is security hard?
- It's a negative goal: need to guarantee the policy, assuming the threat model
  - Difficult to think of all possible ways that an attacker might break in
  - Realistic threat models are open-ended (almost negative models)
- Contrast: easy to check whether a positive goal is upheld, e.g., Alice can actually read file F (a point check of this kind is sketched below)
- The weakest link matters
- Iterative process: design, update the threat model as necessary, etc.
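A minimal sketch (mine, not from the slides; assuming a POSIX system and a hypothetical path for file F) of why positive goals are easy to test: one access() call can confirm that this process can read F, whereas the negative goal ("nobody outside the policy can ever read F") has no such point check.

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        const char *f = "/home/alice/file-F";   /* hypothetical path for file F */
        if (access(f, R_OK) == 0)
            printf("positive goal holds here: this process can read %s\n", f);
        else
            printf("this process cannot read %s\n", f);
        /* No single check like this can establish the negative goal that
         * no adversary, by any path, can ever come to read F. */
        return 0;
    }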
What's the point if we can't achieve perfect security?
- Understand what a system can do, and what a system cannot
- Manage security risk vs. benefit
- Better security often makes new functionality practical and safe
What goes wrong #1: problems with the policy
- Example: Sarah Palin's email account
- Example: Mat Honan's accounts at Amazon, Apple, Google, etc.
- Example: Twitter's @N account hijacking
- How to solve?
  - Think hard about the implications of policy statements
  - Some policy-checking tools can help, but they need a way to specify what's bad
  - Difficult in distributed systems: don't know what everyone is doing
What goes wrong #2: problems with the threat model / assumptions
- Example: human factors not accounted for
- Example: computational assumptions change over time
- Example: assuming your hardware is trustworthy
- Example: all SSL certificate CAs are fully trusted
- Example: machines disconnected from the Internet are secure? (Stuxnet: Anatomy of a Computer Virus)
- Example: assuming good randomness for cryptography (see the sketch below)
- Example: subverting military OS security
- Example: subverting firewalls
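A minimal sketch (mine, not the lecture's; assuming a Unix-like system) of the "good randomness" assumption: seeding rand() with the current time is guessable, while /dev/urandom draws from the kernel's CSPRNG. If key material comes from the first path, the threat model's randomness assumption is already broken.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Predictable: the seed is roughly the current second, which an
     * adversary can guess and replay to reproduce the output. */
    static int weak_key(void) {
        srand((unsigned)time(NULL));
        return rand();
    }

    /* Better: read key material from the kernel's CSPRNG. */
    static int better_key(unsigned char *out, size_t len) {
        FILE *f = fopen("/dev/urandom", "rb");
        if (f == NULL)
            return -1;
        size_t got = fread(out, 1, len, f);
        fclose(f);
        return got == len ? 0 : -1;
    }

    int main(void) {
        unsigned char k[16];
        printf("weak key: %d\n", weak_key());
        printf("urandom read %s\n", better_key(k, sizeof(k)) == 0 ? "ok" : "failed");
        return 0;
    }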
What goes wrong #2: problems with the threat model / assumptions
- How to solve?
  - More explicit threat models, to understand possible weaknesses
  - Simpler, more general threat models
  - Better designs may eliminate / lessen reliance on certain assumptions
    - E.g., alternative trust models that don't have fully-trusted CAs
    - E.g., authentication mechanisms that aren't susceptible to phishing
What goes wrong #3: problems with the mechanism (bugs)
- Example: Apple's iCloud password-guessing rate limits
- Example: missing access control checks in Citigroup's credit card web site (a hypothetical sketch of this bug class follows)
- Example: Android's Java SecureRandom weakness leads to Bitcoin theft
- Example: bugs in sandboxes (NaCl, JavaScript, Java runtime)
- Example: Moxie Marlinspike's SSL certificate name checking bug
- Example: buffer overflows
- Let's see a case study
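A hypothetical sketch of the missing-access-control bug class (not Citigroup's actual code): the handler must verify that the requested account belongs to the logged-in user instead of trusting a client-supplied identifier.

    #include <stdio.h>

    struct session { int user_id; };

    /* Toy ownership table for illustration: account i belongs to user i. */
    static int account_owner(int account_id) { return account_id; }

    static void handle_statement(struct session *s, int requested_account) {
        /* This is the check that was effectively missing in the bug class. */
        if (account_owner(requested_account) != s->user_id) {
            printf("denied: account %d is not yours\n", requested_account);
            return;
        }
        printf("statement for account %d\n", requested_account);
    }

    int main(void) {
        struct session alice = { 7 };
        handle_statement(&alice, 7);    /* allowed */
        handle_statement(&alice, 42);   /* would leak data without the check */
        return 0;
    }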
Case study: buffer overflows
- Consider a web server
- Often, the web server's code is responsible for security
  - E.g., checking which URLs can be accessed, checking SSL client certs, etc.
- Thus, bugs in the server's code can lead to security compromises (a small sketch of such a bug follows)
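A hedged illustration (mine, not from the lecture) of why such code is security-critical: the URL check below is the mechanism enforcing "clients may only fetch files under /public/", and a single omission (no canonicalization of "..") lets a request escape it.

    #include <stdio.h>
    #include <string.h>

    static int url_is_allowed(const char *path) {
        /* BUG: checks the prefix but never canonicalizes "..",
         * so the request can escape /public/. */
        return strncmp(path, "/public/", 8) == 0;
    }

    int main(void) {
        printf("%d\n", url_is_allowed("/public/index.html"));      /* 1: allowed */
        printf("%d\n", url_is_allowed("/public/../etc/passwd"));   /* 1: also allowed! */
        return 0;
    }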
Case study: buffer overflows
- What's the threat model and policy?
  - Assume that the adversary can connect to the web server and supply any inputs
  - Threat model: the adversary doesn't have access to the machine but can send any request it wants
  - Policy is a bit fuzzy: only perform operations intended by the programmer?
    - E.g., don't want the adversary to steal data, bypass checks, or install backdoors
  - "What the code does" is overly specific: if there is a bug, the system does what the wrong code does
- The web server software is the mechanism: it enforces the policy
Case study: buffer overflows

Consider the following simplified example code from a web server:

    int read_req(void) {
        char buf[128];
        int i;

        gets(buf);
        i = atoi(buf);
        return i;
    }

What can go wrong?
Recall CPU, registers, x86 calling convention
- %esp: stack pointer, points to the last thing on the stack
- %ebp: base frame pointer, used to save the previous value of %esp
- %eip: instruction pointer, the next instruction to be executed by the CPU

    int callee(int, int, int);

    int caller(void) {
        return callee(1, 2, 3) + 5;
    }
Recall CPU, registers, x86 calling convention

The compiled form of caller() from the previous slide:

    caller:
        ; make new call frame
        push %ebp             ; save old call frame
        mov  %esp -> %ebp     ; initialize new call frame
        push 3
        push 2
        push 1
        call callee           ; call subroutine 'callee'
        add  %esp, 12         ; remove arguments from frame
        add  %eax, 5          ; modify subroutine result (stored in %eax)
        mov  %ebp -> %esp     ; restore old call frame
        pop  %ebp             ; restore old call frame
        ret                   ; return
Recall CPU, registers, x86 calling convention

Stack layout during the call to callee (higher memory addresses at the top; the stack grows down):

                          +----------------------+
      entry %ebp ------>  |    ... caller ...    |
                          |    ... frame ...     |
                          +----------------------+
                          |        arg 3         |
                          +----------------------+
                          |        arg 2         |
                          +----------------------+
                          |        arg 1         |
                          +----------------------+
      entry %esp ------>  |    return address    |
                          +----------------------+
      new %ebp   ------>  |      saved %ebp      |
                          +----------------------+
                          | callee's local vars  |
      new %esp   ------>  |         ...          |
                          +----------------------+

("entry" marks the register values at entry to callee; "new" marks their values after callee's prologue.)
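A minimal sketch (assuming GCC or Clang on 32-bit x86 with no stack protector; not part of the lecture) that maps this layout onto read_req: the local buffer sits at lower addresses than the saved %ebp and the return address, so writing past its end with gets() climbs toward them.

    #include <stdio.h>

    static void frame_demo(void) {
        char buf[16];
        /* __builtin_frame_address(0) is the current frame pointer (%ebp);
         * on 32-bit x86 the return address sits just above it. */
        printf("buf starts at         %p\n", (void *)buf);
        printf("current frame (%%ebp): %p\n", __builtin_frame_address(0));
    }

    int main(void) {
        frame_demo();
        return 0;
    }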
Case study: buffer overflows
- How does the adversary take advantage of this code?
- How does the adversary know the address of the buffer?
- Can we do something interesting beyond overflowing the buffer?
- What can the adversary do once they are executing code?
- Why would programmers write such code?
- How to avoid mechanism problems?
  - Reduce the amount of security-critical code
  - Avoid bugs in security-critical code (one possible fix for read_req is sketched below)
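One possible fix, sketched (not presented as the course's answer): fgets() bounds the read to sizeof(buf), unlike gets(), which has no way of knowing how large buf is.

    #include <stdio.h>
    #include <stdlib.h>

    int read_req(void) {
        char buf[128];
        if (fgets(buf, sizeof(buf), stdin) == NULL)
            return -1;                 /* hypothetical error convention */
        return atoi(buf);
    }

    int main(void) {
        printf("read %d\n", read_req());
        return 0;
    }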
What goes wrong #3: problems with the mechanism (bugs)
- How to solve?
  - Reduce the amount of security-critical code
  - Avoid bugs in security-critical code