Understanding the Challenges of Dynamic Software Analysis


This presentation surveys dynamic software analysis: the prevalence and cost of software errors, an example of a modern concurrency bug, the large runtime overheads of current analysis tools, and two proposed techniques (demand-driven analysis and dataflow sampling) for accelerating them.





Presentation Transcript


  1. Accelerating Dynamic Software Analyses Joseph L. Greathouse Ph.D. Candidate Advanced Computer Architecture Laboratory University of Michigan December 1, 2011

  2. Software Errors Abound
     NIST: software errors cost the U.S. ~$60 billion/year as of 2002.
     FBI CCS: security issues cost $67 billion/year as of 2005 (viruses, network intrusion, etc.).
     [Chart: cataloged software vulnerabilities per year, 2000-2008, counting CVE candidates and CERT vulnerabilities; y-axis up to 9000.]

  3. Example of a Modern Bug (Nov. 2010 OpenSSL security flaw)
     Thread 1 (mylen=small) and Thread 2 (mylen=large) both execute:
       if(ptr == NULL) {
         len = thread_local->mylen;
         ptr = malloc(len);
         memcpy(ptr, data, len);
       }

  4. Example of a Modern Bug
     One problematic interleaving (time flows downward):
       Thread 1 (mylen=small)              Thread 2 (mylen=large)
       if(ptr==NULL)
                                           if(ptr==NULL)
                                           len2=thread_local->mylen;
                                           ptr=malloc(len2);
       len1=thread_local->mylen;
       ptr=malloc(len1);
       memcpy(ptr, data1, len1)
                                           memcpy(ptr, data2, len2)
     One of the two allocations assigned to the shared ptr is LEAKED.
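
The pattern is easy to reproduce. Below is a minimal, self-contained C sketch of the racy lazy-initialization idiom shown above; the names (worker_racy, payload, struct tls) are hypothetical and this is not the actual OpenSSL code. worker_fixed shows one conventional way to serialize the check-then-initialize.

```c
/* Minimal sketch of the racy lazy-initialization pattern (hypothetical
 * names; not the actual OpenSSL code). Both threads race on the
 * unsynchronized check-then-initialize of the shared ptr. */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

static char *ptr = NULL;                      /* shared, unsynchronized */
static pthread_mutex_t ptr_lock = PTHREAD_MUTEX_INITIALIZER;
static char payload[64];                      /* stand-in for "data" */

struct tls { size_t mylen; };                 /* per-thread "thread_local" */

void *worker_racy(void *arg)
{
    struct tls *t = arg;
    if (ptr == NULL) {                        /* BUG: check and init race */
        size_t len = t->mylen;
        ptr = malloc(len);
        memcpy(ptr, payload, len);            /* ptr may meanwhile point at the
                                                 other thread's smaller buffer */
    }
    return NULL;
}

void *worker_fixed(void *arg)
{
    struct tls *t = arg;
    pthread_mutex_lock(&ptr_lock);            /* serialize check + init + copy */
    if (ptr == NULL) {
        size_t len = t->mylen;
        ptr = malloc(len);
        memcpy(ptr, payload, len);
    }
    pthread_mutex_unlock(&ptr_lock);
    return NULL;
}

int main(void)
{
    struct tls small = { 16 }, large = { 64 };
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker_racy, &small);   /* swap in worker_fixed */
    pthread_create(&t2, NULL, worker_racy, &large);   /* to remove the race   */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```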

  5. Dynamic Software Analysis
     Analyze the program as it runs.
     + Sees system state; finds errors on any executed path.
     - LARGE runtime overheads; only tests one path per run.
     [Diagram: analysis instrumentation is applied to the program; the instrumented program runs on in-house test machine(s), and the developer receives analysis results after a LONG run time.]

  6. Runtime Overheads: How Large?
     Data Race Detection (e.g. Inspector XE): 2-300x
     Taint Analysis (e.g. TaintCheck): 2-200x
     Memory Checking (e.g. MemCheck): 5-50x
     Dynamic Bounds Checking: 10-80x
     Symbolic Execution: 10-200x

  7. Outline
     Problem Statement
     Background Information
       Demand-Driven Dynamic Dataflow Analysis
     Proposed Solutions
       Demand-Driven Data Race Detection
       Sampling to Cap Maximum Overheads

  8. Dynamic Dataflow Analysis
     Associate meta-data with program values.
     Propagate/clear meta-data while executing.
     Check meta-data for safety & correctness.
     This forms dataflows of meta/shadow information.

  9. Example: Taint Analysis
     Program (data):                 Meta-data operations:
     x = read_input()                Associate (taint x)
     validate(x)                     Clear (x is now trusted)
     y = x * 1024                    Propagate (x -> y)
     w = x + 42                      Propagate (x -> w); Check w
     a += y                          Propagate (y -> a); Check a
     z = y * 75                      Propagate (y -> z); Check z
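
As a concrete illustration, here is a minimal C sketch of how a taint tool might shadow each value with one taint bit and apply the associate/propagate/clear/check rules above. All helper names (taint_input, taint_check, etc.) are hypothetical, not from any particular tool, and the statements are reordered slightly from the slide so that both a flagged and a clean check appear.

```c
/* One-bit-per-value taint propagation sketch (hypothetical helpers).
 * A real tool shadows every memory location and register and
 * instruments every instruction instead of using wrapper calls. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { long value; bool tainted; } shadowed;

/* Associate: values from untrusted input start out tainted. */
static shadowed taint_input(long v) { return (shadowed){ v, true }; }

/* Clear: validation removes the taint bit. */
static void taint_clear(shadowed *s) { s->tainted = false; }

/* Propagate: the result of an operation is tainted if any operand is. */
static shadowed taint_binop(long result, shadowed a, shadowed b)
{ return (shadowed){ result, a.tainted || b.tainted }; }

/* Check: warn when a tainted value reaches a sensitive use. */
static void taint_check(const char *name, shadowed s)
{ if (s.tainted) fprintf(stderr, "warning: %s is tainted at use\n", name); }

int main(void)
{
    shadowed x = taint_input(7);                         /* x = read_input() */
    shadowed y = taint_binop(x.value * 1024, x,
                             (shadowed){ 1024, false }); /* y = x * 1024 */
    taint_check("y", y);                                 /* flagged: x not yet validated */

    taint_clear(&x);                                     /* validate(x) */
    shadowed w = taint_binop(x.value + 42, x,
                             (shadowed){ 42, false });   /* w = x + 42 */
    taint_check("w", w);                                 /* clean: taint was cleared */
    return 0;
}
```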

  10. Demand-Driven Dataflow Analysis
      Only analyze shadowed data.
      [Diagram: execution runs in the native application while it touches non-shadowed data (no meta-data) and switches to the instrumented application while it touches shadowed data; meta-data detection triggers the switch.]

  11. Finding Meta-Data
      Goal: no additional overhead when there is no meta-data.
      Needs hardware support: take a fault when touching shadowed data.
      Solution: virtual memory watchpoints.
      [Diagram: virtual pages (V) that map to physical pages (P) holding shadowed data are marked so that accesses to them FAULT.]
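
A minimal user-space sketch of the same idea uses POSIX mprotect and a SIGSEGV handler: pages holding shadowed data are made inaccessible, so the first touch faults and control reaches the analysis. This is a simplification under stated assumptions (whole-page granularity, a handler that merely unprotects the page), not the paper's exact mechanism, and mprotect inside a signal handler is a Linux-friendly shortcut rather than strictly portable code.

```c
/* Sketch: virtual-memory watchpoints via mprotect + SIGSEGV. */
#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static long page_size;

/* On a fault, a real tool would switch to the instrumented code path;
 * here we just unprotect the page so the faulting access can retry. */
static void on_fault(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    void *page = (void *)((uintptr_t)si->si_addr & ~(uintptr_t)(page_size - 1));
    mprotect(page, page_size, PROT_READ | PROT_WRITE);
}

int main(void)
{
    page_size = sysconf(_SC_PAGESIZE);

    struct sigaction sa = { .sa_sigaction = on_fault, .sa_flags = SA_SIGINFO };
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    /* One page of "shadowed" data; arming the watchpoint = removing access. */
    char *shadow_page = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    strcpy(shadow_page, "tainted bytes");
    mprotect(shadow_page, page_size, PROT_NONE);

    /* The first touch faults into on_fault(), then the load completes. */
    printf("first byte: %c\n", shadow_page[0]);
    return 0;
}
```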

  12. Results by Ho et al.
      lmbench best case: whole-system taint analysis has a 101.7x slowdown (normalized) versus 1.98x for on-demand taint analysis.
      [Chart: slowdown (x) when everything is tainted, for netcat_transmit, netcat_receive, ssh_transmit, and ssh_receive; y-axis up to 200x.]

  13. Outline
      Problem Statement
      Background Information
        Demand-Driven Dynamic Dataflow Analysis
      Proposed Solutions
        Demand-Driven Data Race Detection
        Sampling to Cap Maximum Overheads

  14. Software Data Race Detection
      Add checks around every memory access.
      Find inter-thread sharing events.
      Was there synchronization between write-shared accesses? If not, it is a data race.
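
To make the cost concrete, here is a drastically simplified sketch of the kind of per-access check such detectors insert. Real tools track happens-before relations or locksets and keep (synchronized) shadow state for every monitored word; the single shadow record and the boolean "held a lock" field below are only illustrative assumptions.

```c
/* Drastically simplified per-access race check (illustrative only). */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Shadow record for one monitored word. A real detector keeps vector
 * clocks or locksets per word and synchronizes its own shadow state. */
typedef struct {
    pthread_t last_thread;
    bool      last_was_write;
    bool      last_held_lock;
    bool      valid;
} shadow_word;

static void check_access(shadow_word *sw, bool is_write, bool holds_lock,
                         const char *where)
{
    pthread_t self = pthread_self();
    if (sw->valid && !pthread_equal(sw->last_thread, self) &&
        (is_write || sw->last_was_write) &&        /* write-shared access pair */
        !(holds_lock && sw->last_held_lock)) {     /* no common synchronization */
        fprintf(stderr, "possible data race at %s\n", where);
    }
    sw->last_thread    = self;
    sw->last_was_write = is_write;
    sw->last_held_lock = holds_lock;
    sw->valid = true;
}

/* The instrumentation wraps every access with a check like this. */
static int shared_counter;
static shadow_word shadow_counter;

static void instrumented_increment(bool holds_lock)
{
    check_access(&shadow_counter, /*is_write=*/true, holds_lock, "shared_counter++");
    shared_counter++;
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000; i++)
        instrumented_increment(/*holds_lock=*/false);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```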

  15. Example of Data Race Detection
      Thread 1 (mylen=small)              Thread 2 (mylen=large)
      if(ptr==NULL)
      len1=thread_local->mylen;
      ptr=malloc(len1);
      memcpy(ptr, data1, len1)
                                          if(ptr==NULL)
                                          len2=thread_local->mylen;
                                          ptr=malloc(len2);
                                          memcpy(ptr, data2, len2)
      (time flows downward)
      At each access the detector asks: is ptr write-shared, and was there interleaved synchronization?

  16. SW Race Detection is Slow
      [Chart: race-detector slowdown (x) on the Phoenix and PARSEC benchmark suites; y-axis up to 300x.]

  17. Inter-thread Sharing is What's Important
      "Data races ... are failures in programs that access and update shared data in critical sections" (Netzer & Miller, 1992).
      Accesses to thread-local data involve NO SHARING, and even accesses to shared data are mostly free of INTER-THREAD SHARING EVENTS, as in the two code regions of the running example (time flows downward):
        if(ptr==NULL) len1=thread_local->mylen; ptr=malloc(len1); memcpy(ptr, data1, len1)
        if(ptr==NULL) len2=thread_local->mylen; ptr=malloc(len2); memcpy(ptr, data2, len2)

  18. Very Little Inter-Thread Sharing
      [Chart: percentage of write-sharing events on the Phoenix and PARSEC benchmarks; y-axis up to 3%.]

  19. Use Demand-Driven Analysis!
      [Diagram: the multi-threaded application runs natively for local accesses; an inter-thread sharing monitor detects inter-thread sharing and forwards only those accesses to the software race detector.]

  20. Finding Inter-thread Sharing
      Virtual memory watchpoints? Two problems:
      Inter-thread sharing: ~100% of accesses cause page faults (FAULT on nearly every access).
      Granularity gap: page protections are per-process, not per-thread.

  21. Hardware Sharing Detector
      Use hardware performance counters with PEBS (Precise Event-Based Sampling): when an armed counter overflows, the debug store records EFLAGS, EIP, register values, and memory info, yielding a precise fault.
      Intel's HITM event counts W->R data sharing: one core writes a cache line (Modified) and another core's read snoops it (HITM).
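
A hedged sketch of how one might ask Linux for samples of such an event through perf_event_open is shown below. The raw event encoding is a placeholder that must be replaced with the HITM-style event code for the specific microarchitecture, the SIGIO-based overflow delivery is a simplification (a real tool also mmap()s the ring buffer to read the PEBS records), and this is not the mechanism used in the presented system, only an approximation of it.

```c
/* Sketch: sampling a raw hardware event with perf_event_open on Linux.
 * PLACEHOLDER_HITM_EVENT is NOT a real constant; fill in the sharing
 * event encoding for your CPU model. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/perf_event.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

#define PLACEHOLDER_HITM_EVENT 0x0000   /* assumption: per-microarchitecture code */

static volatile sig_atomic_t sharing_events;

static void on_overflow(int sig) { (void)sig; sharing_events++; }

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size           = sizeof(attr);
    attr.type           = PERF_TYPE_RAW;
    attr.config         = PLACEHOLDER_HITM_EVENT;
    attr.sample_period  = 1;            /* interrupt on (nearly) every event */
    attr.wakeup_events  = 1;
    attr.precise_ip     = 2;            /* request PEBS-style precise samples */
    attr.exclude_kernel = 1;

    int fd = syscall(SYS_perf_event_open, &attr, 0 /*this task*/,
                     -1 /*any cpu*/, -1 /*no group*/, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    signal(SIGIO, on_overflow);
    fcntl(fd, F_SETFL, O_ASYNC);        /* deliver SIGIO on counter overflow */
    fcntl(fd, F_SETOWN, getpid());
    /* A real tool would also mmap() the fd's ring buffer for PEBS records. */

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* ... run the monitored code here; each overflow bumps sharing_events,
     * where a real tool would instead enable the race detector. */

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    printf("sharing events observed: %d\n", (int)sharing_events);
    return 0;
}
```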

  22. Potential Accuracy & Perf. Problems
      Limitations of performance counters:
        HITM only finds W->R data sharing.
        Hardware prefetcher events aren't counted.
      Limitations of cache events:
        SMT sharing can't be counted.
        Cache eviction causes missed events.
        False sharing, etc.
      PEBS events still go through the kernel.

  23. Demand-Driven Analysis on Real HW
      [Flowchart: execute an instruction. If analysis is not enabled, check for a HITM interrupt; on a HITM interrupt, enable analysis. If analysis is enabled, run SW race detection; if there has been no sharing recently, disable analysis and continue natively.]
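
In code form, the decision loop of the flowchart might look roughly like the sketch below. All function names are hypothetical stand-ins; the real system is driven by the hardware interrupt and the binary instrumentation framework rather than by a software loop.

```c
/* Sketch of the demand-driven toggle: run natively until the HITM
 * counter fires, analyze while sharing is recent, then drop back to
 * native execution. All externs are hypothetical stand-ins. */
#include <stdbool.h>

extern bool hitm_interrupt_pending(void);   /* set by the perf-counter handler  */
extern bool shared_recently(void);          /* e.g. sharing seen in a recent window */
extern void execute_instruction_native(void);
extern void execute_instruction_with_race_detection(void);

void run_demand_driven(void)
{
    bool analysis_enabled = false;

    for (;;) {
        if (!analysis_enabled) {
            execute_instruction_native();
            if (hitm_interrupt_pending())
                analysis_enabled = true;       /* sharing detected: start analyzing */
        } else {
            execute_instruction_with_race_detection();
            if (!shared_recently())
                analysis_enabled = false;      /* quiet again: go back to native */
        }
    }
}
```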

  24. Performance Increases
      [Chart: speedup (x) of demand-driven analysis over continuous analysis on the Phoenix and PARSEC benchmarks; y-axis up to 20x, with one Phoenix benchmark at 51x.]

  25. Demand-Driven Analysis Accuracy
      Accuracy vs. continuous analysis: 97%.
      [Chart: the same speedup bars, annotated per benchmark with races found relative to continuous analysis: 1/1, 2/4, 2/4, 2/4, 3/3, 4/4, 4/4, 3/3, 4/4, 4/4, 4/4.]

  26. Outline
      Problem Statement
      Background Information
        Demand-Driven Dynamic Dataflow Analysis
      Proposed Solutions
        Demand-Driven Data Race Detection
        Sampling to Cap Maximum Overheads

  27. Reducing Overheads Further: Sampling
      Lower overheads by skipping some analyses.
      [Plot: detection accuracy (%) vs. overhead, spanning from no analysis to complete analysis, with an ideal operating region marked.]

  28. Sampling Allows Distribution
      Many users testing at little overhead see more errors than one user at high overhead.
      [Plot: the same accuracy-vs-overhead plot, placing end users and beta testers at the low-overhead end and the developer at the high-overhead end.]

  29. Cannot Naïvely Sample Code
      [Diagram: in the taint-analysis example, skipping instrumented instructions breaks the meta-data. Skipping the Clear at validate(x) leaves stale taint, so a later Check raises a false positive; skipping the Propagate at a += y drops taint, so the Check on a is a false negative.]

  30. Solution: Sample Data, not Code
      Sampling must be aware of meta-data.
      Remove meta-data from skipped dataflows.
      This prevents false positives.
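
A tiny sketch of the data-sampling rule, reusing the hypothetical shadowed type from the earlier taint sketch: when a dataflow is chosen for skipping, its meta-data is cleared rather than left stale, so no later check can fire spuriously. The budget query is a placeholder for the overhead manager described on the next slides.

```c
/* Sketch: sampling data, not code. Dropping a dataflow clears its
 * meta-data; later checks can then only under-report (false negatives),
 * never fire on stale taint (false positives). Types/helpers hypothetical. */
#include <stdbool.h>

typedef struct { long value; bool tainted; } shadowed;

/* Placeholder for the overhead manager's decision (see slide 32). */
static bool sampling_budget_exhausted(void) { return true; }

static void maybe_skip_dataflow(shadowed *vals[], int n)
{
    if (!sampling_budget_exhausted())
        return;                         /* keep analyzing this dataflow */
    for (int i = 0; i < n; i++)
        vals[i]->tainted = false;       /* shed the whole dataflow's meta-data */
}

int main(void)
{
    shadowed x = { 7, true };           /* x = read_input(): tainted        */
    shadowed y = { 7 * 1024, true };    /* y = x * 1024: taint propagated   */
    shadowed *flow[] = { &x, &y };      /* one dataflow rooted at x         */
    maybe_skip_dataflow(flow, 2);
    return x.tainted || y.tainted;      /* 0: the dataflow was cleanly shed */
}
```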

  31. Dataflow Sampling Example
      [Diagram: the same taint-analysis example with whole dataflows skipped instead of individual instructions; a skipped dataflow loses its meta-data, so the remaining checks can at worst miss an error (false negative) and never raise a false positive.]

  32. Dataflow Sampling
      Remove dataflows if execution is too slow.
      [Diagram: as before, the sampling analysis tool switches between the native and instrumented application on meta-data detection; when measured overhead exceeds the OH threshold, it clears meta-data to shed dataflows.]
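
A rough sketch of such an overhead manager's core decision follows; the threshold, timing source, and accounting policy are illustrative assumptions, and the real OHM in the prototype lives in the Xen/QEMU stack rather than in the application itself.

```c
/* Sketch: overhead-threshold check for dataflow sampling. Compares time
 * spent in instrumented execution against total run time and asks for
 * dataflows to be shed when the ratio exceeds the user's cap. */
#include <stdbool.h>
#include <time.h>

typedef struct {
    double max_analysis_fraction;  /* e.g. 0.10 == at most 10% time in analysis */
    double analysis_seconds;
    double total_seconds;
} overhead_manager;

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Called when leaving a (native or instrumented) execution region. */
static void ohm_account(overhead_manager *ohm, double start, bool in_analysis)
{
    double elapsed = now_seconds() - start;
    ohm->total_seconds += elapsed;
    if (in_analysis)
        ohm->analysis_seconds += elapsed;
}

/* True when the tool should clear meta-data and shed dataflows. */
static bool ohm_over_budget(const overhead_manager *ohm)
{
    return ohm->total_seconds > 0.0 &&
           ohm->analysis_seconds / ohm->total_seconds > ohm->max_analysis_fraction;
}

int main(void)
{
    overhead_manager ohm = { .max_analysis_fraction = 0.10 };
    double t0 = now_seconds();
    /* ... instrumented work would run here ... */
    ohm_account(&ohm, t0, /*in_analysis=*/true);
    return ohm_over_budget(&ohm) ? 1 : 0;
}
```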

  33. Prototype Setup
      Taint-analysis sampling system; network packets are untrusted.
      Xen-based demand analysis; whole-system analysis with a modified QEMU.
      The Overhead Manager (OHM) is user-controlled.
      [Diagram: the Xen hypervisor hosts the guest OS and applications (apps, network stack, shadow page table) alongside an admin VM running Linux with the QEMU-based taint analysis and the OHM.]

  34. Benchmarks
      Performance: network throughput (example: ssh_receive).
      Accuracy of sampling analysis: real-world security exploits.
      Name      Error Description
      Apache    Stack overflow in Apache Tomcat JK Connector
      Eggdrop   Stack overflow in Eggdrop IRC bot
      Lynx      Stack overflow in Lynx web browser
      ProFTPD   Heap smashing attack on ProFTPD server
      Squid     Heap smashing attack on Squid proxy server

  35. Performance of Dataflow Sampling
      [Chart: ssh_receive throughput (MB/s) vs. maximum % time in analysis (0-100%), with the throughput under no analysis shown as a reference line; y-axis up to 25 MB/s.]

  36. Accuracy with Background Tasks
      ssh_receive running in the background.
      [Chart: % chance of detecting each exploit (Apache, Eggdrop, Lynx, ProFTPD, Squid) at maximum analysis-time budgets of 10%, 25%, 50%, 75%, and 90%.]

  37. BACKUP SLIDES

  38. Performance Difference
      [Chart: race-detector slowdown (x) on the Phoenix and PARSEC benchmarks; y-axis up to 300x.]

  39. Width Test
