Eraser: A Dynamic Data Race Detector for Multithreaded Programs
Eraser is a dynamic data race detector for multithreaded programs, designed to identify timing-dependent data races caused by synchronization errors. The tool detects cases where multiple concurrent threads access shared variables without any explicit mechanism to prevent simultaneous access. The presentation covers the prevalence of multithreading, data races, the happens-before relation, and previous approaches such as monitors, and introduces the Lockset algorithm for checking that a locking discipline is followed. The material is presented in a structured manner, with case studies illustrating Eraser's effectiveness at finding data races.
- Data Race Detector
- Multithreaded Programs
- Synchronization Errors
- Lockset Algorithm
- Happens-Before Relations
Eraser: A Dynamic Data Race Detector for Multithreaded Programs
Stefan Savage, Michael Burrows, Greg Nelson, Patrick Sobalvarro, Thomas Anderson
Presenter: Chao Kong
EECS 582 W16 1
Outline
- Introduction
- Previous works
- The Lockset Algorithm
- Eraser implementation
- Case studies
Introduction
Multithreading is prevalent: Microsoft Word and Netscape Navigator, for example, are multithreaded. Multithreaded programs easily produce timing-dependent data races caused by simple synchronization errors.
Data Race
A data race occurs when multiple concurrent threads access a shared variable and:
- at least one access is a write, and
- there is no explicit mechanism to prevent simultaneous accesses.
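The lost-update effect of such a race can be sketched deterministically by replaying one unlucky interleaving by hand (illustrative Python, not from the slides; the function name is an assumption):

```python
# Simulate one unlucky interleaving of two read-modify-write
# increments on a shared counter: both threads read the old value
# before either writes, so one increment is lost.

def interleave_lost_update():
    v = 0            # shared variable, initially 0
    t1 = v           # Thread 1 reads v (sees 0)
    t2 = v           # Thread 2 reads v (sees 0)
    v = t1 + 1       # Thread 1 writes back 1
    v = t2 + 1       # Thread 2 writes back 1, clobbering Thread 1's update
    return v

print(interleave_lost_update())  # 1, although two increments ran
```

With a lock around each read-modify-write, the reads and writes cannot interleave and the result would be 2.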
Previous Works
- Monitors: avoid data races on shared variables, but only those that are static globals.
- Sun's lock_lint and the Extended Static Checker for Modula-3: handle dynamically allocated shared data, but are problematic because they require statically reasoning about the program's semantics.
- Happens-before-based detection.
Happens-before
The happens-before relation orders events within a single thread (program order) and between threads (via synchronization operations). Two accesses to a variable that are not ordered by happens-before constitute a potential data race.
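Happens-before is typically tracked with vector clocks; a minimal illustrative sketch (the helper names and clocks are assumptions, not from the paper):

```python
# Vector-clock check for the happens-before relation: event a
# happens-before event b iff a's clock is componentwise <= b's
# (and the events differ).

def happens_before(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

def unordered(a, b):
    """Neither access is ordered before the other: a potential race."""
    return not happens_before(a, b) and not happens_before(b, a)

write_t1 = (1, 0)   # thread 1's access, no synchronization with thread 2
write_t2 = (0, 1)   # thread 2's access
print(unordered(write_t1, write_t2))   # True -> potential data race

sync_t2 = (1, 2)    # thread 2's access after synchronizing with thread 1
print(unordered(write_t1, sync_t2))    # False -> ordered, no race
```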
Flaws of Happens-before
- Difficult to implement: requires per-thread information.
- Dependent on the interleaving produced by the scheduler.

Interleaving 1 (race hidden):
Thread 1: y := y+1; Lock(mu); v := v+1; Unlock(mu);
Thread 2: (afterwards) Lock(mu); v := v+1; Unlock(mu); y := y+1;
Because Thread 1's Unlock(mu) happens-before Thread 2's Lock(mu), the two unprotected accesses to y are ordered, and no race on y is reported.

Interleaving 2 (race exposed):
Thread 2: Lock(mu); v := v+1; Unlock(mu); y := y+1;
Thread 1: (concurrently) y := y+1; Lock(mu); v := v+1; Unlock(mu);
Here the two accesses to y are not ordered by happens-before, so the race is detected. Whether the same program triggers a report thus depends on the scheduler.
Lockset Algorithm
- Locking discipline: every shared variable is protected by some lock.
- Infer the protection relation: deduce which locks protect which variables from the execution history.
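The core refinement can be sketched in a few lines (illustrative Python; the lock names and the `access` helper are assumptions):

```python
# Core Lockset refinement: C(v) starts as the set of all locks and is
# intersected with locks_held(t) at every access to v. If C(v) ever
# becomes empty, no single lock consistently protected v.

def access(candidate_set, locks_held):
    refined = candidate_set & locks_held
    return refined, len(refined) == 0   # (new C(v), race detected?)

C_v = frozenset({"mu1", "mu2"})         # all locks in the program
C_v, race = access(C_v, {"mu1"})        # thread 1 touches v holding mu1
print(sorted(C_v), race)                # ['mu1'] False
C_v, race = access(C_v, {"mu2"})        # thread 2 touches v holding mu2
print(sorted(C_v), race)                # [] True -> report race on v
```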
Lockset Algorithm Example
Limitations
- Initialization: shared variables are frequently initialized without holding a lock.
- Read-shared data: some shared variables are written only during initialization and are read-only thereafter.
- Read-write locks: read-write locks allow multiple readers to access a shared variable, but only a single writer.
Read-Write Locks
Trace with locks-held and C(v) annotations (initially locks-held = {}, C(v) = {mu1,mu2}):
Thread 1:
  Lock(mu1);
  v := v+1;     {mu1,mu2}
  Lock(mu2);
  w = v;        {mu1,mu2}
  Unlock(mu2);
  Unlock(mu1);  {}
Thread 2:
  Lock(mu2);
  w := v;       {mu2}
  Unlock(mu2);  {mu2}
Refinement-I
- Initialization: don't start the lockset refinement until a second thread accesses the variable.
- Read-shared data: report a race only after the variable becomes write-shared.
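These two refinements correspond to the per-variable state machine the paper describes (Virgin, Exclusive, Shared, Shared-Modified); a simplified sketch, with the transition function name assumed:

```python
# Simplified per-variable states: a variable is Virgin until first
# accessed, Exclusive while a single thread uses it, Shared once a
# second thread reads it, and Shared-Modified once it becomes
# write-shared. Lockset violations are reported only in the
# Shared-Modified state.

def step(state, owner, tid, is_write):
    if state == "virgin":
        return "exclusive", tid          # first access: remember the thread
    if state == "exclusive":
        if tid == owner:
            return state, owner          # still single-threaded: no checking
        return ("shared_modified" if is_write else "shared"), owner
    if state == "shared" and is_write:
        return "shared_modified", owner  # write-shared: start reporting
    return state, owner

state, owner = "virgin", None
state, owner = step(state, owner, 1, True)    # T1 writes during init
state, owner = step(state, owner, 2, False)   # T2 reads: shared, no report
state, owner = step(state, owner, 2, True)    # T2 writes: write-shared
print(state)                                  # shared_modified
```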
Refinement-II
- Reader-writer locking: change the algorithm to reflect the lock type (read vs. write mode).
Read-Write Locks (with Refinement-II)
Trace (initially locks-held = {}, C(v) = {mu1,mu2}):
Thread 1:
  Lock(mu1);
  v := v+1;     C(v) = {mu1}
  Lock(mu2);
  w = v;        C(v) = {mu1}
  Unlock(mu2);
  Unlock(mu1);  locks-held = {}
Thread 2:
  Lock(mu2);    locks-held = {mu2}
  w := v;       C(v) = {}  <- empty lockset: race on v reported
  Unlock(mu2);
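Refinement-II can be sketched as follows (illustrative Python; the `refine` helper is an assumption): a lock held in read mode can protect a read but not a write, so writes intersect C(v) only with write-held locks.

```python
# Reader-writer refinement: on a read of v, intersect C(v) with all
# locks held (in either mode); on a write, intersect only with locks
# held in write mode, since a read lock cannot protect a write.

def refine(candidates, read_held, write_held, is_write):
    protecting = write_held if is_write else (read_held | write_held)
    return candidates & protecting

C_v = frozenset({"mu"})
C_v = refine(C_v, set(), {"mu"}, is_write=True)    # write under write-lock
C_v = refine(C_v, {"mu"}, set(), is_write=False)   # read under read-lock
print(sorted(C_v))                                 # ['mu'] -> discipline holds
C_v = refine(C_v, {"mu"}, set(), is_write=True)    # write under read-lock only
print(sorted(C_v))                                 # [] -> race reported
```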
Implementation
- Binary rewriting is used to add instrumentation that calls into the Eraser runtime.
- Calls to the storage allocator initialize C(v).
- Each lock acquire and release updates locks-held(t).
- Each load and store updates C(v).
- Storage explosion is handled by a table lookup: lock sets are represented by small indexes, and each shadow word holds an index number.
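The index representation can be sketched like this (illustrative Python; the class and method names are assumptions): each distinct lock set is stored once, and the per-word shadow state holds only its small index.

```python
# Interning table for lock sets: identical sets map to a single index,
# so a small shadow-word field can refer to an arbitrarily large set
# shared by many variables.

class LocksetTable:
    def __init__(self):
        self._sets = []     # index -> frozenset of locks
        self._ids = {}      # frozenset -> index

    def intern(self, locks):
        key = frozenset(locks)
        if key not in self._ids:
            self._ids[key] = len(self._sets)
            self._sets.append(key)
        return self._ids[key]

    def lookup(self, index):
        return self._sets[index]

table = LocksetTable()
i = table.intern({"mu1", "mu2"})
j = table.intern({"mu2", "mu1"})    # same set, different order
print(i == j)                       # True: stored once, shared by all words
print(sorted(table.lookup(i)))      # ['mu1', 'mu2']
```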
Shadow Memory and Lockset Indexes
Performance
- Slowdown by a factor of 10 to 30x: the overhead of making a procedure call at every load and store instruction.
- The slowdown can change thread ordering and affect the behavior of time-sensitive applications.
Common False Alarms - Annotations
- Memory reuse: EraserReuse(address, size) resets shadow words to the virgin state.
- Lock annotations: EraserReadLock/Unlock(lock), EraserWriteLock/Unlock(lock).
- Private locks: EraserIgnoreOn(), EraserIgnoreOff().
- Benign races.
Races inside the OS
- The interrupt system is used to provide mutual exclusion: raising the interrupt level implicitly locks everything affected by that level. Eraser explicitly associates a lock with each interrupt level, so disabling interrupts is treated like acquiring that lock.
- Signal-and-wait synchronization: one thread performs V to signal another that performs P and waits; since the semaphore is not held by any thread, it does not fit the lockset model directly.
A Race for Optimization in AltaVista
Data Race in Vesta
Conclusion
- The paper describes a new tool, Eraser, for dynamically detecting data races in lock-based multithreaded programs.
- The paper evaluates Eraser's performance and overhead.
- The paper recounts the experience of detecting data races in several case studies.
Questions? Thank you!