Enhancing System Performance through Prefetching and Caching Strategies
Explore the benefits of prefetching and caching for improving system throughput and reducing latency, while also considering energy efficiency. Traditional algorithms are compared with energy-conscious strategies for prefetching and replacement that improve both performance and disk utilization. Learn how to accommodate energy-efficiency requirements and when to initiate a prefetch based on block availability or performance demands.
Presentation Transcript
Athanasios E. Papathanasiou and Michael L. Scott. Proc. USENIX '04 Annual Technical Conference, June 2004.
Traditional prefetching and caching: increase throughput, decrease latency, but ignore energy efficiency.
Problem of traditional algorithms: the standby-to-active transition takes at least 3 seconds, and spin-up power is 1.5-2x that of active mode.
Three strategies: 1. Fetch on demand 2. Traditional prefetching 3. Energy-conscious (aggressive) prefetching
1. Fetch on demand: run time 66 units; 6 misses; idle time: 6 intervals of 10 units each.
2. Traditional prefetching: run time 61 units; 1 miss; idle time: 5 intervals of 9 units and 1 interval of 8 units.
3. Aggressive prefetching: run time 61 units; 1 miss; idle time: 1 interval of 27 units and 1 interval of 28 units.
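The point of the comparison above is that aggressive prefetching coalesces many short idle intervals into a few long ones, which are the only intervals worth a disk spin-down. A minimal sketch, using the slide's interval lengths; the 15-unit breakeven threshold and the function name are illustrative assumptions, not values from the paper:

```python
# Given each strategy's idle intervals, keep only those long enough
# to justify entering Standby (longer than an assumed breakeven time).
def spin_down_opportunities(idle_intervals, breakeven=15):
    """Return the idle intervals long enough to justify Standby."""
    return [length for length in idle_intervals if length >= breakeven]

fetch_on_demand = [10] * 6        # 6 intervals of 10 units
traditional     = [9] * 5 + [8]   # 5 intervals of 9 units, 1 of 8
aggressive      = [27, 28]        # 2 long, coalesced intervals

for name, intervals in [("on-demand", fetch_on_demand),
                        ("traditional", traditional),
                        ("aggressive", aggressive)]:
    print(name, spin_down_opportunities(intervals))
```

Only the aggressive schedule yields any spin-down opportunities, even though its total idle time is similar to the on-demand case.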
Traditional prefetching strategies decide: when to fetch a block, which block to fetch, and which block to replace. Four rules yield performance-optimal prefetching: 1. Optimal Prefetching 2. Optimal Replacement 3. Do No Harm 4. First Opportunity
To accommodate the requirement of energy efficiency: 1. Optimal Prefetching 2. Optimal Replacement 3. Do No Harm 4. Maximize Disk Utilization: initiate a prefetch after the completion of a fetch, if there are blocks available for replacement. 5. Respect Idle Time: never interrupt a period of inactivity unless the prefetch must be performed for performance reasons, or a miss event occurs.
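The last two rules can be summarized as a single decision: piggyback prefetches on an already-active disk, and never wake an idle disk without cause. A sketch of that decision, assuming hypothetical boolean inputs (names are illustrative, not from the paper):

```python
# Combined check for the "Maximize Disk Utilization" and
# "Respect Idle Time" rules described above.
def should_prefetch(disk_active, fetch_just_completed,
                    blocks_available_for_replacement,
                    performance_requires_prefetch, miss_occurred):
    if disk_active:
        # Maximize Disk Utilization: start a prefetch right after a
        # fetch completes, if victim blocks are available.
        return fetch_just_completed and blocks_available_for_replacement
    # Respect Idle Time: never interrupt inactivity unless forced by
    # performance requirements or an actual miss.
    return performance_requires_prefetch or miss_occurred
```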
Problem: a miss during idle mode forces a spin-up. Solution: fetch much more data, prefetching enough blocks to avoid misses during an idle interval.
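The required prefetching depth follows directly from the idle interval length and the rate at which the application consumes blocks. A minimal sketch; the consumption rate and the sample numbers are illustrative assumptions:

```python
import math

# Prefetch enough blocks to cover a whole idle interval without a miss.
def prefetch_depth(idle_interval, consumption_rate):
    """Blocks needed so no miss occurs during the idle interval.

    idle_interval: length of the interval in time units.
    consumption_rate: blocks the application consumes per time unit.
    """
    return math.ceil(idle_interval * consumption_rate)

print(prefetch_depth(30, 0.5))  # 15 blocks cover a 30-unit interval
```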
Three decisions: 1. Deciding When to Prefetch 2. Deciding What to Prefetch 3. Deciding What to Replace
1. Deciding When to Prefetch: postpone prefetching while the disk is in idle mode. The Epoch algorithm has two phases: a. Active phase: prefetching occurs in this phase. To manage prefetching, the OS must: predict future data accesses; compute the amount of memory needed for prefetching; and free the required amount of memory (flushing dirty pages). b. Idle phase: initiate a new prefetching cycle early enough to achieve optimal performance.
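The epoch structure above can be sketched as one cycle of an active phase followed by an idle phase. All helper callbacks here are hypothetical placeholders for the OS mechanisms the slide lists, not real kernel interfaces:

```python
# One epoch: active phase (predict, reserve memory, prefetch),
# then idle phase (rest until the next prefetch deadline).
def run_epoch(predict_accesses, compute_memory_needed, free_memory,
              prefetch, idle_until):
    # Active phase: predict future accesses, free enough memory
    # (possibly flushing dirty pages), then issue the prefetches.
    future_blocks = predict_accesses()
    needed = compute_memory_needed(future_blocks)
    free_memory(needed)
    prefetch(future_blocks)
    # Idle phase: the disk rests; wake early enough for the next cycle.
    idle_until("next prefetch deadline")
```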
2. Deciding What to Prefetch: prediction is based on hints; a monitor daemon tracks file activity to generate hints.
3. Deciding What to Replace: two parameters determine the probability of a wrong eviction: the reserved amount of memory, and prefetching or future writes. Miss types: 1. Eviction miss (suggests that the prefetching depth should decrease) 2. Prefetch miss (suggests that the prefetching depth should increase) 3. Compulsory miss
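The miss classification above amounts to a feedback loop on the prefetching depth. A minimal sketch; the step size of 1 and the string labels are illustrative choices, not from the paper:

```python
# Adjust the prefetching depth from the observed miss type.
def adjust_depth(depth, miss_type):
    if miss_type == "eviction":    # evicted a page that was still needed
        return max(1, depth - 1)   # prefetching too deep: shrink
    if miss_type == "prefetch":    # needed page was not yet prefetched
        return depth + 1           # prefetching too shallow: grow
    return depth                   # compulsory miss: no signal either way
```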
Implementation: epoch prefetching in the Linux kernel, version 2.4.20. 1. Hinted Files: disclosed by the monitor or the application; the kernel automatically detects sequential accesses.
2. Prefetch Thread: Problem: lack of coordination among requests generated by different applications. Solution: a centralized entity responsible for generating prefetching requests for all running applications.
3. Prefetch Cache: each page in the prefetch cache has a timestamp indicating when it is expected to be accessed. A page is moved to the standard LRU list when: 1. the page is referenced, or 2. its timestamp is exceeded.
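The prefetch cache's promotion rule can be sketched with a small in-memory model. The class and method names are illustrative, and the plain list standing in for the kernel's LRU list is a simplifying assumption:

```python
# Model of the prefetch cache: pages carry an expected access time and
# migrate to the regular LRU list when referenced or when that time passes.
class PrefetchCache:
    def __init__(self):
        self.pages = {}   # page id -> expected access time
        self.lru = []     # stand-in for the kernel's standard LRU list

    def insert(self, page, expected_time):
        self.pages[page] = expected_time

    def reference(self, page):
        if page in self.pages:      # rule 1: referenced, promote to LRU
            del self.pages[page]
            self.lru.append(page)

    def tick(self, now):
        for page, t in list(self.pages.items()):
            if now >= t:            # rule 2: timestamp exceeded
                del self.pages[page]
                self.lru.append(page)
```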
4. Eviction Cache: a data structure that keeps track of evicted pages; it retains the metadata of recently evicted pages, which can be used to estimate a suitable prefetching depth.
5. Handling Write Activity: In the original Linux kernel, the update daemon runs every 5 seconds. The current implementation flushes all dirty buffers once per minute, and write-behind of dirty buffers belonging to a file can be postponed until: 1. the file is closed, or 2. the process that opened the file exits.
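The postponed write-behind described above can be sketched as buffering dirty data per file and flushing only at close time (or process exit). Names are illustrative, not kernel APIs:

```python
# Deferred write-behind: hold a file's dirty buffers until the file is
# closed, instead of flushing them on a short fixed timer.
class DeferredWriteback:
    def __init__(self, flush):
        self.flush = flush   # callback that writes a buffer to disk
        self.dirty = {}      # file -> list of dirty buffers

    def write(self, file, buf):
        # No disk I/O yet; the disk can stay in a low-power state.
        self.dirty.setdefault(file, []).append(buf)

    def close(self, file):
        # File closed (or owning process exited): flush its buffers.
        for buf in self.dirty.pop(file, []):
            self.flush(file, buf)
```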
Power Management Policy: the prefetch thread predicts the length of the idle phase. If the predicted length exceeds the Standby breakeven time, the disk is set to the Standby state. The power management algorithm monitors prediction accuracy; if it repeatedly mispredicts the length of the idle phase, the policy ignores the prediction.
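The policy above can be sketched as a predictor-gated spin-down decision with a simple trust mechanism. The misprediction limit of 3 and all names are illustrative assumptions, not values from the paper:

```python
# Spin down when the predicted idle phase exceeds the Standby breakeven
# time; stop trusting the predictor after repeated mispredictions.
class PowerPolicy:
    def __init__(self, breakeven, max_mispredicts=3):
        self.breakeven = breakeven
        self.max_mispredicts = max_mispredicts
        self.mispredicts = 0

    def decide(self, predicted_idle):
        if self.mispredicts >= self.max_mispredicts:
            return "active"   # predictions untrusted: ignore them
        return "standby" if predicted_idle > self.breakeven else "active"

    def observe(self, predicted_idle, actual_idle):
        # Count a misprediction when standby was chosen but the idle
        # phase ended before the breakeven time was recouped.
        if predicted_idle > self.breakeven and actual_idle < self.breakeven:
            self.mispredicts += 1
```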