Enhancing System Performance through Prefetching and Caching Strategies

 
Athanasios E. Papathanasiou and Michael L. Scott
Proc. USENIX '04 Ann. Technical Conf., June 2004
 
 
Prefetching and caching traditionally:
Increase throughput
Decrease latency
Ignore energy efficiency
 
 
The problem for traditional algorithms: the disk's Standby mode.
Standby-to-active transition:
Time: at least 3 seconds
Spin-up power: 1.5-2x that of active mode
(see the breakeven sketch below)
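These two figures define a Standby breakeven time: spinning down pays off only if the idle period outlasts the point where standby savings cover the spin-up cost. A minimal sketch of that arithmetic in C, using hypothetical wattages (the slide gives only the 1.5-2x ratio and the 3-second delay):

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical figures; the slide gives only ratios and the delay. */
    double p_idle    = 1.0;  /* W, disk spinning but idle  */
    double p_standby = 0.2;  /* W, disk spun down          */
    double p_spinup  = 4.0;  /* W, assumed 2x active (2 W) */
    double t_spinup  = 3.0;  /* s, standby-to-active delay */

    /* Breakeven: idle energy p_idle*T equals standby energy plus the
     * spin-up cost, p_standby*(T - t_spinup) + p_spinup*t_spinup.   */
    double t_be = t_spinup * (p_spinup - p_standby) / (p_idle - p_standby);
    printf("breakeven idle time: %.1f s\n", t_be);  /* about 14 s here */
    return 0;
}
```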
 
 
 
Three strategies compared:
1. Fetch on demand
2. Traditional prefetching strategy
3. Energy-conscious strategy
 
 
1. Fetch on demand:
Run time: 66 units
Number of misses: 6
Idle time: 6 intervals of 10 units each
 
 
2. Traditional prefetching:
Run time: 61 units
Number of misses: 1
Idle time: 5 intervals of 9 units, 1 interval of 8 units
 
 
3. Aggressive (energy-conscious) prefetching:
Run time: 61 units
Number of misses: 1
Idle time: 1 interval of 27 units, 1 interval of 28 units
Total idle time is nearly unchanged, but it is consolidated into two long intervals (compare the three patterns in the sketch below).
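Across the three strategies the total idle time is nearly identical (60 vs. 53 vs. 55 units); what changes is its shape. A small sketch scoring each idle pattern against an assumed breakeven threshold of 15 units (the threshold is illustrative):

```c
#include <stdio.h>

/* Idle time usable for spin-down: only intervals at least as long as
 * the breakeven time count (the threshold of 15 is an assumption). */
static int usable_idle(const int *iv, int n, int breakeven) {
    int total = 0;
    for (int i = 0; i < n; i++)
        if (iv[i] >= breakeven)
            total += iv[i];
    return total;
}

int main(void) {
    int on_demand[]   = {10, 10, 10, 10, 10, 10};
    int traditional[] = {9, 9, 9, 9, 9, 8};
    int aggressive[]  = {27, 28};

    printf("fetch on demand: %2d usable units\n", usable_idle(on_demand, 6, 15));
    printf("traditional:     %2d usable units\n", usable_idle(traditional, 6, 15));
    printf("aggressive:      %2d usable units\n", usable_idle(aggressive, 2, 15));
    return 0;
}
```

Only the aggressive pattern produces intervals long enough to spin down at all.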
 
 
Traditional prefetching strategies must decide:
when to fetch a block
which block to fetch
which block to replace
Four rules for performance-optimal prefetching:
1. Optimal Prefetching
2. Optimal Replacement
3. Do no harm
4. First Opportunity
 
 
Revised rules to accommodate the requirement of energy efficiency:
1. Optimal Prefetching
2. Optimal Replacement
3. Do no harm
4. Maximize Disk Utilization: initiate a prefetch after the completion of a fetch, if there are blocks available for replacement.
5. Respect Idle Time: never interrupt a period of inactivity unless:
the prefetch has to be performed for performance reasons, or
a miss event occurs
(a sketch of rule 5 follows)
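Rule 5 reduces to a two-condition predicate; a minimal sketch, where "needed for performance" is modeled as a nearby prefetch deadline (both parameter names are illustrative):

```c
#include <stdbool.h>

/* Rule 5 (Respect Idle Time): the disk may be disturbed during an
 * idle period only if a prefetch must start now to avoid a stall,
 * or a demand miss has already forced the disk active. */
bool may_access_disk_when_idle(bool prefetch_deadline_near, bool demand_miss) {
    return prefetch_deadline_near || demand_miss;
}
```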
 
 
Problem: a miss during idle mode forces a spin-up!
Solution: fetch much more data:
prefetch enough blocks to avoid misses during an idle interval (see the depth sketch below)
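The required depth follows from the predicted idle length and the application's consumption rate, capped by the memory available for prefetching. A sketch with illustrative names and units:

```c
#include <stdio.h>

/* Blocks to prefetch so an idle interval passes without a miss:
 * consumption rate times predicted idle length, rounded up, capped
 * by the pages that can be freed for prefetching. */
static long prefetch_depth(double blocks_per_sec, double idle_sec,
                           long free_pages) {
    long need = (long)(blocks_per_sec * idle_sec + 0.999);
    return need < free_pages ? need : free_pages;
}

int main(void) {
    /* e.g. 50 blocks/s consumed over a predicted 20 s idle phase */
    printf("depth = %ld blocks\n", prefetch_depth(50.0, 20.0, 4096));
    return 0;
}
```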
 
 
1. Deciding When to Prefetch
2. Deciding What to Prefetch
3. Deciding What to Replace
 
 
1. Deciding When to Prefetch
While the disk is in idle mode, postpone prefetching.
Epoch algorithm: has two phases:
a. Active phase: prefetching occurs in this phase.
To manage prefetching, the OS must:
1. Predict future data accesses
2. Compute the amount of memory needed for prefetching
3. Free the required amount of memory (flushing dirty pages)
b. Idle phase: initiate a new prefetching cycle early enough to achieve optimal performance.
(one epoch cycle is sketched below)
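Put together, one epoch cycle might look like the sketch below; every helper is a hypothetical stand-in, not the paper's kernel interface:

```c
#include <stdio.h>

/* Hypothetical stand-ins for the three active-phase steps. */
static void predict_future_accesses(void) { puts("1. predict accesses"); }
static long memory_needed(void)           { return 256; }
static void free_pages(long n) { printf("3. free %ld pages (may flush dirty pages)\n", n); }
static void prefetch(long n)   { printf("   prefetch %ld pages\n", n); }
static void idle_phase(void)   { puts("idle phase: disk may enter Standby"); }

int main(void) {
    for (int epoch = 0; epoch < 2; epoch++) {
        /* Active phase: predict, size, free, then issue prefetches. */
        predict_future_accesses();
        long n = memory_needed();   /* 2. size the request */
        free_pages(n);
        prefetch(n);
        /* Idle phase: the next cycle must start early enough that the
         * prefetched data never runs out before the disk is back. */
        idle_phase();
    }
    return 0;
}
```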
 
 
2. Deciding What to Prefetch
Prediction is based on hints.
A monitor daemon tracks file activity to generate hints.
 
 
3. Deciding What to Replace
Two parameters determine the probability of a wrong eviction:
1. The reserved amount of memory
2. Prefetching or future writes
Miss types:
1. Eviction miss (suggests that the prefetching depth should decrease)
2. Prefetch miss (suggests that the prefetching depth should increase)
3. Compulsory miss
(the feedback rule is sketched below)
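The first two miss types form a natural feedback loop on prefetching depth. A sketch of that rule (the enum and the step size of one are illustrative):

```c
/* Eviction miss: prefetched too deep, useful pages were pushed out.
 * Prefetch miss: not deep enough, a predicted block wasn't fetched.
 * Compulsory miss: unpredictable, so no signal either way. */
enum miss_type { MISS_EVICTION, MISS_PREFETCH, MISS_COMPULSORY };

long adjust_depth(long depth, enum miss_type m) {
    switch (m) {
    case MISS_EVICTION:   return depth > 1 ? depth - 1 : 1;
    case MISS_PREFETCH:   return depth + 1;
    case MISS_COMPULSORY: return depth;
    }
    return depth;
}
```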
 
 
Epoch: the epoch prefetching algorithm is implemented in the Linux kernel, version 2.4.20.
1. Hinted Files: disclosed by the monitor daemon or by the application; the kernel also automatically detects sequential accesses.
 
 
2. Prefetch Thread
Problem: lack of coordination among requests generated by different applications.
Solution: a centralized entity responsible for generating prefetching requests for all running applications (sketched below).
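A minimal userspace sketch of such a centralized entity: applications enqueue hints, and one thread drains the queue, so requests can be batched and ordered instead of interleaving at random. The queue layout and timing are illustrative, not the paper's implementation:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int pending[64];   /* hinted block numbers, all applications */
static int head, tail;

/* Called from any application thread. */
void submit_hint(int block) {
    pthread_mutex_lock(&lock);
    pending[tail++ % 64] = block;
    pthread_mutex_unlock(&lock);
}

/* The single prefetch thread: drains everything queued in one batch,
 * so the disk sees one coordinated burst rather than scattered I/O. */
void *prefetch_thread(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head < tail)
            printf("prefetch block %d\n", pending[head++ % 64]);
        pthread_mutex_unlock(&lock);
        usleep(100 * 1000);  /* wait for the next batch */
    }
    return NULL;
}
```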
 
 
3. Prefetch Cache
Each page in the prefetch cache has a timestamp indicating when it is expected to be accessed.
A page is moved to the standard LRU list when:
1. the page is referenced, or
2. its timestamp is exceeded
(see the sketch below)
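A sketch of the two promotion conditions; the page descriptor and its fields are assumptions, not the kernel's actual structures:

```c
#include <stdbool.h>
#include <time.h>

/* Each prefetched page carries the time it is expected to be used. */
struct pf_page {
    int    id;
    time_t expected_access;  /* set when the page is prefetched */
};

/* Promote to the standard LRU list on an actual reference, or once
 * the expected access time has passed. */
bool promote_to_lru(const struct pf_page *p, bool referenced, time_t now) {
    return referenced || now > p->expected_access;
}
```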
 
 
4. Eviction Cache
A data structure that keeps track of evicted pages: it retains the metadata of recently evicted pages.
It can be used to estimate a suitable prefetching depth (sketch below).
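A sketch of the idea: remember recently evicted page IDs so that a re-request for one of them can be recognized as an eviction miss, which (per the previous slide) argues for a smaller prefetching depth. The fixed-size ring is illustrative and assumes positive page IDs:

```c
#include <stdbool.h>

static int evicted[128];   /* metadata only: IDs, not page contents */
static int next_slot;

void record_eviction(int page_id) {
    evicted[next_slot++ % 128] = page_id;
}

/* A hit here on a later request is an eviction miss: the page would
 * have been useful, so the prefetching depth was probably too large. */
bool was_recently_evicted(int page_id) {
    for (int i = 0; i < 128; i++)
        if (evicted[i] == page_id)
            return true;
    return false;
}
```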
 
 
5. Handling Write Activity
In the original Linux kernel, the update daemon runs every 5 seconds.
The current implementation flushes all dirty buffers once per minute.
Write-behind of dirty buffers belonging to a file can be postponed until:
1. the file is closed, or
2. the process that opened the file exits
(a userspace analogue is sketched below)
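The kernel mechanism can't be reproduced here, but a userspace analogue conveys the shape: dirty data sits in memory and reaches the disk at close time rather than every few seconds. Ordinary stdio buffering serves as the stand-in:

```c
#include <stdio.h>

int main(void) {
    FILE *f = fopen("log.txt", "w");
    if (!f)
        return 1;
    setvbuf(f, NULL, _IOFBF, 1 << 16);  /* fully buffer writes in memory */
    fputs("dirty data, still only in memory\n", f);
    /* ... the disk can stay in Standby while writes accumulate ... */
    fclose(f);                          /* the flush happens here */
    return 0;
}
```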
 
 
6. Power Management Policy
The prefetch thread predicts the length of the idle phase.
If (predicted length > Standby breakeven time):
the disk is set to the Standby state.
The power management algorithm monitors prediction accuracy.
If the predictor repeatedly mispredicts the length of the idle phase:
the power management policy ignores its predictions.
(policy sketch below)
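A sketch of both halves of the policy: the breakeven test and the misprediction monitor. The trust threshold of three strikes and the "ended at less than half the predicted length" test are assumptions:

```c
#include <stdbool.h>

static int mispredictions;

/* Spin down only when the predicted idle phase beats the breakeven
 * time, and only while the predictor is still trusted. */
bool should_enter_standby(double predicted_idle, double breakeven) {
    if (mispredictions >= 3)
        return false;  /* repeated mispredictions: ignore predictions */
    return predicted_idle > breakeven;
}

/* Called when the idle phase actually ends. */
void report_idle_outcome(double predicted_idle, double actual_idle) {
    if (actual_idle < 0.5 * predicted_idle)
        mispredictions++;    /* idle ended far earlier than predicted */
    else
        mispredictions = 0;  /* good prediction restores trust */
}
```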
 