Orchestrated Scheduling and Prefetching for GPGPUs


This presentation describes an orchestrated scheduling and prefetching mechanism for GPGPUs that improves IPC through better warp scheduling. It proposes a prefetch-aware warp scheduler that makes a simple prefetcher more capable, yielding a 25% average IPC improvement over prefetching combined with conventional warp scheduling policies. The talk also surveys related work on cache-conscious scheduling, two-level scheduling, and thread-block-aware scheduling, and discusses effects on DRAM bandwidth utilization.



Presentation Transcript


  1. Orchestrated Scheduling and Prefetching for GPGPUs Adwait Jog, Onur Kayiran, Asit Mishra, Mahmut Kandemir, Onur Mutlu, Ravi Iyer, Chita Das

  2. Parallelize your code! Launch more threads! (Multi-threading) Improve replacement policies (Caching) Improve the prefetcher: look deep into the future, if you can! (Prefetching) Improve memory scheduling policies (Main memory) Is the warp scheduler aware of these techniques?

  3. Prior warp schedulers address some of these techniques: Two-Level Scheduling (MICRO 2011) and Thread-Block-Aware Scheduling (OWL, ASPLOS 2013) target multi-threading; Cache-Conscious Scheduling (MICRO 2012) targets caching. A main-memory-prefetching-aware warp scheduler? That is the gap this work fills.

  4. Our Proposal: Prefetch-Aware Warp Scheduler. Goals: make a simple prefetcher more capable, and improve system performance by orchestrating the scheduling and prefetching mechanisms. Results: 25% average IPC improvement over prefetching + conventional warp scheduling policy, and 7% average IPC improvement over prefetching + the best previous warp scheduling policy.

  5. Outline: Proposal, Background and Motivation, Prefetch-Aware Scheduling, Evaluation, Conclusions.

  6. High-Level View of a GPU. Streaming multiprocessors (SMs) execute cooperative thread arrays (CTAs, or thread blocks), whose threads are grouped into warps (W); each SM contains a warp scheduler, a prefetcher, ALUs, and L1 caches. The SMs connect through an interconnect to the shared L2 cache and DRAM.

  7. Warp Scheduling Policy. The baseline scheduler gives all warps equal scheduling priority and executes them round-robin (RR). Problem: warps stall roughly at the same time. In the timeline, warps W1-W8 all finish compute phase (1) together, their DRAM requests D1-D8 are serviced back to back, and the SIMT core stalls until all of them return before compute phase (2) can begin.
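
As a concrete illustration, here is a minimal C++ sketch of round-robin warp selection. The Warp struct and its ready flag are hypothetical stand-ins for simulator state, not GPGPU-Sim's actual interface:

    #include <vector>

    // Hypothetical warp state, for illustration only.
    struct Warp {
        bool ready;  // false while the warp waits on a DRAM request
    };

    // Round-robin (RR): every warp has equal priority. Scan from the
    // warp after the last one issued and pick the first ready warp.
    // Because all warps advance at the same rate, they reach their
    // memory instructions -- and stall -- at roughly the same time.
    int pick_warp_rr(const std::vector<Warp>& warps, int last_issued) {
        const int n = static_cast<int>(warps.size());
        for (int i = 1; i <= n; ++i) {
            const int w = (last_issued + i) % n;
            if (warps[w].ready) return w;
        }
        return -1;  // all warps stalled: the SIMT core idles
    }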

  8. Two-Level (TL) Scheduling. TL partitions the warps into groups (Group 1: W1-W4; Group 2: W5-W8) and prioritizes one group at a time. While Group 1's DRAM requests D1-D4 are in flight, Group 2 executes its compute phase (1), overlapping computation with memory access and saving cycles relative to RR.
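
A minimal sketch of the two-level idea under the same hypothetical state (group assignment and sizes are illustrative): the scheduler serves the highest-priority group that has a ready warp, so one group computes while another waits on DRAM:

    #include <vector>

    struct Warp {
        bool ready;
        int group;  // e.g., W1-W4 in group 0, W5-W8 in group 1
    };

    // Two-level (TL): consider groups in priority order; within the
    // first group that has a ready warp, pick round-robin. One group's
    // compute overlaps another group's in-flight DRAM requests, which
    // is where the "saved cycles" in the timeline come from.
    int pick_warp_tl(const std::vector<Warp>& warps, int num_groups,
                     int last_issued) {
        const int n = static_cast<int>(warps.size());
        for (int g = 0; g < num_groups; ++g) {
            for (int i = 1; i <= n; ++i) {
                const int w = (last_issued + i) % n;
                if (warps[w].group == g && warps[w].ready) return w;
            }
        }
        return -1;  // every group fully stalled
    }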

  9. Accessing DRAM. The warps access two address streams: X, X+1, X+2, X+3 map to Bank 1, and Y, Y+1, Y+2, Y+3 map to Bank 2. Under RR, all eight warps (W1-W8) issue together, so both banks are busy (high bank-level parallelism) and each bank streams through one open row (high row buffer locality). Under TL, Group 1 touches only Bank 1 while Bank 2 is idle for a period, and vice versa: row buffer locality stays high, but bank-level parallelism is low.
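
The trade-off follows from how addresses map to banks and rows. A toy model of this mapping (parameters are illustrative, not the paper's DRAM configuration):

    #include <cstdint>
    #include <vector>

    // Toy DRAM model at cache-line granularity: four consecutive lines
    // (X..X+3) share one row of one bank; the next four fall in the
    // next bank. An access hits the row buffer if its row is open.
    struct DramModel {
        static constexpr int kNumBanks = 2;
        static constexpr int kLinesPerRow = 4;
        std::vector<int64_t> open_row = std::vector<int64_t>(kNumBanks, -1);

        int bank_of(int64_t line) const { return (line / kLinesPerRow) % kNumBanks; }
        int64_t row_of(int64_t line) const { return line / (kLinesPerRow * kNumBanks); }

        bool access(int64_t line) {          // returns true on a row-buffer hit
            const int b = bank_of(line);
            const bool hit = (open_row[b] == row_of(line));
            open_row[b] = row_of(line);
            return hit;
        }
    };
    // RR issues X..X+3 and Y..Y+3 together: both banks busy (high
    // bank-level parallelism) and mostly row hits (high row buffer
    // locality). TL first issues only X..X+3: Bank 2 idles, so
    // bank-level parallelism is low even though locality stays high.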

  10. Warp Scheduler Perspective (Summary)

      Warp Scheduler      Forms Multiple    DRAM Bandwidth Utilization
                          Warp Groups?      (Bank-Level Parallelism / Row Buffer Locality)
      Round-Robin (RR)    No                High / High
      Two-Level (TL)      Yes               Low / High

  11. Evaluating RR and TL Schedulers. [Chart: IPC improvement factor with a perfect L1 cache, for round-robin (RR) and two-level (TL), across JPEG, KMN, FFT, SCP, BLK, BFSR, FWT, SPMV, SSC, PVC, and GMEAN.] A perfect L1 cache would improve IPC by 2.20X under RR and 1.88X under TL on average. Can we further reduce this gap via prefetching?

  12. (1) Prefetching Saves More Cycles. Under TL (A), Group 2's compute phase (2) cannot start until its demand requests D5-D8 are serviced. With prefetching (B), prefetch requests P5-P8 are issued on Group 2's behalf while Group 1's demands D1-D4 are serviced, so Group 2's compute phase (2) can start earlier, saving cycles over TL alone.

  13. (2) Prefetching Improves DRAM Bandwidth Utilization. Under TL alone, Bank 2 is idle for a period while Group 1's requests (X through X+3) stream through Bank 1. If prefetch requests for Group 2's addresses (Y through Y+3) are issued during Group 1's accesses, there is no idle period: high bank-level parallelism and high row buffer locality at the same time.

  14. Challenge: Designing a Prefetcher. While Group 1 accesses X through X+3 on Bank 1, the prefetcher must somehow predict Group 2's far-away addresses Y through Y+3 on Bank 2. That requires a sophisticated prefetcher.

  15. Our Goal. Keep the prefetcher simple, yet get the performance benefits of a sophisticated prefetcher. To this end, we will design a prefetch-aware warp scheduling policy. Why? Because a simple prefetcher does not improve performance with existing scheduling policies.

  16. Simple Prefetching + RR Scheduling. Under RR, warps issue demands D1, D3, D5, D7 while the prefetcher issues P2, P4, P6, P8, but each prefetch overlaps with the demand it was meant to cover (P2 with D2, P4 with D4): the prefetches are late, and no cycles are saved over RR alone.

  17. Simple Prefetching + TL Scheduling. The same problem recurs within each TL group: consecutive warps are scheduled together, so prefetches P2 and P4 overlap with demands D2 and D4 and arrive late. Group 2 still computes while Group 1 waits (the TL benefit), but prefetching saves no additional cycles over TL.

  18. Let's try a simple prefetcher: on an access to cache line X, prefetch X + 4.
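
A sketch of that prefetcher (the distance of 4 matches the slide; the class shape and the issue callback are assumptions for illustration):

    #include <cstdint>
    #include <functional>
    #include <utility>

    // Simple strided prefetcher: on a demand to cache line X, issue a
    // prefetch for X + distance. It never sees which addresses the
    // other warp group will actually touch, so it can only guess.
    class SimplePrefetcher {
    public:
        SimplePrefetcher(int64_t distance, std::function<void(int64_t)> issue)
            : distance_(distance), issue_(std::move(issue)) {}

        void on_demand_access(int64_t line) { issue_(line + distance_); }

    private:
        int64_t distance_;                    // e.g., 4, as on this slide
        std::function<void(int64_t)> issue_;  // hands prefetches to the memory system
    };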

  19. Simple Prefetching with TL Scheduling. While Group 1 accesses X through X+3 on Bank 1, the simple prefetcher issues X+4 and beyond, but Group 2's actual addresses start at Y, which may not equal X+4. The prefetches (UP1-UP4) are useless, and Bank 2 is still idle for a period.

  20. Simple Prefetching with TL Scheduling (timeline). The useless prefetches U5-U8 consume DRAM bandwidth after D1-D4, and Group 2's demands D5-D8 must still be serviced afterward: no cycles are saved over TL.

  21. Warp Scheduler Perspective (Summary)

      Warp Scheduler      Forms Multiple    Simple Prefetcher    Bank-Level     Row Buffer
                          Warp Groups?      Friendly?            Parallelism    Locality
      Round-Robin (RR)    No                No                   High           High
      Two-Level (TL)      Yes               No                   Low            High

  22. Our Goal (recap). Keep the prefetcher simple, yet get the performance benefits of a sophisticated prefetcher. To this end, we will design a prefetch-aware warp scheduling policy, since a simple prefetcher does not improve performance with existing scheduling policies.

  23. The key idea: a prefetch-aware (PA) warp scheduler combined with a simple prefetcher behaves like a sophisticated prefetcher.

  24. Prefetch-Aware (PA) Warp Scheduling. Round-robin scheduling runs all warps W1-W8 together. Two-level scheduling puts consecutive warps in the same group (Group 1: W1-W4, accessing X through X+3). PA scheduling instead associates non-consecutive warps with one group (e.g., W1, W3, W5, W7 in one group, accessing X, X+2, Y, Y+2), as sketched below. See the paper for the generalized PA scheduling algorithm.
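
A minimal sketch of the two-group case of this assignment (the paper's generalized algorithm handles more; the function is illustrative):

    #include <vector>

    // Prefetch-aware (PA) grouping: assign alternating warps to
    // alternating groups, so each group's accesses skip every other
    // cache line. Group 0's demands (X, X+2, Y, Y+2) then let a simple
    // X -> X+1 prefetcher fetch exactly group 1's future demands
    // (X+1, X+3, Y+1, Y+3).
    std::vector<int> assign_pa_groups(int num_warps, int num_groups) {
        std::vector<int> group(num_warps);
        for (int w = 0; w < num_warps; ++w)
            group[w] = w % num_groups;  // non-consecutive warps share a group
        return group;
    }
    // Contrast with TL, which uses group[w] = w / warps_per_group and
    // therefore keeps consecutive warps (and consecutive addresses)
    // in the same group.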

  25. Simple Prefetching with PA Scheduling. The reasoning behind non-consecutive warp grouping is that the groups can prefetch for each other using a simple prefetcher: when the first group (W1, W3, W5, W7) demands X, X+2, Y, Y+2 across Banks 1 and 2, a simple X -> X+1 prefetcher fetches exactly X+1, X+3, Y+1, Y+3, the addresses the second group will demand.

  26. Simple Prefetching with PA Scheduling (continued). When the second group (W2, W4, W6, W8) later accesses X+1, X+3, Y+1, Y+3, its requests hit in the cache.

  27. Simple Prefetching with PA Scheduling (timeline). Group 1's demands D1, D3, D5, D7 are accompanied by timely prefetches P2, P4, P6, P8 for Group 2, so Group 2's compute phase (2) can start early: cycles are saved even over TL.

  28. DRAM Bandwidth Utilization. Because each PA group's demands (plus the accompanying prefetches) span both banks, high bank-level parallelism and high row buffer locality are achieved together: an 18% increase in bank-level parallelism, at the cost of a 24% decrease in row buffer locality, relative to TL.

  29. Warp Scheduler Perspective (Summary)

      Warp Scheduler                        Forms Multiple    Simple Prefetcher    Bank-Level     Row Buffer
                                            Warp Groups?      Friendly?            Parallelism    Locality
      Round-Robin (RR)                      No                No                   High           High
      Two-Level (TL)                        Yes               No                   Low            High
      Prefetch-Aware (PA, w/ prefetching)   Yes               Yes                  High           High

  30. Outline: Proposal, Background and Motivation, Prefetch-Aware Scheduling, Evaluation, Conclusions.

  31. Evaluation Methodology. Evaluated on GPGPU-Sim, a cycle-accurate GPU simulator. Baseline architecture: 30 SMs, 8 memory controllers, crossbar-connected; 1300 MHz core clock, SIMT width = 8, max 1024 threads/core; 32 KB L1 data cache, 8 KB texture and constant caches; L1 data cache prefetcher; GDDR3 at 1100 MHz. Applications chosen from: MapReduce applications, Rodinia (heterogeneous applications), Parboil (throughput-computing applications), and the NVIDIA CUDA SDK (GPGPU applications).

  32. Spatial Locality Detector Based Prefetching. Cache lines are grouped into macro-blocks (e.g., X, X+1, X+2, X+3). When some lines of a macro-block are demanded (D), the remaining not-yet-accessed lines are prefetched (P). The prefetch-aware scheduler improves the effectiveness of this simple prefetcher; see the paper for more details.
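
A minimal sketch of a macro-block spatial-locality detector of this kind, with four lines per macro-block as on the slide (the trigger threshold and data structure are assumptions, not the paper's exact hardware):

    #include <bitset>
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    // Track which lines of each 4-line macro-block have been demanded;
    // once a block looks hot (2+ demanded lines here), prefetch its
    // remaining, not-yet-accessed lines.
    class MacroBlockPrefetcher {
    public:
        static constexpr int kLinesPerBlock = 4;

        // Called on each demand access; returns lines to prefetch.
        std::vector<int64_t> on_demand(int64_t line) {
            const int64_t block = line / kLinesPerBlock;
            uint8_t& mask = demanded_[block];
            mask |= uint8_t(1u << (line % kLinesPerBlock));

            std::vector<int64_t> prefetches;
            if (std::bitset<kLinesPerBlock>(mask).count() >= 2) {
                for (int i = 0; i < kLinesPerBlock; ++i)
                    if (!(mask & (1u << i)))       // not demanded yet
                        prefetches.push_back(block * kLinesPerBlock + i);
                mask = 0xF;  // all lines now covered by a demand or prefetch
            }
            return prefetches;
        }

    private:
        std::unordered_map<int64_t, uint8_t> demanded_;  // block -> line bitmask
    };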

  33. Improving Prefetching Effectiveness. [Charts comparing RR+Prefetching, TL+Prefetching, and PA+Prefetching.] Prefetch accuracy stays comparable across schedulers (89%, 90%, and 85%, respectively), while PA scheduling lowers the fraction of late prefetches from 89% (RR) and 86% (TL) to 69%, and raises the reduction in L1D miss rates from 2% (RR) and 4% (TL) to 16%.

  34. Performance Evaluation. [Chart: IPC normalized to RR scheduling, for RR+Prefetching, TL, TL+Prefetching, PA, and PA+Prefetching, across BLK, SSC, PVC, KMN, BFSR, SCP, FFT, JPEG, SPMV, FWT, and GMEAN; see the paper for additional results.] At the geometric mean, PA+Prefetching reaches 1.26 versus 1.01 for RR+Prefetching, with the intermediate configurations between 1.16 and 1.20: a 25% IPC improvement over prefetching + RR warp scheduling (commonly used) and a 7% IPC improvement over prefetching + TL warp scheduling (best previous).

  35. Conclusions. Existing warp schedulers in GPGPUs cannot take advantage of simple prefetchers. Consecutive warps have good spatial locality and can prefetch well for each other, but existing schedulers schedule consecutive warps close by in time, so the prefetches arrive too late. We proposed prefetch-aware (PA) warp scheduling. Key idea: place consecutive warps into different groups; this enables a simple prefetcher to be timely, since warps in different groups are scheduled at separate times. Evaluations show that PA warp scheduling improves performance over combinations of conventional (RR) and the best previous (TL) warp scheduling and prefetching policies, because it better orchestrates warp scheduling and prefetching decisions.

  36. THANKS! QUESTIONS?

  37. BACKUP

  38. Effect of Prefetch-Aware Scheduling. [Chart: percentage of DRAM requests, averaged over a group, with 1 miss, 2 misses, or 3-4 misses to a macro-block, under two-level and prefetch-aware scheduling.] Under TL, a large share of requests are high-spatial-locality requests (3-4 misses to the same macro-block); under PA, that share shrinks because the additional lines are recovered by prefetching.

  39. Working (With Two-Level Scheduling). Within one group, all four lines of each macro-block (X through X+3 and Y through Y+3) are demanded (D): the requests have high spatial locality, and the detector has nothing left to prefetch.

  40. Working (With Prefetch-Aware Scheduling). Each group demands only half the lines of each macro-block (D: X, X+2, Y, Y+2), and the spatial locality detector prefetches the remaining lines (P: X+1, X+3, Y+1, Y+3) for the other group.

  41. Working (With Prefetch-Aware Scheduling, continued). When the other group later demands X+1, X+3, Y+1, Y+3, it gets cache hits.

  42. Effect on Row Buffer Locality. [Chart: row buffer locality for TL, TL+Prefetching, PA, and PA+Prefetching, across BLK, SSC, PVC, FFT, BFSR, KMN, JPEG, SPMV, SCP, FWT, and AVG.] On average, a 24% decrease in row buffer locality over TL.

  43. Effect on Bank-Level Parallelism. [Chart: bank-level parallelism for RR, TL, and PA, across BLK, SSC, PVC, KMN, SCP, JPEG, BFSR, FWT, SPMV, FFT, and AVG.] On average, an 18% increase in bank-level parallelism over TL.

  44. Simple Prefetching + RR Scheduling (backup). Under RR, warps W1-W8 issue their accesses to X through X+3 (Bank 1) and Y through Y+3 (Bank 2) together; with simple prefetching, the pattern of accesses reaching the banks is unchanged.

  45. Simple Prefetching with TL Scheduling (backup). Under TL, Group 1's accesses leave Bank 2 idle for a period, and Group 2's later accesses leave Bank 1 idle for a period; simple prefetching does not remove these idle periods.

  46. CTA-Assignment Policy (Example). A multi-threaded CUDA kernel is divided into CTAs (CTA-1 through CTA-4), which are distributed across SIMT cores (e.g., CTA-1 and CTA-2 to Core-1, CTA-3 and CTA-4 to Core-2); each core has its own warp scheduler, L1 caches, and ALUs.
