Study of Garbage Collector Scalability on Multicores


This study examines the scalability challenges faced by garbage collectors on multicore hardware. It shows that garbage-collection performance does not scale with the number of cores, turning the collector into an application bottleneck, and explores how the shift from centralized (uniform memory access) to distributed (non-uniform memory access) multicore architectures affects garbage-collection performance.



Presentation Transcript


  1. A Study of Garbage Collector Scalability on Multicores. Lokesh Gidra, Gaël Thomas, Julien Sopena and Marc Shapiro (INRIA / University of Paris 6)

  2. Garbage collection on multicore hardware. 14 of the 20 most popular languages have a GC, but the GC does not scale on multicore hardware. Parallel Scavenge/HotSpot scalability on a 48-core machine: GC throughput degrades after 24 GC threads. [Chart: GC throughput (GB/s) vs. number of GC threads, SPECjbb2005 with 48 application threads and a 3.5 GB heap]

  3. Scalability of GC is a bottleneck. By adding new cores, the application creates more garbage per time unit, and without GC scalability the time spent in GC increases: ~50% of the time is spent in the GC at 48 cores (Lusearch [PLOS 11]).

  4. Where is the problem? Probably not related to GC design: the problem exists in ALL the GCs of HotSpot 7 (both stop-the-world and concurrent GCs). What has really changed: multicores are now distributed architectures, not centralized ones.

  5. From centralized architectures to distributed ones. A few years ago: uniform memory access machines, where all cores reach a single memory through a shared system bus. Now: non-uniform memory access machines, where cores and memory are split into nodes linked by an interconnect. [Diagram: a UMA machine (cores, system bus, memory) next to a NUMA machine (Node 0 to Node 3, each with its own cores and memory, connected by an interconnect)]

  6. From centralized architectures to distributed ones. Our machine: an AMD Magny-Cours with 8 nodes and 48 cores, 6 cores and 12 GB of memory per node. Local memory access: ~130 cycles; remote access: ~350 cycles. [Diagram: the nodes and their local memories]

  7. From centralized architectures to distributed ones. Same machine; the chart measures the time to perform a fixed number of reads in parallel, with #cores = #threads. [Chart: completion time (ms) vs. number of cores]

  8. From centralized architectures to distributed ones. The local-access curve is added: each thread reads from its own node's memory. [Chart: completion time (ms) vs. number of cores, local access]

  9. From centralized architectures to distributed ones. The random-access curve is added: reads are spread across all nodes. [Chart: completion time (ms) vs. number of cores, local vs. random access]

  10. From centralized architectures to distributed ones. The single-node-access curve is added: all threads read from a single node's memory. [Chart: completion time (ms) vs. number of cores, local vs. random vs. single-node access]
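
The latency gap and the single-node bottleneck shown in this chart can be reproduced with a small microbenchmark. The sketch below is illustrative and not taken from the paper; it assumes a Linux machine with libnuma installed (compile with -lnuma), pins the measuring thread to node 0, and times a fixed number of reads against a buffer placed either on the local node or on a remote one.

    // Illustrative microbenchmark (not from the paper): time a fixed number of
    // reads against memory placed on a chosen NUMA node, with the reading
    // thread pinned to node 0. Build with: g++ -O2 numa_reads.cpp -lnuma
    #include <numa.h>
    #include <chrono>
    #include <cstdio>
    #include <cstring>

    // Walk the buffer with a large stride so most reads miss in cache,
    // and return the elapsed time in milliseconds.
    static double time_reads(volatile long *buf, size_t words, long reads) {
        auto t0 = std::chrono::steady_clock::now();
        size_t i = 0;
        long sink = 0;
        for (long r = 0; r < reads; r++) {
            sink += buf[i];
            i = (i + 64) % words;              // jump 64 longs (512 bytes) per read
        }
        auto t1 = std::chrono::steady_clock::now();
        (void)sink;
        return std::chrono::duration<double, std::milli>(t1 - t0).count();
    }

    int main() {
        if (numa_available() < 0) return 1;    // kernel without NUMA support
        numa_run_on_node(0);                   // pin the measuring thread to node 0
        size_t bytes = 256UL << 20;            // 256 MB per buffer
        size_t words = bytes / sizeof(long);
        long reads = 50000000;
        long *local_buf  = (long *)numa_alloc_onnode(bytes, 0);
        long *remote_buf = (long *)numa_alloc_onnode(bytes, numa_max_node());
        memset(local_buf, 1, bytes);           // fault the pages in before timing
        memset(remote_buf, 1, bytes);
        printf("local node:  %.1f ms\n", time_reads(local_buf, words, reads));
        printf("remote node: %.1f ms\n", time_reads(remote_buf, words, reads));
        numa_free(local_buf, bytes);
        numa_free(remote_buf, bytes);
        return 0;
    }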

  11. Parallel Scavenge heap space. Parallel Scavenge uses a first-touch allocation policy, relying on the kernel's lazy first-touch page allocation: a physical page is placed on the node of the thread that first touches it. [Diagram: the heap's virtual address space]

  12. Parallel Scavenge heap space. Because of this lazy first-touch policy, the initial sequential phase of the application maps most heap pages on the first node, the node running the initial application thread. [Diagram: heap pages mapped on the initial thread's node]

  13. Parallel Scavenge heap space. The initial sequential phase maps most pages on its node, and the mapping then remains unchanged for the whole execution because the virtual address space is reused by the GC. This is a severe problem for generational GCs. [Diagram: heap pages still mapped on the initial thread's node]
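
For readers unfamiliar with first-touch, the sketch below (illustrative, not from the paper or HotSpot) shows the kernel behavior the slides rely on: an anonymous mapping gets its physical pages on the NUMA node of the thread that first writes to them, which is how a sequential startup phase ends up concentrating the heap on one node. It assumes Linux with libnuma (compile with -lnuma) and 4 KB pages.

    // Illustrative sketch (not HotSpot code): Linux's lazy first-touch policy.
    // Physical pages of an anonymous mapping are placed on the NUMA node of
    // the thread that first touches them.
    #include <numa.h>
    #include <numaif.h>
    #include <sys/mman.h>
    #include <cstdio>

    int main() {
        if (numa_available() < 0) return 1;
        size_t page = 4096, npages = 1024;         // assumes 4 KB pages
        char *heap = (char *)mmap(nullptr, npages * page, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        numa_run_on_node(0);                        // run on node 0 only
        for (size_t i = 0; i < npages; i++)
            heap[i * page] = 0;                     // first touch: pages land on node 0
        // Query where the first page actually ended up.
        void *addr = heap;
        int status = -1;
        move_pages(0, 1, &addr, nullptr, &status, 0);    // nodes == nullptr: query only
        printf("first heap page is on node %d\n", status); // expected: 0
        return 0;
    }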

  14. Parallel Scavenge heap space. With the first-touch allocation policy, Parallel Scavenge ends up with bad balance and bad locality: 95% of the heap is on a single node. [Chart: SPECjbb GC throughput (GB/s) vs. number of GC threads, PS]

  15. NUMA-aware heap layouts. Three heap layouts are compared. Parallel Scavenge: first-touch allocation policy, bad balance and bad locality (95% on a single node). Interleaved: round-robin page allocation policy, targets balance. Fragmented: node-local object allocation and copy, targets locality. [Chart: SPECjbb GC throughput (GB/s) vs. number of GC threads, PS]

  16. Interleaved heap layout analysis. The interleaved layout (round-robin page allocation) achieves perfect balance but bad locality: with 8 nodes, 7/8 of the accesses are remote. [Chart: SPECjbb GC throughput (GB/s) vs. number of GC threads, Interleaved vs. PS]
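
An interleaved placement like this can be requested from the kernel directly. The sketch below is a minimal illustration, not the paper's implementation: it reserves a heap region and asks for its pages to be distributed round-robin over all NUMA nodes with mbind (libnuma's numa_alloc_interleaved does the same for a fresh allocation). Assumes Linux with libnuma; the function name is invented for the example.

    // Illustrative sketch (not HotSpot code): reserve a heap whose physical
    // pages are spread round-robin across all NUMA nodes, as in the
    // interleaved layout. Build with -lnuma.
    #include <numa.h>
    #include <numaif.h>
    #include <sys/mman.h>

    void *reserve_interleaved_heap(size_t bytes) {
        void *heap = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (heap == MAP_FAILED) return nullptr;
        // Round-robin policy: successive pages of the region are backed by
        // successive nodes, balancing the load on the memory controllers.
        struct bitmask *all_nodes = numa_all_nodes_ptr;
        mbind(heap, bytes, MPOL_INTERLEAVE,
              all_nodes->maskp, all_nodes->size + 1, 0);
        return heap;
    }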

  17. Fragmented heap layout analysis. The fragmented layout (node-local object allocation and copy) gives good balance and average locality: 100% of copies are local but 7/8 of scans are remote, and balance becomes bad if a single thread allocates for the others. [Chart: SPECjbb GC throughput (GB/s) vs. number of GC threads, Fragmented vs. Interleaved vs. PS]
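
A fragmented layout can be approximated with one region per node, where each GC thread bump-allocates copied objects in the region of the node it is running on. The sketch below is a simplified illustration under that assumption, not the actual Parallel Scavenge modification; the NodeRegion and FragmentedHeap types are invented for the example, and the bump pointer is not made thread-safe. Assumes Linux with libnuma.

    // Illustrative sketch (not HotSpot code): a heap split into one region per
    // NUMA node; a GC thread copies surviving objects into the region of the
    // node it runs on, so copies stay local. Build with -lnuma.
    #include <numa.h>
    #include <sched.h>
    #include <cstddef>

    struct NodeRegion {            // hypothetical helper type
        char  *base;               // start of this node's region
        size_t used;               // bump pointer
        size_t capacity;
    };

    struct FragmentedHeap {        // hypothetical helper type
        NodeRegion regions[64];    // one region per node; 64 is an arbitrary bound
        int nodes;

        void init(size_t bytes_per_node) {
            nodes = numa_num_configured_nodes();
            for (int n = 0; n < nodes; n++) {
                // Physically backed by node n's memory.
                regions[n].base = (char *)numa_alloc_onnode(bytes_per_node, n);
                regions[n].used = 0;
                regions[n].capacity = bytes_per_node;
            }
        }

        // Called by a GC thread: allocate in the region of its current node.
        void *alloc_local(size_t size) {
            int n = numa_node_of_cpu(sched_getcpu());
            NodeRegion &r = regions[n];
            if (r.used + size > r.capacity) return nullptr;   // region full
            void *p = r.base + r.used;
            r.used += size;        // a real GC would use a thread-safe bump pointer
            return p;
        }
    };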

  18. Synchronization optimizations. A barrier between the GC phases was removed and the GC task queue was replaced with a lock-free one. The synchronization optimizations only have an effect under high contention. [Chart: SPECjbb GC throughput (GB/s) vs. number of GC threads, Fragmented + synchro vs. Fragmented vs. Interleaved vs. PS]
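
The slide does not give the lock-free structure that was used, so the sketch below only illustrates the general technique: a minimal compare-and-swap based LIFO task pool in place of a lock-protected queue. It is not the actual HotSpot change; the GCTask type is invented, and it assumes task nodes are not recycled while the pool is in use, which sidesteps the ABA and memory-reclamation issues a production GC queue must handle.

    // Illustrative sketch (not the actual HotSpot change): a minimal lock-free
    // LIFO task pool based on compare-and-swap. Assumes nodes are not recycled
    // while in use (no ABA handling, no reclamation).
    #include <atomic>

    struct GCTask { void (*run)(void *); void *arg; };   // hypothetical task type

    struct TaskNode { GCTask task; TaskNode *next; };

    class LockFreeTaskPool {
        std::atomic<TaskNode *> head{nullptr};
    public:
        void push(TaskNode *n) {
            n->next = head.load(std::memory_order_relaxed);
            // Retry until head is swung from the value we observed to our node.
            while (!head.compare_exchange_weak(n->next, n,
                                               std::memory_order_release,
                                               std::memory_order_relaxed)) {
            }
        }
        TaskNode *pop() {
            TaskNode *n = head.load(std::memory_order_acquire);
            // Retry until head is swung from n to n->next (or the pool is empty).
            while (n && !head.compare_exchange_weak(n, n->next,
                                                    std::memory_order_acquire,
                                                    std::memory_order_relaxed)) {
            }
            return n;                                    // nullptr if empty
        }
    };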

  19. Effect of the optimizations on the application (GC excluded). [Chart: application time vs. number of GC threads, XML Transform from SPECjvm, PS vs. the other heap layouts] Good balance improves application time a lot, whereas locality has only a marginal effect on the application, even though the fragmented space gives the application better locality than the interleaved space (recently allocated objects are the most accessed).

  20. Overall effect (both GC and application). [Chart: SPECjbb application throughput (ops/ms) vs. number of GC threads, PS vs. Interleaved vs. Fragmented vs. Fragmented + synchro] The optimizations double the application throughput of SPECjbb and cut the pause time in half (from 105 ms to 49 ms).

  21. GC scales well with memory-intensive applications. [Charts: PS vs. Fragmented + synchro on several benchmarks, with heap sizes of 3.5 GB, 2 GB, 1 GB, and 512 MB]

  22. Take away. Previous GCs do not scale because they are not NUMA-aware. Existing mature GCs can scale with standard parallel programming techniques. NUMA-aware memory layouts should be useful for all GCs (concurrent GCs included). The most important NUMA effects are: 1. balancing memory accesses; 2. memory locality, which only helps at high core count.

  23. Take away (repeats the previous slide). Thank you.
