Understanding Reference Counting vs. Tracing in Memory Management

Explore the comparison between reference counting and tracing, two fundamental garbage collection algorithms. Dive into the challenges, design considerations, and maintenance aspects. Discover insights on storing counts, the distribution of maximum reference counts, and handling overflow in memory management.

  • Memory Management
  • Garbage Collection
  • Reference Counting
  • Tracing
  • Performance


Presentation Transcript


  1. Reference Counting vs. Tracing

  2. The Challenge
     • One of the two fundamental GC algorithms
     • Many advantages
     • Neglected by all performance-conscious VMs
     • So how much slower is it? 30%

  3. Understanding Reference Counting

  4. Design Space
     • Storing the count
       - A word per object
       - Some available bits
     • Maintaining the count
       - Naive
       - Deferred
       - Coalescing
       - Generational and Age-Oriented
     • Cycles
       - Backup tracing
       - Trial deletion

  5. 1. Storing the Count
     • Space
       - Dedicated word (32 bits) per object
       - Steal bits from each object's header
     • How many bits do we actually need?

  6. Maximum Reference Count Distribution
     [Chart: percentage of objects by maximum reference count (0 through >63) for compress, db, mpegaudio, jack, bloat, eclipse, hsqldb, lusearch, pmd, xalan, jess, javac, mtrt, avrora, chart, fop, luindex, pjbb2005, sunflow, and the mean]
     • The vast majority of objects have maximum counts of 7 or less

  7. Overflow
     [Chart: percentage of overflowed objects by number of count bits used (1-5) for the same benchmarks]
     • 4 bits: 0.11% of objects overflow, but these attract 18% of incs and decs!
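     The figures above suggest a handful of header bits is enough if the rare saturated ("stuck") counts are handled separately. As a rough illustration only (RCHeader, RC_BITS, and the bit layout are invented for this sketch, not taken from the presented system), a saturating 4-bit count packed into a header word might look like this in Java:

        // Hypothetical sketch: a saturating ("sticky") 4-bit reference count
        // packed into the low bits of an object's header word.
        final class RCHeader {
            static final int RC_BITS  = 4;
            static final int RC_MAX   = (1 << RC_BITS) - 1;  // 15 is the "stuck" value
            static final int RC_SHIFT = 0;
            static final int RC_MASK  = RC_MAX << RC_SHIFT;

            // Increment; once the count reaches RC_MAX it sticks there and is
            // left for backup tracing (or a side table) to deal with.
            static int inc(int header) {
                int rc = (header & RC_MASK) >>> RC_SHIFT;
                if (rc == RC_MAX) return header;
                return (header & ~RC_MASK) | ((rc + 1) << RC_SHIFT);
            }

            // Decrement; a stuck count is never decremented.
            static int dec(int header) {
                int rc = (header & RC_MASK) >>> RC_SHIFT;
                if (rc == RC_MAX || rc == 0) return header;
                return (header & ~RC_MASK) | ((rc - 1) << RC_SHIFT);
            }
        }

     With 4 bits, the data above show only 0.11% of objects ever saturate, so the slow path is rarely taken.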

  8. 2. Maintaining the Count
     • Naive RC requires an inc and a dec on every pointer mutation
       - Easy to implement: only write barriers, no GC maps
     • Deferred RC ignores changes to stacks and registers
       - Temporary increments used to fake liveness of roots
       - Normal incs and decs deferred until collection time
       - Much faster than naive, but requires GC maps
     • Coalescing RC ignores many changes to the heap
       - Coalesces pointer mutations: records just the first and last in a chain of changes (see the barrier sketch below)
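     To make the barrier differences concrete, here is a minimal sketch in Java of a naive versus a coalescing write barrier. The Obj class, the dirty flag, and the modified-object buffer are assumptions made for this example, not the presented system's data structures.

        import java.util.ArrayList;
        import java.util.List;

        final class RcBarriers {
            static final class Obj {
                int rc;                     // reference count
                boolean dirty;              // logged since the last collection?
                Obj[] fields = new Obj[4];  // outgoing pointers
                Obj[] snapshot;             // referents remembered when first dirtied
            }

            static final List<Obj> modifiedBuffer = new ArrayList<>();

            // Naive RC: inc and dec on every pointer mutation.
            static void naiveWrite(Obj src, int slot, Obj newRef) {
                Obj old = src.fields[slot];
                if (newRef != null) newRef.rc++;
                if (old != null && --old.rc == 0) reclaim(old);
                src.fields[slot] = newRef;
            }

            // Coalescing RC: log only an object's first mutation; intermediate
            // pointer values in a chain of changes are never counted.
            static void coalescingWrite(Obj src, int slot, Obj newRef) {
                if (!src.dirty) {
                    src.dirty = true;
                    src.snapshot = src.fields.clone();  // remember the old referents
                    modifiedBuffer.add(src);
                }
                src.fields[slot] = newRef;              // no count work on the fast path
            }

            // At collection time: inc the current referents, dec the remembered ones.
            static void processModified() {
                for (Obj o : modifiedBuffer) {
                    for (Obj cur : o.fields)   if (cur != null) cur.rc++;
                    for (Obj old : o.snapshot) if (old != null && --old.rc == 0) reclaim(old);
                    o.dirty = false;
                    o.snapshot = null;
                }
                modifiedBuffer.clear();
            }

            static void reclaim(Obj dead) { /* dec children, then return to the free list */ }
        }

     Deferred RC would additionally skip count work for stacks and registers, scanning the roots at collection time instead.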

  9. Sources of Incs and Decs
     • New objects
     • Mutations to:
       - non-new scalar objects
       - non-new array objects
     • Temporary operations due to root reachability
     • Cycle-collection-generated decrements

  10. Increments due to young objects
      • 71% of increments are for newly allocated objects

  11. Decrements due to young objects
      • 71% of decrements are for newly allocated objects

  12. Survival ratio
      • Only about 10% of new objects survive

  13. Improving Reference Counting

  14. 1. Storing the Count
      • Findings:
        - Max count < 8 in most cases
        - Very few overflows
        - The percentage of stuck objects is very low
      • Design: handling stuck objects
        - Auxiliary data structure to store the counts of overflowed objects (sketched below)
        - Ignore them: let backup tracing collect stuck objects
        - Restore them: let backup tracing restore stuck counts
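      One way to picture the auxiliary-data-structure option is a side table that only the rare stuck objects ever touch. The sketch below is purely illustrative (the OverflowTable class and its use of an identity map are inventions for this example):

        import java.util.IdentityHashMap;
        import java.util.Map;

        final class OverflowTable {
            static final int RC_MAX = 15;  // with 4 header bits
            private final Map<Object, Integer> extra = new IdentityHashMap<>();

            // Increment: spill into the table once the header count saturates.
            int inc(Object o, int headerCount) {
                if (headerCount < RC_MAX) return headerCount + 1;
                extra.merge(o, 1, Integer::sum);   // slow path, taken rarely
                return headerCount;                // header stays stuck at RC_MAX
            }

            // Decrement: drain any spilled count before touching the header.
            int dec(Object o, int headerCount) {
                Integer spill = extra.get(o);
                if (spill != null) {
                    if (spill == 1) extra.remove(o); else extra.put(o, spill - 1);
                    return headerCount;
                }
                return headerCount - 1;            // normal fast path
            }
        }

      The other two options on the slide avoid even this table: either ignore stuck objects and let backup tracing collect them, or let backup tracing restore the stuck counts.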

  15. Limited bit strategies (heap size = 2x the minimum heap size)
      • 5% faster

  16. 2. Maintaining the Count
      • Findings:
        - New objects are the source of most incs and decs
        - Survival ratio is low
      • New objects are a fruitful focus [Azatchi & Petrank 03] [Blackburn & McKinley 03]

  17. Existing Handling of New Objects
      • Implicitly dirty
        - Marked as dirty and enqueued
        - Inc applied to each referent at the next collection
      • Implicitly live
        - Initial count of one
        - Dec enqueued, processed at the next collection (see the allocation sketch below)
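      A minimal sketch of the implicitly live scheme (the class names and the pending-decrement queue are invented for illustration): the allocator hands back an object whose count is already one and enqueues the matching decrement for the next collection.

        import java.util.ArrayDeque;
        import java.util.Deque;

        final class ImplicitlyLive {
            static final class Obj { int rc; }

            static final Deque<Obj> pendingDecs = new ArrayDeque<>();

            static Obj allocate() {
                Obj o = new Obj();
                o.rc = 1;             // born with an implicit reference
                pendingDecs.add(o);   // compensating dec, processed at the next collection
                return o;
            }

            static void atCollection() {
                while (!pendingDecs.isEmpty()) {
                    Obj o = pendingDecs.poll();
                    if (--o.rc == 0) { /* never referenced elsewhere: reclaim */ }
                }
            }
        }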

  18. New: Handling of New Objects
      • Implicitly clean
        - Lazily dirtied at collection time, only if (transitively) inc'd by an old object or roots
        - Non-surviving objects never processed
      • Implicitly dead
        - Lazily incremented at collection time, only if (transitively) inc'd by an old object or roots
        - Available to the free list unless they are incremented
      • Minor change to existing RC, not a hybrid of two collectors (see the sketch below)
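      By contrast, implicitly dead defers all count work for objects born since the last collection. The sketch below (again with invented names such as lazyInc and sweepNursery) only increments the new objects reached from old objects or roots; everything else goes straight back to the free list without ever being touched:

        import java.util.ArrayList;
        import java.util.List;

        final class ImplicitlyDead {
            static final class Obj {
                int rc;
                boolean isNew = true;
                List<Obj> fields = new ArrayList<>();
            }

            static final List<Obj> nursery = new ArrayList<>();  // objects born this epoch

            static Obj allocate() {
                Obj o = new Obj();
                nursery.add(o);       // no inc, no enqueued dec
                return o;
            }

            // Called for each reference from an old object or a root into the nursery.
            static void lazyInc(Obj target) {
                if (target.isNew) {
                    target.isNew = false;                            // first inc retains it...
                    for (Obj child : target.fields) lazyInc(child);  // ...and its referents, transitively
                }
                target.rc++;
            }

            static void sweepNursery() {
                for (Obj o : nursery)
                    if (o.isNew) { /* never incremented: hand straight back to the free list */ }
                nursery.clear();
            }
        }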

  19. Implicitly Clean & Implicitly Dead (heap size = 2x the minimum heap size)
      • 16% & 19% faster, respectively

  20. Standard vs. Improved RC (heap size = 2x the minimum heap size)
      • 24% faster (i.e. standard is 30% slower)

  21. Results

  22. RC vs. MS (heap size = 2x the minimum heap size)

  23. RC vs. MS (heap size = 2x the minimum heap size)
      [Chart comparing the new RC against MS]
