
ICCAD 2019 LEF/DEF-Based Global Routing Contest Insights
Delve into the details of the ICCAD 2019 LEF/DEF-Based Global Routing Contest, exploring the evolution of academic contests, real-world global route challenges, preliminary concepts like GCells and Route Guides, and more.
ICCAD 2019 LEF/DEF Based Global Routing Contest
Sergei Dolgov, Alexander Volkov, Lutong Wang and Bangqi Xu
Mentor Graphics and UC San Diego
Special thanks to the Cadence Academic Network for tool sponsorship.
Outline
- Background
- Problem Introduction
- Benchmark Suite Characteristics
- Evaluation
- Contest Results
- Acknowledgments
Background: Evolution of Academic Contests
- Routing is divided into two stages: global routing (GR) and detailed routing (DR).
- The ISPD-2018/2019 DR contests led to academic detailed routers: Dr. CU, DRAPS and TritonRoute.
- A GR contest is a natural extension of the DR contests.
Background: Real-World Global Route Challenge
- Recent efforts in academia push toward a complete RTL-to-GDS flow: the IEEE CEDA DATC Robust Design Flow (RDF) and the OpenROAD project.
- OpenROAD aims to enable an open platform focused on place-and-route, with full support for industry-standard input/output formats.
- In a real flow, the GR solution is evaluated directly by detailed routing, so more accurate resource modeling is needed.
- The ICCAD 2019 LEF/DEF based GR contest was formulated to help academic research address these real-world challenges.
Preliminaries: GCells
- Grid cells (GCells) are defined using DEF syntax:
    GCELLGRID X <start> DO <numColumns+1> STEP <space> ;
    GCELLGRID Y <start> DO <numRows+1> STEP <space> ;
- A typical GCell dimension is 15 × 15 M2 tracks.
[Figure: example of GCell definitions]
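For illustration, a hypothetical uniform grid of 100 × 100 GCells, each 3000 × 3000 database units, could be declared as below. The values are invented for this example, not taken from a contest benchmark; note that DO counts grid lines, so a grid with numColumns cells needs numColumns+1 lines:

    GCELLGRID X 0 DO 101 STEP 3000 ;
    GCELLGRID Y 0 DO 101 STEP 3000 ;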
Preliminaries: Route Guides
- A (route) guide is a rectangle on a specific metal (routing) layer that covers one or multiple contiguous GCells.
- Constraints:
  - A guide must cover the whole GCell area if it has a non-zero-area overlap with that GCell.
  - Guides are on metal (routing) layers (i.e., guides on via layers are implicit).
- A valid GR solution for a net is a collection of guides where:
  - All pins of the net are covered by guides.
  - A connected graph can be derived for the net given the GR solution.
Preliminaries: Example of a GR Solution
[Figure: example of a valid GR solution, shown in text format and in a 3D view]
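For reference, the guide text format follows the detailed-routing input guides of the ISPD-2018/2019 contests: each net name is followed by a parenthesized list of rectangles, one per line, given as xlo ylo xhi yhi layer. The net name and coordinates below are illustrative, not from a benchmark:

    net1234
    (
    0 0 3000 3000 Metal1
    0 0 3000 9000 Metal2
    0 6000 6000 9000 Metal1
    )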
Problem Definition
- Inputs:
  - DEF: placement, netlist, PDN, GCell definition, etc.
  - LEF: MACRO definitions, layer definitions, design rules, etc.
- Output: a global routing solution in guide format.
- Subject to: connectivity constraints and routing resource constraints.
- Constraints: (i) connectivity is a hard constraint; (ii) routing resource modeling (e.g., overflow) is a soft constraint, but it strongly affects DR quality and runtime.
More on Connectivity Constraint
Two route guides are considered connected if:
- They touch each other on the same metal layer with non-zero overlapping edge length or area, or
- They are on neighboring metal layers with a non-zero overlapping area (projected to the x-y plane).
[Figure: illustrations of the guide connectivity constraint]
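As a minimal sketch of this rule (not the official checker), the test below represents each guide as a tuple (metal_layer_index, xlo, ylo, xhi, yhi), with metal layers numbered consecutively so that neighboring metal layers differ by 1 (via layers are implicit):

    def guides_connected(g1, g2):
        """Return True if two guides satisfy the connectivity rule."""
        layer1, x1lo, y1lo, x1hi, y1hi = g1
        layer2, x2lo, y2lo, x2hi, y2hi = g2
        dx = min(x1hi, x2hi) - max(x1lo, x2lo)  # overlap extent in x (projected)
        dy = min(y1hi, y2hi) - max(y1lo, y2lo)  # overlap extent in y (projected)
        if layer1 == layer2:
            # Same metal layer: touching with non-zero shared edge length,
            # or a non-zero overlapping area (corner touching does not count).
            return (dx > 0 and dy >= 0) or (dx >= 0 and dy > 0)
        if abs(layer1 - layer2) == 1:
            # Neighboring metal layers: non-zero overlap area in x-y projection.
            return dx > 0 and dy > 0
        return False

    # Example: two abutting guides on the same layer share an edge of length 3000.
    print(guides_connected((2, 0, 0, 3000, 3000), (2, 3000, 0, 6000, 3000)))  # True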
Benchmark Suite Characteristics
- Contest benchmarks are derived from the ISPD18/19 contests, synthesized using a 32nm technology and cell libraries, and range from 72K to 895K nets.
- Six more challenging designs are adapted from ISPD18/19 testcases by limiting the top routing layer to Metal5.
- Power/Ground (PG) nets are present: tracks are blocked if they overlap with PG nets, and track supply is further affected depending on design rules.
Benchmark Suite Characteristics
Final evaluation is based on six released testcases and six hidden testcases.

Benchmark              #std    #blk  #net    #pin  #layer  GCell (M2 tracks)
Released testcases:
ispd18_test5           71985   0     72394   1211  9       15 × 15
ispd18_test8           191987  16    179863  1211  9       15 × 15
ispd18_test10          290386  0     182000  1211  9       15 × 15
ispd19_test7           359746  16    358720  2216  9       15 × 15
ispd19_test8           539611  16    537577  3221  9       15 × 15
ispd19_test9           899341  16    895252  3221  9       15 × 15
Hidden testcases:
ispd18_test5_metal5    71985   0     72394   1211  5       15 × 15
ispd18_test8_metal5    191987  16    179863  1211  5       15 × 15
ispd18_test10_metal5   290386  0     182000  1211  5       15 × 15
ispd19_test7_metal5    359746  16    358720  2216  5       15 × 15
ispd19_test8_metal5    539611  16    537577  3221  5       15 × 15
ispd19_test9_metal5    899341  16    895252  3221  5       15 × 15
Evaluation Flow
A GR solution is evaluated with the following flow: Dr. CU produces the DR solution, Innovus v19.1 produces the violation report, and a metric evaluator computes the final score.

LEF/DEF → Contestant GR binary → GR solution → Dr. CU → DR solution → Innovus v19.1 → Violation report → Metric Evaluator → Score
Evaluation Metrics
The raw score of a routing solution is a weighted sum using the table below (a meld of the ISPD18 and ISPD19 evaluation criteria):

Metric                                      Weight
Shorted metal / cut area                    500
Number of shorted metal / cut               500
Number of min-area violations               500
Number of parallel run length violations    500
Number of EOL spacing violations            500
Number of cut spacing violations            500
Number of corner spacing violations         500
Total length of out-of-guide wires          1
Total number of out-of-guide vias           1
Total length of off-track wires             0.5
Total number of off-track vias              1
Total length of wrong-way wires             1
Total number of single-cut vias             2 / 4
Total length of wires                       0.5
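As a minimal sketch, the raw score is the dot product of these weights with the measured quantities from the violation report. The metric keys below are our shorthand, not names from the contest scripts:

    # Weights from the table above; "single_cut_vias" uses 2 here, though the
    # slide lists "2 / 4" (the exact choice between the two is not spelled out).
    WEIGHTS = {
        "shorted_area": 500, "shorted_count": 500, "min_area": 500,
        "parallel_run_length": 500, "eol_spacing": 500, "cut_spacing": 500,
        "corner_spacing": 500,
        "out_of_guide_wire_length": 1, "out_of_guide_vias": 1,
        "off_track_wire_length": 0.5, "off_track_vias": 1,
        "wrong_way_wire_length": 1, "single_cut_vias": 2,
        "wire_length": 0.5,
    }

    def raw_score(report):
        """report maps each metric key to its measured length or count."""
        return sum(WEIGHTS[k] * report.get(k, 0) for k in WEIGHTS)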
Evaluation Metrics
The final scaled score is calculated using the equation from ISPD18/19:

    final_score = raw_score * (1 + nondeterministic_penalty + runtime_factor)

Each GR binary is run three times to check for nondeterminism; a 3% nondeterministic penalty is applied if the GR solution is not deterministic. The runtime factor is calculated as:

    runtime_factor = min(0.1, max(-0.1, 0.02 * log2(raw_wall_time / mean_wall_time)))

where raw_wall_time is the sum of global and detailed routing runtimes, and mean_wall_time is the average of that sum across all teams.
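A minimal sketch of this scaled-score computation (function names are ours, not from the contest scripts):

    import math

    def runtime_factor(raw_wall_time, mean_wall_time):
        # 0.02 * log2(raw/mean), clipped to the range [-0.1, +0.1]
        return min(0.1, max(-0.1, 0.02 * math.log2(raw_wall_time / mean_wall_time)))

    def final_score(raw_score, raw_wall_time, mean_wall_time, deterministic=True):
        penalty = 0.0 if deterministic else 0.03  # 3% nondeterministic penalty
        return raw_score * (1 + penalty + runtime_factor(raw_wall_time, mean_wall_time))

    # A team twice as slow as the field average pays a 2% runtime factor:
    print(final_score(1000.0, 7200.0, 3600.0))  # 1020.0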
Ranking Method
- The final ranking is calculated similarly to the ISPD18/19 contests.
- Per testcase, the team with the lower final scaled score gets the higher (better) ranking; a testcase without output receives the lowest ranking.
- Each team's single lowest (worst) ranking is discarded, and teams are then ranked by the average of the remaining rankings. The team with the highest average ranking is the winner.

Example (see the sketch after this slide):

Raw scores
          Test1  Test2  Test3  Test4
Team 1    100    150    N/A    N/A
Team 2    200    300    400    500
Team 3    300    250    450    N/A

Per-testcase rankings
          Test1  Test2  Test3  Test4
Team 1    1      1      3      3
Team 2    2      3      1      1
Team 3    3      2      2      3

Final ranking (averaged after discarding each team's worst ranking)
Team 1: 1.7    Team 2: 1.3    Team 3: 2.3
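A minimal sketch of this ranking method, assuming plain rank-by-sorted-score with missing outputs taking the worst possible rank (tie handling is our assumption):

    def rank_teams(raw_scores):
        """raw_scores: {team: [score or None per testcase]}; lower is better.
        A missing output (None) gets the worst possible rank. Each team's
        single worst rank is discarded before averaging."""
        teams = list(raw_scores)
        n_tests = len(next(iter(raw_scores.values())))
        worst = len(teams)
        ranks = {t: [] for t in teams}
        for i in range(n_tests):
            scored = sorted((t for t in teams if raw_scores[t][i] is not None),
                            key=lambda t: raw_scores[t][i])
            for t in teams:
                ranks[t].append(scored.index(t) + 1 if t in scored else worst)
        # Discard each team's worst ranking, then average the rest.
        return {t: round(sum(sorted(r)[:-1]) / (len(r) - 1), 1)
                for t, r in ranks.items()}

    scores = {"Team 1": [100, 150, None, None],
              "Team 2": [200, 300, 400, 500],
              "Team 3": [300, 250, 450, None]}
    print(rank_teams(scores))  # {'Team 1': 1.7, 'Team 2': 1.3, 'Team 3': 2.3}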
Contest Results
- Top 3 teams receive a cash reward: 1st US$ 1,650; 2nd US$ 1,000; 3rd US$ 1,000.
- Top 3 teams have the option to open-source their GR for a bonus: 1st US$ 5,500; 2nd US$ 1,500; 3rd US$ 1,000.
Result Overview
- 11 submissions; 7 teams produced outputs.
- Only Team C, Team E and Team G produced output for all testcases.
- The 5-layer-metal hidden benchmarks were clearly challenging.
[Table: solution status (Yes / No Sol) per team (A-G) and testcase (test1-test12)]
Result Overview
Non-deterministic penalty: only one team (Team G) produced non-deterministic results, on all twelve testcases, and receives this penalty; all other teams were deterministic throughout.
Result Overview: Normalized Raw Scores of Top 5 Teams
[Chart: normalized raw scores per testcase for Teams B, C, D, E, G; lower is better. Most values are near 1.00, with outliers up to 9.88.]
Result Overview: Normalized Runtimes of Top 5 Teams
[Chart: normalized runtimes per testcase for Teams B, C, D, E, G; values range from 1.00 up to 13.77.]
Result Overview: Normalized Scaled Scores of Top 5 Teams
Scores are scaled by the runtime factor and the non-deterministic penalty.
[Chart: normalized scaled scores per testcase for Teams B, C, D, E, G; lower is better. Most values are near 1.00, with outliers up to 10.54.]
Result Overview
[Table: final raw ranking per team (A-G) and testcase (test1-test12)]
Final ranking: 1st Team C, 2nd Team E, 3rd Team G.
Acknowledgments
- We thank Dr. Wen-Hao Liu for providing valuable feedback on the benchmarks.
- We thank the authors of Dr. CU for providing the DR binary used in evaluation.
THANK YOU!