Scaling Multi-Core Network Processors Without the Reordering Bottleneck


This study discusses the challenges in packet ordering within parallel network processors and proposes solutions to reduce reordering delay. Various approaches such as static mapping, single SN approach, and per-flow sequencing are explored to optimize processing efficiency in multi-core NP architectures.





Presentation Transcript


  1. IEEE HPSR 2014 Scaling Multi-Core Network Processors Without the Reordering Bottleneck Alex Shpiner (Technion / Mellanox) Isaac Keslassy (Technion) Rami Cohen (IBM Research)

  2. Network Processors (NPs): NPs are used in routers for almost everything: forwarding, classification, deep packet inspection (DPI), firewalling, traffic engineering, VPN encryption, LZS decompression, advanced QoS. Increasingly heterogeneous processing demands.

  3. Parallel Multi-Core NP Architecture: Each packet is assigned to a Processing Element (PE) by some per-packet load-balancing scheme. [Figure: arriving packets dispatched across PE1..PEN.] E.g., Cavium CN68XX NP, EZChip NP-4.

  4. Packet Ordering in NP: NPs are required to avoid out-of-order packet transmission within a flow (TCP throughput, cross-packet DPI, statistics, etc.). The naïve solution is to avoid reordering altogether, but then heavy packets often delay light packets. [Figure: a heavy packet at one PE stalls a later light packet.] Can we reduce this reordering delay?

  5. The Problem: Reducing reordering delay in parallel network processors.

  6. Multi-core Processing Alternatives:
  - Static (hashed) mapping of flows to processing elements (PEs) [Cao et al., 2000], [Shi et al., 2005]: potential for insufficient utilization of the PEs.
  - Feedback-based adaptation of the static mapping [Kencl et al., 2002], [He et al., 2010], [We et al., 2011]: causes packet reordering.
  - Pipelining without parallelism [Weng et al., 2004]: not scalable, due to heterogeneous requirements and command granularity.

  7. Single SN (Sequence Number) Approach [Wu et al., 2005], [Govind et al., 2007]: A single SN generator stamps every arriving packet; the ordering unit transmits only the oldest packet. [Figure: SN generator before the PEs, ordering unit after them.] Large reordering delay.
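The single-SN mechanism can be sketched in a few lines of Python (an illustrative model, not code from the paper; the class and method names are invented): every packet takes a global SN on arrival, and the ordering unit releases packets strictly in SN order.

```python
import heapq

class SingleSNOrderingUnit:
    """Single sequence-number ordering (illustrative sketch): one global
    SN generator; the ordering unit transmits only the oldest packet,
    so one slow packet delays every packet that arrived after it."""

    def __init__(self):
        self.next_sn = 0    # SN handed to the next arriving packet
        self.oldest_sn = 0  # SN the output is currently waiting for
        self.done = []      # min-heap of (sn, packet) that finished processing
        self.out = []       # transmitted packets, in order

    def on_arrival(self, packet):
        # stamp the packet with a global SN before it enters a PE
        sn = self.next_sn
        self.next_sn += 1
        return sn

    def on_processing_done(self, sn, packet):
        heapq.heappush(self.done, (sn, packet))
        # transmit only while the oldest outstanding SN has finished
        while self.done and self.done[0][0] == self.oldest_sn:
            self.out.append(heapq.heappop(self.done)[1])
            self.oldest_sn += 1
```

A light packet that finishes before an earlier heavy packet sits in the heap until the heavy packet's SN is released; that wait is exactly the reordering delay the following slides attack.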

  8. Per-flow Sequencing (Ideal) [Khotimsky et al., 2002], [Wu et al., 2005], [Shi et al., 2007], [Cheng et al., 2008]: Actually, we need to preserve order only within a flow, so give each flow its own SN generator. [Figure: a dedicated SN generator per flow, e.g. flow 1, flow 13, flow 47, flow 1000000.] This is the ideal approach, with minimal reordering delay, but it is not scalable to a large number of flows [Meitinger et al., 2008].
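Per-flow sequencing can be modeled like this (hypothetical Python, for illustration only): the per-flow dictionaries make the non-interference between flows explicit, and also make the scalability problem visible, since state grows with the number of live flows.

```python
import heapq
from collections import defaultdict

class PerFlowSequencer:
    """Ideal per-flow sequencing (illustrative sketch): every flow gets
    its own SN generator and ordering queue, so flows never delay each
    other. The per-flow state is exactly what does not scale."""

    def __init__(self):
        self.next_sn = defaultdict(int)    # flow -> next SN to assign
        self.oldest_sn = defaultdict(int)  # flow -> SN awaited at output
        self.done = defaultdict(list)      # flow -> min-heap of (sn, pkt)
        self.out = []                      # transmitted packets, in order

    def on_arrival(self, flow):
        sn = self.next_sn[flow]
        self.next_sn[flow] += 1
        return sn

    def on_processing_done(self, flow, sn, pkt):
        heapq.heappush(self.done[flow], (sn, pkt))
        q = self.done[flow]
        # release this flow's packets in SN order; other flows unaffected
        while q and q[0][0] == self.oldest_sn[flow]:
            self.out.append(heapq.heappop(q)[1])
            self.oldest_sn[flow] += 1
```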

  9. Hashed SN (Sequence Number) Approach [M. Meitinger et al., 2008]: Multiple SN generators (ordering domains); each flow (5-tuple) is hashed to an SN generator. Note: the flow is hashed to an SN generator, not to a PE. [Figure: K SN generators feeding the ordering unit.] Yet flows in the same hash bucket still suffer reordering delay from each other.
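A compact sketch of the hashed-SN scheme (hypothetical Python; `K`, the method names, and the 5-tuple representation are invented for illustration): the flow's 5-tuple selects one of K independent ordering domains, each running the single-SN logic on its own.

```python
import heapq

class HashedSN:
    """Hashed-SN ordering (illustrative sketch): flows are hashed to one
    of K SN generators / ordering domains. Order is enforced only inside
    a domain, so flows in different domains never delay each other,
    while flows sharing a hash bucket still can."""

    def __init__(self, K=4):
        self.K = K
        self.next_sn = [0] * K              # per-domain next SN
        self.oldest_sn = [0] * K            # per-domain SN awaited at output
        self.done = [[] for _ in range(K)]  # per-domain min-heaps
        self.out = []                       # transmitted packets

    def _domain(self, five_tuple):
        # flow -> ordering domain (not flow -> PE)
        return hash(five_tuple) % self.K

    def on_arrival(self, five_tuple):
        d = self._domain(five_tuple)
        sn = self.next_sn[d]
        self.next_sn[d] += 1
        return sn

    def on_processing_done(self, five_tuple, sn, pkt):
        d = self._domain(five_tuple)
        heapq.heappush(self.done[d], (sn, pkt))
        while self.done[d] and self.done[d][0][0] == self.oldest_sn[d]:
            self.out.append(heapq.heappop(self.done[d])[1])
            self.oldest_sn[d] += 1
```

The hash makes the domains arbitrary: a heavy flow and a light flow that collide in the same bucket block each other, which is the weakness RP3 addresses by building domains from processing requirements instead.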

  10. Our Proposal: Leverage an estimate of the packet processing delay. Instead of the arbitrary ordering domains created by a hash function, create ordering domains of packets with similar processing-delay requirements, so that a heavy-processing packet does not delay a light-processing packet in the ordering unit. Assumption: all packets within a given flow have similar processing requirements. Reminder: order must be preserved only within a flow.

  11. Processing Phases: Packet processing is divided into phases #1..#5 (disclaimer: this is not real packet-processing code). E.g.: IP forwarding = 1 phase; encryption = 10 phases.

  12. RP3 (Reordering Per Processing Phase) Algorithm: [Figure: a processing estimator maps each packet to one of K SN generators ahead of the PEs and the ordering unit.] All the packets in an ordering domain have the same number of processing phases (up to K). Lower similarity of processing delay affects the performance (reordering delay), but not the order!

  13. Knowledge Frameworks: At what stage are the packet processing requirements known? (1) Upon packet arrival. (2) Only at the processing start. (3) Only at the processing completion.

  14. RP3 Algorithm for Framework 3: Assumption: the packet processing requirements are known only when the processing completes. Example: a packet that finished all its processing after 1 processing phase is not delayed by another packet currently in its 2nd phase, because they must belong to different flows. [Timeline: packet A (2 phases) arrives before packet B (1 phase); B exits after its single phase without waiting for A.] Theorem: an ideal partition into phases would minimize the reordering delay to 0.

  15. RP3 Algorithm for Framework 3: But, in reality, phase durations vary. [Timeline: packet A (2 phases) and packet B (1 phase) with unequal phase lengths.]

  16. RP3 Algorithm for Framework 3: Each packet needs to go through several SN generators. After completing the i-th processing phase, it asks for the next SN from the (i+1)-th SN generator. [Timeline: A (2 phases) gets SN 1:1 and then SN 2:1; B (1 phase) gets SN 1:2.]

  17. RP3 Algorithm for Framework 3: When a packet requests a new SN, it cannot always get it immediately: the i-th SN generator grants a new SN to the oldest packet that has finished i processing phases. [Timeline: C (2 phases) requests its next SN and is granted it only after the older packets ahead of it.] There is no processing preemption!

  18. RP3 Framework 3:
  (1) A packet arrives and is assigned an SN by generator 1.
  (2) At the end of processing phase i, the packet sends a request for an SN from generator i+1; when granted, it updates its SN.
  (3) SN generator i: grant the token when SN == oldestSN; then increment oldestSN and NextSN.
  (4) PE: when all processing phases are finished, send the packet to the OU.
  (5) OU: complete the outstanding SN grants.
  (6) OU: when all SNs are granted, transmit the packet to the output.
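A minimal single-threaded sketch of the framework-3 grant chain (hypothetical Python, simplified from the six steps above; generators are numbered from 0 here, and the OU's final transmission is folded into the last grant): generator g+1 grants a new SN only when the requester holds the oldest outstanding SN of generator g.

```python
class RP3Sketch:
    """Simplified model of RP3 for framework 3 (illustrative, not the
    paper's code). Generator g orders the packets that have completed g
    phases; a grant goes only to the packet holding generator g's oldest
    outstanding SN, so order is preserved within each phase-count domain."""

    def __init__(self, K):
        self.next_sn = [0] * (K + 1)   # per-generator next SN to assign
        self.oldest = [0] * (K + 1)    # per-generator oldest ungranted SN
        self.waiting = [dict() for _ in range(K + 1)]  # gen -> {sn: state}
        self.out = []                  # transmitted packets, in order

    def arrive(self, pkt, phases):
        # step (1): assign an SN from generator 0 on arrival
        sn = self.next_sn[0]
        self.next_sn[0] += 1
        return {"pkt": pkt, "phases": phases, "gen": 0, "sn": sn}

    def phase_done(self, st):
        # step (2): a phase ended; request an SN from the next generator
        self.waiting[st["gen"]][st["sn"]] = st
        self._drain(st["gen"])

    def _drain(self, g):
        # step (3): grant while generator g's oldest SN is requesting
        while self.oldest[g] in self.waiting[g]:
            st = self.waiting[g].pop(self.oldest[g])
            self.oldest[g] += 1
            st["gen"] += 1
            st["sn"] = self.next_sn[st["gen"]]
            self.next_sn[st["gen"]] += 1
            if st["gen"] == st["phases"]:
                # steps (4)-(6), folded: all phases done, all SNs granted
                self.out.append(st["pkt"])
```

Replaying the slide-14 example shows the key property: a 1-phase packet B waits only until the older 2-phase packet A finishes its first phase, not until A finishes all of its processing.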

  19. Simulations: Reordering Delay vs. Processing Variability. Synthetic traffic: Poisson arrivals; processing requirements uniformly distributed over [1,10] phases; for a fair comparison, 10 hash buckets in the Hashed-SN algorithm; Zipf distribution of the packets across 300 flows. Phase processing delay variability: delay ~ U[min, max], variability = max/min, E[delay] = 100 time units. [Plot: mean reordering delay vs. phase processing delay variability; improvement by orders of magnitude, also with high phase processing delay variability; under ideal conditions, no reordering delay.]

  20. Simulations: Reordering Delay vs. Load. Real-life trace: CAIDA anonymized Internet traces. [Plot: mean reordering delay vs. load (%); improvement by orders of magnitude.] Note: reordering delay occurs even under low load.

  21. Summary: Novel reordering algorithms for parallel multi-core network processors reduce reordering delays. They rely on the fact that all packets of a given flow have similar required processing functions. Three frameworks define the stages at which the network processor learns the packet processing requirements. Analysis using simulations: reordering delays are negligible, both under synthetic traffic and real-life traces. An analytical model appears in the paper.

  22. Thank you.
