Software Switches in Computer Networks

 
 
Lecture 10, Computer Networks (198:552)
Fall 2019
 
Software Switches
 
Software-Defined Network (SDN)
 
[Diagram: one centralized control plane managing multiple data planes]
Why software switches?
 
Early roots in networking: first switches were fully in software
Until high link speeds forced everyone to make ASICs
 
As a tool for experimentation with new protocols (e.g., OpenFlow)
 
Advent of virtualization
Need flexible policies (i.e., flow rules) inside endpoints!
What policies?
 
Policies in virtualized switches
 
Tenant policies
Network virtualization: I want the physical network to look like my own, and nobody else is on it
 
Provider policies
Traffic must follow the ACLs and paths set by the provider
 
Topology “traversal”
Use the core of the DCN as a mesh of point-to-point tunnels (sample rules sketched below)
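To make these policies concrete, here is roughly how a provider ACL and a tunnel-traversal rule might be written against OVS's standard CLIs. This is an illustrative sketch only: the bridge br0, port vx0, port number 1, and all addresses are hypothetical.

    # Provider ACL (hypothetical subnets): drop traffic crossing tenant boundaries
    ovs-ofctl add-flow br0 "priority=100,ip,nw_src=10.1.0.0/16,nw_dst=10.2.0.0/16,actions=drop"

    # Topology traversal: add a flow-controlled VXLAN port, then steer traffic
    # from a (hypothetical) VM-facing port 1 into a point-to-point tunnel
    ovs-vsctl add-port br0 vx0 -- set interface vx0 type=vxlan options:remote_ip=flow
    ovs-ofctl add-flow br0 "priority=50,in_port=1,actions=set_field:192.0.2.10->tun_dst,output:vx0"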
Where should policies be implemented?
[Diagram: App instances running inside a virtual machine, container, or lambda, all sitting above the hypervisor (or) orchestrator]
 
Open vSwitch: Requirements
 
Support large and complex policies
 
Support updates to such policies
Q: why?
 
Allow the hypervisor/orchestrator to support user workloads
Don’t take up too many resources
 
Process packets with high performance
Provide high throughput and low delay
 
OVS design
First design: put OF tables in the kernel
 
Large policies: low performance with 100+ lookups per packet
Merging policies is problematic: cross-product explosion
Complex logic in kernel: rules with wildcards require complex algorithms
 
Idea 1: Microflow cache
 
Microflow: complete set of OF packet headers and metadata
Example: srcIP, dstIP, IP TTL, srcMAC, dstMAC
Use tuple space search to do one lookup per packet
 
[Diagram: a packet first hits the microflow cache in the kernel, implemented as a large hash table; on a hit it is forwarded directly, and on a miss it falls back to the OpenFlow table in user space]
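A minimal sketch of the microflow cache in Python (the field names, packet representation, and slow-path helper are all hypothetical stand-ins; the real cache lives in the kernel and is written in C):

    # Microflow cache: an exact-match hash table keyed on the complete set of
    # header fields, so a hit costs a single hash lookup.
    FIELDS = ("src_ip", "dst_ip", "ip_ttl", "src_mac", "dst_mac")  # illustrative

    def slow_path_openflow_lookup(pkt):
        # Stand-in for the full OpenFlow table pipeline in user space.
        return "forward"

    class MicroflowCache:
        def __init__(self):
            self.cache = {}                          # full header tuple -> action

        def lookup(self, pkt):
            key = tuple(pkt[f] for f in FIELDS)      # every field, exact match
            action = self.cache.get(key)
            if action is None:                       # miss: ask user space...
                action = slow_path_openflow_lookup(pkt)
                self.cache[key] = action             # ...and install the result
            return action

In the real cache the key includes every field OpenFlow can match on, including transport ports, so two TCP connections that differ only in source port occupy different entries — exactly the weakness the next slide discusses.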
 
Problems with micro-flows
 
Too many micro-flows: e.g., each TCP port
Many micro-flows may be short-lived!
Poor cache-hit rate
 
Can we cache the outcome of rule lookup directly?
 
Naive approach: cross-product explosion! (a toy calculation appears below)
Example: Table 1 on source IP, table 2 on destination IP
 
Recurring theme in this paper: avoid up-front (proactive) costs
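A toy calculation (numbers invented for illustration) of why the naive approach explodes: pre-merging a table that matches only on source IP with one that matches only on destination IP needs a cached entry per combination.

    # Cross-product explosion: proactively merging per-field tables
    # multiplies their sizes instead of adding them.
    src_rules = 50                 # table 1: rules on source IP (made-up count)
    dst_rules = 40                 # table 2: rules on destination IP
    print(src_rules + dst_rules)   # 90 rules when kept as separate tables
    print(src_rules * dst_rules)   # 2000 entries if merged up front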
 
Idea 2: Mega-flow cache
 
Build the cache of rules lazily, using just the fields accessed
Ex: contain just the src/dst IP combinations that appeared in packets
 
 
[Diagram: a packet first hits the megaflow cache in the kernel, searched via tuple space search; on a hit it is forwarded directly, and on a miss it falls back to the OpenFlow table in user space]
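A sketch of the megaflow cache in Python (illustrative: real OVS tracks per-bit masks in C, and the slow-path helper here is a hypothetical stand-in that also reports which fields it consulted):

    # Megaflow cache: entries are built lazily and match only the fields the
    # slow path actually read, so one entry covers many microflows.
    # Tuple space search: one exact-match hash table per distinct field set.
    def slow_path_lookup(pkt):
        # Stand-in: run the OpenFlow pipeline, recording the fields accessed.
        return "forward", ("src_ip", "dst_ip")

    class MegaflowCache:
        def __init__(self):
            self.tables = {}   # frozenset of field names -> {key tuple: action}

        def lookup(self, pkt):
            for fields, table in self.tables.items():    # one probe per tuple
                key = tuple(sorted((f, pkt[f]) for f in fields))
                action = table.get(key)
                if action is not None:
                    return action                        # hit
            action, fields_used = slow_path_lookup(pkt)  # miss: user space
            fields = frozenset(fields_used)
            key = tuple(sorted((f, pkt[f]) for f in fields))
            self.tables.setdefault(fields, {})[key] = action
            return action

Note the cost this implies: a hit can take one hash probe per distinct field set, which is the cost the algorithmic improvements on the next slide attack.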
 
Improvements to mega-flow caching
 
You have an OF table. What happens if you populate the megaflow blindly by concatenating the fields accessed on lookup?
Hint 1: consider flow tables that match on highly variable fields
Hint 2: consider priorities of rules with overlapping matches
Hint 3: consider the number of lookups vs. microflow cache
 
OVS introduced a series of algorithmic improvements
Priority sorting of mega-flow tables
Staged lookups starting with more static sets of fields (sketched below)
Prefix trie to detect non-overlapping longest-prefix matches
Combine with micro-flow cache!
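A sketch of staged lookups (illustrative Python: the four stages follow the paper's metadata/L2/L3/L4 grouping, but the incremental hashing is simplified to tuple prefixes, and both data structures are hypothetical stand-ins):

    # Staged lookup: order fields from most static to most variable and probe
    # incrementally. If no rule in this tuple can match after an early stage,
    # the later (more variable) fields are never consulted, so the megaflow
    # entry generated from this lookup can leave them wildcarded.
    STAGES = (
        ("in_port",),                # stage 1: metadata
        ("src_mac", "dst_mac"),      # stage 2: L2
        ("src_ip", "dst_ip"),        # stage 3: L3
        ("src_port", "dst_port"),    # stage 4: L4
    )

    def staged_lookup(stage_sets, rules, pkt):
        # stage_sets[i]: key prefixes of all rules through stage i;
        # rules: full key -> action. Both are illustrative structures.
        key = ()
        for i, fields in enumerate(STAGES):
            key += tuple(pkt[f] for f in fields)
            if key not in stage_sets[i]:
                return None, i   # early exit: stages beyond i stay wildcarded
        return rules.get(key), len(STAGES)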
 
Discussion of OVS
 
What you said
 
FAQs
 
#1 How do megaflows handle wildcards in OpenFlow rules?
 
FAQs
 
#2.1 What are the actual sizes of the microflow and megaflow caches?
 
#2.2 Do the caches overlap with the L1, L2, … caches of the machine?
 
FAQs
 
#3.1 What exactly is the overhead of passing a packet between the kernel and the user-space process?
 
#3.2 (How) Do the benefits of OVS caching carry over to user-space networking?
 
FAQs
 
#4.1 With OVS, are we limited to OpenFlow? Can we use more flexible protocols?
 
#4.2 How flexible and perf-critical is OVSDB?
 
Missing details in perf evaluation
 
[It would have helped] if the paper gave more information on how often, in their benchmark system, the real locations of virtual switches or their hypervisors were moved. Moving very often would inevitably incur overheads due to memory copying and cache misses.
 
Michael Wu
 
Missing details in perf evaluation
 
The authors experimented only with the effect of each optimization alone and of the whole combination. They did not consider the effect of combining two or three optimizations. Some combinations of two optimizations may achieve better performance and throughput.
 
Weizhi Wang
 
Missing details in perf evaluation
 
Is there any downside to making the transport of IP-tunneled packets virtual? Namely, will the probability of losing packets increase? [...] Wouldn't connecting to virtual datacenters cost more time, or can that be overlooked?
 
Xiaochang Chen
 
Deeper design issues
 
I feel it is possible for one VM to use reverse-engineering techniques, like measuring connection setup time, to sniff the connections of other VMs, similar to Phantom and Sphere's hack on CPU caching. Though I am not quite sure: since Open vSwitch is fully open-sourced, its hash function and seeds are open to everyone, so it's entirely possible to monitor the activities of other VMs from within one VM.
 
Siwei Feng
 
Improving performance
 
Another improvement that could be made would be specializing this idea to specific data centers and hardware. While OVS is made to serve data centers in general, research should be done to see how caches should be implemented at the hardware level to best serve the processes outlined in this paper.
 
Richard Fernandes
 
Is it possible to offload the flow table search and flow computation
to some special hardware, e.g., FPGA?
 
Qiongwen Xu
 
Improving performance
 
Staged lookup [...] statically divides fields into four groups. A better choice would be to adaptively weigh the importance of different headers in different scenarios.
 
Huixiong Qin
Slide Note

Slide material heavily adapted courtesy of Albert Greenberg, Changhoon Kim, Mohammad Alizadeh.
