Understanding Modular Layer 2 in OpenStack Neutron
Modular Layer 2 (ML2) is a core plugin introduced in OpenStack Neutron's Havana release. It interfaces with a variety of layer 2 network types and mechanisms through pluggable drivers, giving operators more flexibility and developers less duplicated code. It supersedes the monolithic Open vSwitch and Linuxbridge plugins, which are slated for deprecation, and offers a more modular, feature-rich approach to managing layer 2 networking in complex data centers.
Modular Layer 2 in OpenStack Neutron
Robert Kukura, Red Hat
Kyle Mestery, Cisco
1. I've heard the Open vSwitch and Linuxbridge Neutron plugins are being deprecated.
2. I've heard ML2 does some cool stuff!
3. I don't know what ML2 is, but I want to learn about it and what it provides.
What is Modular Layer 2?
- A new Neutron core plugin in Havana
- Modular:
  - Drivers for layer 2 network types and mechanisms that interface with agents, hardware, controllers, ...
  - Service plugins and their drivers for layer 3+
- Works with the existing L2 agents: openvswitch, linuxbridge, hyperv
- Deprecates the existing monolithic plugins: openvswitch, linuxbridge (a configuration sketch follows this list)
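Enabling ML2 and choosing drivers is done in the server's configuration. A minimal sketch of the [ml2] section of ml2_conf.ini, assuming the Havana option names (the driver choices here are illustrative, not prescriptive):

    [ml2]
    # Type drivers to load, and which types tenant networks may use.
    type_drivers = local,flat,vlan,gre,vxlan
    tenant_network_types = vlan,vxlan
    # Mechanism drivers are consulted in the order listed here.
    mechanism_drivers = openvswitch,linuxbridge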
Motivations For a Modular Layer 2 Plugin
Before Modular Layer 2 ...
[Diagram: the Neutron Server loads a single monolithic core plugin, either the Open vSwitch plugin OR the Linuxbridge plugin, never both.]
Before Modular Layer 2 ...
[Diagram: a vendor building a "Vendor X Plugin" against the Neutron Server laments: "I want to write a Neutron Plugin. But I have to duplicate a lot of DB, segmentation, etc. work. What a pain. :("]
ML2 Use Cases
- Replace existing monolithic plugins
  - Eliminate redundant code
  - Reduce development & maintenance effort
- New features
  - Top-of-Rack switch control
  - Avoid tunnel flooding via L2 population
  - Many more to come ...
- Heterogeneous deployments
  - Specialized hypervisor nodes with distinct network mechanisms
  - Integrate *aaS appliances
  - Roll new technologies into existing deployments
The Modular Layer 2 (ML2) Plugin is a framework allowing OpenStack Neutron to simultaneously utilize the variety of layer 2 networking technologies found in complex real-world data centers.
What's Similar?
ML2 is functionally a superset of the monolithic openvswitch, linuxbridge, and hyperv plugins:
- Based on NeutronDbPluginV2
- Models networks in terms of provider attributes
- RPC interface to L2 agents
- Extension APIs
What's Different?
ML2 introduces several innovations to achieve its goals:
- Cleanly separates management of network types from the mechanisms for accessing those networks
  - Makes types and mechanisms pluggable via drivers
  - Allows multiple mechanism drivers to access the same network simultaneously
  - Optional features packaged as mechanism drivers
- Supports multi-segment networks
- Flexible port binding
- L3 router extension integrated as a service plugin
ML2 Architecture Diagram
[Diagram: within the Neutron Server, the ML2 plugin sits behind the API extensions and contains two managers. The Type Manager loads TypeDrivers for GRE, VLAN, and VXLAN; the Mechanism Manager loads MechanismDrivers for Arista, Cisco Nexus, Hyper-V, L2 Population, Linuxbridge, Open vSwitch, and Tail-f NCS.]
Multi-Segment Networks
[Diagram: one network spanning three segments: VXLAN 123567, VLAN 37 on physnet1, and VLAN 413 on physnet2, with VM 1, VM 2, and VM 3 attached at different points.]
- Created via the multi-provider API extension
- Segments bridged administratively (for now)
- Ports associated with the network, not a specific segment
- Ports bound automatically to a segment with connectivity
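To make the multi-provider extension concrete, here is a hedged sketch of the API request that would create the network in the diagram; the "segments" attribute follows our reading of the Havana multiprovidernet extension, and the network name is a placeholder:

    POST /v2.0/networks
    {
        "network": {
            "name": "multi-segment-net",
            "segments": [
                {"provider:network_type": "vxlan",
                 "provider:segmentation_id": 123567},
                {"provider:network_type": "vlan",
                 "provider:physical_network": "physnet1",
                 "provider:segmentation_id": 37},
                {"provider:network_type": "vlan",
                 "provider:physical_network": "physnet2",
                 "provider:segmentation_id": 413}
            ]
        }
    }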
Type Driver API

    class TypeDriver(object):
        @abstractmethod
        def get_type(self): pass

        @abstractmethod
        def initialize(self): pass

        @abstractmethod
        def validate_provider_segment(self, segment): pass

        @abstractmethod
        def reserve_provider_segment(self, session, segment): pass

        @abstractmethod
        def allocate_tenant_segment(self, session): pass

        @abstractmethod
        def release_segment(self, session, segment): pass
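To make the contract concrete, here is a minimal hypothetical TypeDriver, assuming the Havana module path neutron.plugins.ml2.driver_api; a real driver would track segment allocations in the database through the session argument:

    from neutron.plugins.ml2 import driver_api as api

    class NoopTypeDriver(api.TypeDriver):
        """Hypothetical driver for a network type with no segmentation state."""

        def get_type(self):
            # The network_type string this driver handles.
            return 'noop'

        def initialize(self):
            pass

        def validate_provider_segment(self, segment):
            # A 'noop' segment carries no segmentation_id, so nothing to check.
            pass

        def reserve_provider_segment(self, session, segment):
            # Nothing to reserve; a VLAN driver would mark the ID used in the DB.
            pass

        def allocate_tenant_segment(self, session):
            # Returning None tells ML2 this type cannot back tenant networks.
            return None

        def release_segment(self, session, segment):
            pass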
Mechanism Driver API

    class MechanismDriver(object):
        @abstractmethod
        def initialize(self): pass

        def create_network_precommit(self, context): pass
        def create_network_postcommit(self, context): pass
        def update_network_precommit(self, context): pass
        def update_network_postcommit(self, context): pass
        def delete_network_precommit(self, context): pass
        def delete_network_postcommit(self, context): pass

        def create_subnet_precommit(self, context): pass
        def create_subnet_postcommit(self, context): pass
        def update_subnet_precommit(self, context): pass
        def update_subnet_postcommit(self, context): pass
        def delete_subnet_precommit(self, context): pass
        def delete_subnet_postcommit(self, context): pass

        def create_port_precommit(self, context): pass
        def create_port_postcommit(self, context): pass
        def update_port_precommit(self, context): pass
        def update_port_postcommit(self, context): pass
        def delete_port_precommit(self, context): pass
        def delete_port_postcommit(self, context): pass

        def bind_port(self, context): pass
        def validate_port_binding(self, context): return False
        def unbind_port(self, context): pass

    class NetworkContext(object):
        @abstractproperty
        def current(self): pass

        @abstractproperty
        def original(self): pass

        @abstractproperty
        def network_segments(self): pass
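A minimal hypothetical MechanismDriver, again assuming Havana-era module paths; only initialize() is required, and the postcommit calls are where a driver would push state to external systems:

    from neutron.openstack.common import log
    from neutron.plugins.ml2 import driver_api as api

    LOG = log.getLogger(__name__)

    class LoggerMechanismDriver(api.MechanismDriver):
        """Hypothetical driver that just logs network lifecycle events."""

        def initialize(self):
            pass

        def create_network_precommit(self, context):
            # Runs inside the DB transaction; raise here to roll it back.
            LOG.debug("about to create network %s", context.current['id'])

        def create_network_postcommit(self, context):
            # Runs after commit; safe place to call out to a device or controller.
            LOG.info("network %s created", context.current['id'])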
Port Binding
Port binding determines the values of a port's binding:vif_type and binding:capabilities attributes and selects the segment to use. It occurs when binding:host_id is set on a port that lacks an existing valid binding.
- The ML2 plugin calls bind_port() on the registered MechanismDrivers, in the order listed in the configuration, until one succeeds or all have been tried.
- A driver determines whether it can bind based on:
  - context.network.network_segments
  - context.current['binding:host_id']
  - context.host_agents()
- For L2 agent drivers, binding requires a live L2 agent on the port's host that:
  - Supports the network_type of a segment of the port's network
  - Has a mapping for that segment's physical_network, if applicable
- If it can bind the port, the driver calls context.set_binding() with the binding details.
- If no driver succeeds, the port's binding:vif_type is set to BINDING_FAILED.

    class PortContext(object):
        @abstractproperty
        def current(self): pass

        @abstractproperty
        def original(self): pass

        @abstractproperty
        def network(self): pass

        @abstractproperty
        def bound_segment(self): pass

        @abstractmethod
        def host_agents(self, agent_type): pass

        @abstractmethod
        def set_binding(self, segment_id, vif_type, cap_port_filter): pass
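Putting the PortContext methods together, here is a hedged sketch of how an agent-based driver's bind_port() might follow the sequence above; the agent type string and the bridge_mappings key mirror the Havana OVS agent, but treat them as assumptions:

    from neutron.extensions import portbindings
    from neutron.plugins.ml2 import driver_api as api

    OVS_AGENT_TYPE = 'Open vSwitch agent'  # assumed agent_type string

    class ExampleOvsBindingDriver(api.MechanismDriver):

        def initialize(self):
            pass

        def bind_port(self, context):
            for agent in context.host_agents(OVS_AGENT_TYPE):
                if not agent.get('alive'):
                    continue  # binding requires a live L2 agent on the host
                mappings = agent['configurations'].get('bridge_mappings', {})
                for segment in context.network.network_segments:
                    if self._can_bind(segment, mappings):
                        context.set_binding(segment[api.ID],
                                            portbindings.VIF_TYPE_OVS,
                                            True)  # cap_port_filter
                        return
            # No set_binding() call: ML2 tries the next driver, and if all
            # fail, the port's binding:vif_type becomes BINDING_FAILED.

        def _can_bind(self, segment, mappings):
            if segment[api.NETWORK_TYPE] in ('gre', 'vxlan'):
                return True  # tunnel types need no physical_network mapping
            return segment.get(api.PHYSICAL_NETWORK) in mappings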
Type Drivers in Havana
The following segmentation types are supported by ML2 in the Havana release (each type driver has its own configuration section; see the sketch below):
- local
- flat
- VLAN
- GRE
- VXLAN
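A hedged sketch of the per-type sections of ml2_conf.ini, using the Havana-era option names; the physical network names and ranges are invented for illustration:

    [ml2_type_flat]
    flat_networks = physnet1

    [ml2_type_vlan]
    network_vlan_ranges = physnet1:1000:2999

    [ml2_type_gre]
    tunnel_id_ranges = 1:1000

    [ml2_type_vxlan]
    vni_ranges = 1001:2000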
Mechanism Drivers in Havana
The following ML2 MechanismDrivers exist in Havana:
- Arista
- Cisco Nexus
- Hyper-V Agent
- L2 Population
- Linuxbridge Agent
- Open vSwitch Agent
- Tail-f NCS
Before ML2 L2 Population MechanismDriver
[Diagram: VMs A through I spread across Hosts 1 through 4, joined by a full mesh of tunnels.]
VM A wants to talk to VM G. VM A sends a broadcast packet, which is replicated to the entire tunnel mesh.
With ML2 L2 Population MechanismDriver
[Diagram: the same hosts and VMs; Host 1 answers ARP locally via proxy ARP.]
The ARP request from VM A for VM G is intercepted and answered using a pre-populated neighbor entry. Traffic from VM A to VM G is encapsulated and sent directly to Host 4 according to the bridge forwarding table entry.
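Turning on L2 population is a configuration change rather than a code change. A sketch, assuming the Havana option names:

    # ml2_conf.ini on the Neutron server:
    [ml2]
    mechanism_drivers = openvswitch,l2population

    # ovs_neutron_plugin.ini on each host running the OVS agent:
    [agent]
    l2_population = True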
ML2 Futures: Deprecation Items
- The future of the Open vSwitch and Linuxbridge plugins:
  - Planned for deprecation in Icehouse
  - ML2 supports all their functionality
  - ML2 works with the existing OVS and Linuxbridge agents
  - No new features are being added to the OVS and Linuxbridge plugins in Icehouse
- A migration tool is being developed
Plugin vs. ML2 MechanismDriver?
- Advantages of writing an ML2 driver instead of a new monolithic plugin:
  - Much less code to write (or clone) and maintain
  - New Neutron features are supported as they are added
  - Support for heterogeneous deployments
- Vendors integrating new plugins should consider an ML2 driver instead
  - Existing plugins may want to migrate to ML2 as well
ML2 With Current Agents
- The existing ML2 plugin works with the existing agents
- Separate agents for Linuxbridge, Open vSwitch, and Hyper-V
[Diagram: the Neutron Server's ML2 plugin communicates over the network with a Linuxbridge agent on Host A, a Hyper-V agent on Host B, and Open vSwitch agents on Hosts C and D.]
ML2 With Modular L2 Agent
- Future direction is to combine the open source agents
- A single agent that can support both Linuxbridge and Open vSwitch
- Pluggable drivers for additional vSwitches, Infiniband, SR-IOV, ...
[Diagram: the Neutron Server's ML2 plugin communicates over the network with a Modular Agent on each of Hosts A through D.]
What the Demo Will Show
- ML2 running with multiple MechanismDrivers: openvswitch and cisco_nexus
- Booting multiple VMs on multiple compute hosts
- The hosts are running Fedora
- Configuration of VLANs across both the virtual and physical infrastructure (a sketch of comparable CLI commands follows this list)
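For reference, the demo's network and VMs could be created with Havana-era CLI commands along these lines; the image, flavor, and names are placeholders, not the presenters' exact commands:

    neutron net-create demo-net
    neutron subnet-create demo-net 10.0.0.0/24 --name demo-subnet
    nova boot --image fedora --flavor m1.small --nic net-id=<demo-net-id> vm1
    nova boot --image fedora --flavor m1.small --nic net-id=<demo-net-id> vm2
    # ML2 allocates a tenant VLAN for demo-net; the openvswitch MechanismDriver
    # tags the VIFs and br-eth2 ports, while the cisco_nexus MechanismDriver
    # trunks the VLAN on the physical switch ports.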
ML2 Demo Setup
[Diagram: Host 1 and Host 2 each have br-int connected to br-eth2, with eth2 cabled to a Cisco Nexus switch (Host 1 to port eth2/1, Host 2 to port eth2/2). Host 1 runs nova api, neutron server, neutron dhcp, neutron l3 agent, nova compute, the neutron ovs agent, and VM1; Host 2 runs nova compute, the neutron ovs agent, and VM2.]
- On each host, the VLAN is added on the VM's VIF and on the br-eth2 ports by the ML2 OVS MechanismDriver.
- The ML2 Cisco Nexus MechanismDriver trunks the VLAN on switch ports eth2/1 and eth2/2.
- VM1 can ping VM2: we've successfully completed the standard network test.