Providing more bandwidth at a lower cost is the main driver for the present evolution of Ethernet. To achieve that goal, some legacy limitations of enterprise Ethernet have been addressed to make Ethernet a carrier transport technology (see our previous Blog article: The Versatility of Ethernet). Ethernet can be both a service and a transport technology: it can enable Layer 2, Layer 3, and managed services, and it has a lower cost per bit compared to other carrier access technologies.
The Metro Ethernet Forum (MEF) considers five attributes that define carrier Ethernet: standard services, scalability, resiliency, QoS, and management. Ethernet standard services can be point-to-point (E-Line), multipoint LAN (E-LAN), or rooted-multipoint (E-Tree). They can be port-based (EPL, EP-LAN, EP-Tree) or VLAN-based (EVPL, EVP-LAN, EVP-Tree).
Ethernet-based access can be supplied over copper or fiber physical Ethernet and over a large range of legacy access infrastructure, in particular DS-1/E-1, DS-3, SONET/SDH, G/BPON, and wireless.
The IEEE has completed or is working on a number of Ethernet protocol extensions such as Provider Bridges (PB) and Provider Backbone Bridges (PBB) for Ethernet transport, and Ethernet OAM for management. The IETF and the IP/MPLS Forum are working on interworking Ethernet transport with IP/MPLS and other transport technologies, while the MEF is working on the specifications of the Ethernet interface and the Ethernet service.
Following are some new pieces of the carrier Ethernet puzzle.
Provider Bridges (PB) or IEEE 802.1ad
802.1ad (also called Q-in-Q or 802.1Q-in-802.1Q) introduces, for the service provider, a service VLAN tag (S-Tag) that is added to the customer VLAN/802.1Q tag (C-Tag) as the customer Ethernet frames ingress into the service provider network. The goal of 802.1ad is to offer a transparent Ethernet service to the customer. To that end, customer VLANs are forwarded over the service provider network using the S-Tag, which identifies the customer service instance. The service provider edge switch (PE) must still learn the customer MAC addresses. Like the C-Tag, the S-Tag carries a 12-bit VLAN ID and is therefore limited to 4,094 service instances.
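The tag-stacking operation can be sketched in a few lines of Python. The TPID values (0x88A8 for the S-Tag, 0x8100 for the C-Tag) come from the standards; the frame contents and helper names below are illustrative, not from any real implementation:

```python
import struct

# TPID (Tag Protocol Identifier) values from IEEE 802.1ad / 802.1Q.
S_TAG_TPID = 0x88A8   # service (provider) tag
C_TAG_TPID = 0x8100   # customer tag

def vlan_tag(tpid: int, vid: int, pcp: int = 0, dei: int = 0) -> bytes:
    """Pack a 4-byte VLAN tag: 16-bit TPID, then 3-bit PCP, 1-bit DEI, 12-bit VID."""
    if not 0 <= vid <= 4094:
        raise ValueError("VLAN ID must fit in 12 bits (4,094 usable values)")
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", tpid, tci)

def push_s_tag(frame: bytes, s_vid: int) -> bytes:
    """Insert the provider S-Tag right after the destination/source MACs (first 12 bytes)."""
    return frame[:12] + vlan_tag(S_TAG_TPID, s_vid) + frame[12:]

# A customer frame already carrying a C-Tag (VID 100):
customer = b"\xaa" * 6 + b"\xbb" * 6 + vlan_tag(C_TAG_TPID, 100) + b"\x08\x00" + b"payload"
# At ingress, the PE pushes an S-Tag for service instance 200:
provider = push_s_tag(customer, 200)
```

The S-Tag is simply stacked in front of the untouched C-Tag, which is what makes the service transparent to the customer.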
Provider Backbone Bridges (PBB) or IEEE 802.1ah
802.1ah (also called MAC-in-MAC) addresses the scalability limitations of 802.1ad for backbone networks. Customer (802.1ad) frames are tunneled inside service provider (802.1ah) frames. The VLAN tag of the service provider frame is called the B-Tag, and the new MAC header is called the backbone MAC (B-MAC). To increase the scale of the backbone network, the 802.1ah frame includes a 24-bit service instance identifier, the I-SID carried in the I-Tag (so instead of being limited to 4,094 service instances, 16 million service instances are now possible!). The service provider backbone core bridges (BCB) forward the Ethernet frames based on the backbone MAC, while the backbone edge bridges (BEB) need to learn only the MAC addresses of the CPEs that they support.
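The encapsulation can be sketched the same way. The B-Tag reuses the S-Tag format, the I-Tag TPID 0x88E7 and the 24-bit I-SID come from 802.1ah, and the addresses and helper name are illustrative:

```python
import struct

B_TAG_TPID = 0x88A8   # backbone VLAN tag (same format as an S-Tag)
I_TAG_TPID = 0x88E7   # backbone service instance tag (I-Tag)

def mac_in_mac(b_da: bytes, b_sa: bytes, b_vid: int, i_sid: int, inner: bytes) -> bytes:
    """Encapsulate a customer (802.1ad) frame in an 802.1ah backbone frame."""
    if not 0 <= i_sid < 2 ** 24:
        raise ValueError("I-SID is a 24-bit service instance identifier")
    b_tag = struct.pack("!HH", B_TAG_TPID, b_vid & 0x0FFF)
    # I-Tag: TPID + 32 bits (priority/flags in the top byte, 24-bit I-SID below).
    i_tag = struct.pack("!HI", I_TAG_TPID, i_sid)
    return b_da + b_sa + b_tag + i_tag + inner

# The BEB wraps the customer frame; the BCBs only ever look at the backbone MACs.
backbone = mac_in_mac(b"\x01" * 6, b"\x02" * 6, 10, 0xABCDEF, b"customer-frame")
```

With 24 bits, 2^24 = 16,777,216 distinct service instances are addressable, which is where the "16 million" figure comes from.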
Deploying VPLS with a PBB Access Network
Since service providers have already invested in an IP/MPLS core network, the emergence of new Ethernet transport networks will succeed only if those networks can be integrated with existing IP/MPLS networks. One winning deployment integration for PBB is to enhance the value of large-scale MPLS Layer 2 VPNs, or VPLS (see our previous Blog article: MPLS PWS VPLS and IP VPNs Services). For a large number of CEs, VPLS can be scaled through H-VPLS, where two bridging domains are created by dividing the PEs into N-PEs (providing the LSP trunks for the VPLS) and U-PEs (connecting to the CEs).
CEs can be linked to the U-PEs through PB/802.1ad, and U-PEs to N-PEs through a PBB/802.1ah network. The customer MACs can be mapped to backbone MACs, with MAC-in-MAC encapsulation performed at the N-PE. This enables the N-PEs to switch according to the B-MAC and, as a consequence, significantly decreases the number of full-mesh pseudowires required in the core.
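The scaling benefit is simple arithmetic: a full mesh among n PEs needs n(n-1)/2 pseudowires, and H-VPLS restricts the mesh to the N-PEs. The PE counts below are illustrative, not from the article:

```python
def full_mesh_pw(n: int) -> int:
    """Pseudowires needed for a full mesh among n PEs: n choose 2."""
    return n * (n - 1) // 2

# Flat VPLS: every one of 200 PEs joins the mesh.
flat = full_mesh_pw(200)   # 19,900 pseudowires
# H-VPLS: only 20 N-PEs are fully meshed; the U-PEs hang off them.
hier = full_mesh_pw(20)    # 190 pseudowires in the core
```

Going from 19,900 to 190 core pseudowires is the kind of reduction that makes the B-MAC switching at the N-PEs worthwhile.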
Ethernet OAM or IEEE 802.1ag and ITU-T Y.1731
Ethernet OAM has two main functions for the Ethernet data plane: fault management, defined by IEEE 802.1ag, and performance management, defined by ITU-T Y.1731. It leverages the same frame format and forwarding mechanisms as the data path. Fault management includes the continuity check message (CCM), loopback (LB), and link trace (LT). CCM detects a link failure, while LB is similar to IP ping and LT to IP traceroute. Performance monitoring includes measurements of loss, delay, and throughput of the Ethernet frames.
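Because OAM shares the data-path frame format, each message type is just an OpCode in a small common header. A minimal sketch of reading that 802.1ag CFM header follows; the OpCode values are from the standard, while the `classify` helper is illustrative:

```python
# IEEE 802.1ag CFM OpCodes (second byte of the 4-byte common CFM header).
CFM_OPCODES = {
    1: "CCM",   # Continuity Check Message (periodic heartbeat)
    2: "LBR",   # Loopback Reply
    3: "LBM",   # Loopback Message (the Ethernet "ping" request)
    4: "LTR",   # Linktrace Reply
    5: "LTM",   # Linktrace Message (the Ethernet "traceroute" request)
}

def classify(cfm_pdu: bytes) -> str:
    """Read the MD level and OpCode from a CFM common header."""
    md_level = cfm_pdu[0] >> 5          # top 3 bits of the first byte
    opcode = cfm_pdu[1]
    return f"MD level {md_level}: {CFM_OPCODES.get(opcode, 'unknown')}"
```

A loss of expected CCMs signals a fault, while LBM/LBR and LTM/LTR pairs give the ping- and traceroute-like diagnostics mentioned above.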
Adding a Control Plane to Ethernet: PBB Traffic Engineering (PBB-TE) or IEEE 802.1Qay, and Provider Link State Bridging (PLSB) or IEEE 802.1aq
While still at an early proposal phase, PBB-TE aims to provide traffic-engineered and resilient paths by adding a GMPLS/GELS control plane (see our previous Blog article: Replacing the Spanning Tree with GELS) to PBB. PLSB aims to provide shortest-path bridging by adding a different control plane, whose routing functions are based on IS-IS.
Adding a control plane to Ethernet is the most controversial piece of the carrier Ethernet puzzle among equipment vendors in standards organizations and industry forums. Ethernet switch vendors have an interest in increasing the level of functionality in their switches, while IP/MPLS routing vendors want to keep MPLS as the control plane for IP and as the transport technology for Ethernet, IP, and many legacy protocols.
It goes without saying that the market will decide which of these extensions to make Ethernet “carrier class” solve real problems, and when they will be deployed successfully.
Note: I would like to thank Marc Lasserre from Alcatel-Lucent, Andy Malis from Verizon Communications and the IP/MPLS Forum and Luyuan Fang from Cisco Systems, for their inspiring materials and for having shared with me their present research on Carrier Ethernet and MPLS.
Note: The picture above is a painting by Claude Monet, “Bennecourt”.
Copyright © 2005-2008 by Serge-Paul Carrasco. All rights reserved.
Contact Us: asvinsider at gmail dot com.