On MPLS Forwarding Performance Myths
Whenever I claim that the initial use case for MPLS was improved forwarding performance (using the RFC that matches the IETF MPLS BoF slides as supporting evidence), someone inevitably comes up with a source claiming something along these lines:
The idea of speeding up the lookup operation on an IP datagram turned out to have little practical impact.
That might be true1, although I do remember how hard it was for Cisco to build the first IP forwarding hardware in the AGS+ Cbus controller. Switching labels would be much faster (or at least cheaper), but the time it took to do a single forwarding table lookup was never the main consideration. It was all about the aggregate forwarding performance of core devices.
Anyhow, Duty Calls. It’s time for another archeology dig. Unfortunately, most of the primary sources irrecoverably went to /dev/null, and personal memories are never reliable; comments are most welcome.
It was the mid-1990s, the Internet was taking off, and a few large US ISPs had a bit of a problem. They had too much traffic to use routers as core network devices; ATM switches were the only alternative.
To put that claim in perspective, you have to understand how “fast” routers were in those days:
- AGS+ had a 533 Mbps backplane (Cbus, source)
- Cisco 7000 was just a better implementation of the same concepts, and it looks like the CxBus had the same speed as the original Cbus (based on this source).
- Cisco 7500, launched in 1995, had a 2.1 Gbps CyBus2
- Line cards were laughable (by today’s standards). The fastest line card had two Fast Ethernet ports (200 Mbps, or 400 Mbps of marketing bandwidth).
On the other hand, Cisco’s LightStream 1010 ATM switch (also launched in 1995) had 5 Gbps of bandwidth (source). Keep in mind that although LS1010 was an excellent product, it was a late-to-market me-too entry-level switch. Other ATM switches had even higher performance.
The service providers were thus forced to combine routers at the network edge (because they needed IP forwarding) with ATM switches at the network core (to get sufficient aggregate forwarding bandwidth). There were “just” two problems with that approach:
- The only (working) way to interconnect routers and ATM switches was to build point-to-point ATM virtual circuits (VCs) between routers (using a network management system) and then run routing protocols over the full mesh of ATM VCs. In a word, a spaghetti-mess nightmare (see the back-of-the-envelope sketch after this list).
- The customers were spending money (that Cisco wanted to get) buying ATM switches from other vendors.
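To get a feel for how quickly the first problem blows up, here is a back-of-the-envelope sketch (Python, with made-up router counts, not historical data): a full mesh of N routers needs N*(N-1)/2 VCs, and every router ends up with N-1 IGP neighbors across those VCs.

```python
# Back-of-the-envelope: full mesh of ATM VCs between N edge routers.
# Router counts below are made-up examples, not historical data.
def full_mesh_vcs(routers: int) -> int:
    """Point-to-point VCs needed to fully mesh the routers."""
    return routers * (routers - 1) // 2

for routers in (10, 50, 200):
    vcs = full_mesh_vcs(routers)
    # Every router also maintains (routers - 1) IGP adjacencies over those VCs.
    print(f"{routers:>4} routers -> {vcs:>6} VCs, "
          f"{routers - 1:>4} IGP neighbors per router")
```

The quadratic growth is the real killer: every new core-facing router means provisioning a new VC to every existing router through the network management system, plus one more IGP adjacency on each of them.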
Now, imagine you had a technology that would:
- Allow a seamless integration of all devices in the network
- Have a unified control plane running IP routing protocols
- Use labels in the core (reusing the ATM VPI/VCI header) to retain the benefits of high-speed ATM forwarding while mapping IP packets into those labels at the slower network edge (see the sketch after this list)
- Be available from a single vendor3
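In forwarding terms, the core of that idea is tiny. Here is a minimal Python sketch of the edge/core split, with made-up prefixes, labels, and next hops (the real thing distributes the label bindings with TDP/LDP instead of static tables): the edge router does one longest-prefix match to pick a label, and every core hop only does an exact-match label swap, which on an ATM switch is just a VPI/VCI rewrite.

```python
import ipaddress

# Hypothetical label bindings -- in reality they are distributed by TDP/LDP.
EDGE_FIB = {                                   # prefix -> (label, next hop)
    ipaddress.ip_network("192.0.2.0/24"):    (17, "core-1"),
    ipaddress.ip_network("198.51.100.0/24"): (18, "core-1"),
}
CORE_LFIB = {                                  # in-label -> (out-label, next hop)
    17: (42, "core-2"),                        # on an ATM-LSR: a VPI/VCI rewrite
    18: (43, "core-2"),
}

def edge_impose(dst_ip: str):
    """Edge router: longest-prefix match once, then push the bound label."""
    addr = ipaddress.ip_address(dst_ip)
    matches = [p for p in EDGE_FIB if addr in p]
    best = max(matches, key=lambda p: p.prefixlen)    # longest match wins
    return EDGE_FIB[best]

def core_swap(label: int):
    """Core switch: a single exact-match lookup, then swap the label."""
    return CORE_LFIB[label]

label, next_hop = edge_impose("192.0.2.99")
print("edge:", label, next_hop)          # -> edge: 17 core-1
print("core:", core_swap(label))         # -> core: (42, 'core-2')
```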
And that’s (according to how this was explained to me when I asked, “Who would ever need this?”) why we got Tag Switching. Read the early Tag Switching information (or even the LDP RFC), and you’ll see how heavily biased toward ATM (cell-mode MPLS) it was4. Most of the TDP/LDP complexity comes from dealing with hardware that:
- Could not do IP lookups
- Could not preallocate labels to every prefix in the network because the VPI/VCI forwarding table was limited
- Could not even merge two data streams into a single one. Streams from two ingress routers had to be kept separate until they reached the egress router5 (rough numbers after this list).
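To see why the last two constraints hurt so much, here is a rough (and entirely made-up) bit of label accounting for a single egress router: with stream merging you need one label per prefix, without it you need one VC per prefix per ingress router, which quickly collides with the limited VPI/VCI table.

```python
# Rough label/VC accounting for one egress router (made-up numbers).
prefixes_behind_egress = 10_000    # prefixes reachable through the egress router
ingress_routers = 50               # edge routers sending traffic toward it

labels_with_merge = prefixes_behind_egress                    # one label per prefix
vcs_without_merge = prefixes_behind_egress * ingress_routers  # one VC per prefix, per ingress

print(f"with merging:    {labels_with_merge:>10,} labels")
print(f"without merging: {vcs_without_merge:>10,} VCs")   # far beyond typical VPI/VCI table sizes
```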
Not surprisingly, someone quickly figured out that one could use the same concepts on Frame Relay6 and point-to-point links (merging multiple layer-2 transport technologies into a single label space). When the labels were no longer limited to ATM headers, it wasn’t too hard to think of the label stack7, and then get really creative and use the label stack to implement services on top of the transport label-switched paths.
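On the wire, that label stack eventually became nothing more than a sequence of 4-byte label stack entries in front of the IP packet (the frame-mode shim header later standardized in RFC 3032: a 20-bit label, three experimental/TC bits, a bottom-of-stack bit, and a TTL). A quick sketch, with made-up label values, of pushing a service label underneath a transport label:

```python
import struct

def encode_entry(label: int, tc: int, bottom: bool, ttl: int) -> bytes:
    """Encode one 4-byte MPLS label stack entry (RFC 3032 layout)."""
    word = (label << 12) | (tc << 9) | (int(bottom) << 8) | ttl
    return struct.pack("!I", word)

def push_stack(labels, ttl=255) -> bytes:
    """Build a label stack; only the innermost (last) entry has S=1."""
    return b"".join(
        encode_entry(label, tc=0, bottom=(i == len(labels) - 1), ttl=ttl)
        for i, label in enumerate(labels)
    )

# Made-up values: transport label 42 on top, service (e.g., VPN) label 100 below it.
print(push_stack([42, 100]).hex())   # -> 0002a0ff000641ff
```

The top label gets swapped hop by hop; anything below it rides untouched to the egress device.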
MPLS/VPN was the first such service (and the first time most people heard of MPLS), and the rest is history. ATM is long gone, cell-mode MPLS died even before that, and we’re still using frame-mode MPLS transport and MPLS/VPN technologies.
- Although that same source once claimed SDN was a success 🤷‍♂️. A grain of salt might be advised. ↩︎
- I have a vague recollection that it had multiple buses, but couldn’t find any supporting information, apart from the fact that the Cisco 7505 had 1 Gbps of forwarding bandwidth while the higher-end models (7507/7513) had two. Comments welcome. ↩︎
- And, even better, only work on gear from that single vendor. Why do you think everyone else was so interested in standardizing something that was notably different from Cisco’s initial implementation? ↩︎
- My MPLS/VPN book had two chapters on MPLS-over-ATM describing cell-mode MPLS (the “real” ATM MPLS) and the frame-mode MPLS over ATM VCs (router-to-router MPLS over ATM). If you don’t have the book on your dusty bookshelf, I’m sure you’ll find a stolen PDF in a dark corner of the Internet. ↩︎
- Unless your ATM switch supported the VC Merge feature, which requires heavy buffering in transit switches. Yeah, we’re back to the shallow/deep buffer discussion. Some things never change. ↩︎
- ChatGPT claims the idea is usually credited to Yakov Rekhter. That sounds about right. ↩︎
The LS-1010 was a further development of the A100. Cisco acquired a smaller company (LightStream) to get this product.
The LS-1010 was also quite different from its competitors, since it had SVC support.
Thanks a million!
"CyBus—Cisco Extended Bus. A 1.067-gigabits-per-second (Gbps) data bus for interface processors (two CyBuses are used in the Cisco 7507)"
For the Cisco 7513 "The dual-CyBus backplane has 13 slots: six interface processor slots, 0 through 5 (CyBus 0), five interface processor slots, 8 through 12 (CyBus 1)"
I still remember that one of my first Tag Switching POCs, shortly after I joined Cisco, used a 7200 with a special software release (control plane) connected back-to-back to an ATM LS switch (data plane).
Cell-mode MPLS was heavily dependent on ATM PNNI to enable routers to dynamically set up switched virtual circuits (SVCs) within the ATM network.
Not sure when exactly PNNI was invented and implemented, or who came up with it, but I guess it must have been in the mid-90s...?
PNNI itself is, IIRC, in essence nothing more than the ATM incarnation of CLNS, using OSI addressing and IS-IS routing - one of the rare cases (if not the only one?) where you'd meet CLNS in the wild outside of SONET/SDH networks back in the late 1990s / early-to-mid 2000s...
In frame-mode MPLS, ATM SVCs basically became LSPs, and LDP (or RSVP) would take care of the "circuit setup".
It's been a few years, but yeah, I'm pretty sure it was Yakov who was behind Tag Switching. I've also seen a lot of people claiming that MPLS didn't provide faster forwarding because it is all done in hardware anyway. However, a longest match in CEF required four lookups: three levels of the 16-8-8 stride mtrie, plus the adjacency table as the fourth. A single exact-match lookup on a label was much faster. In the end, if the forwarding ASIC is fast enough to run at wire rate while still performing the four lookups, it becomes moot.
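For the curious, a toy illustration of that point (Python, nothing like the actual CEF code, and with made-up forwarding entries): a 16-8-8 stride mtrie walk costs three node lookups plus an adjacency-table lookup, while a label needs a single exact-match index.

```python
# Toy model of the mtrie-vs-label argument above -- not actual CEF code.
# A 16-8-8 stride mtrie: three node lookups, then the adjacency table.
# (A real mtrie expands prefixes shorter than /32 into these arrays; omitted here.)
ADJACENCY = {7: ("GigabitEthernet0/0", "next hop 10.0.0.2")}   # made-up adjacency
MTRIE_ROOT = {0xC000: {0x02: {0x01: 7}}}                       # 192.0.2.1 -> adjacency 7
LFIB = {17: ("swap to 42", "ATM0/0")}                          # made-up label entry

def mtrie_lookup(dst: int):
    node1 = MTRIE_ROOT[dst >> 16]        # lookup 1: top 16 bits
    node2 = node1[(dst >> 8) & 0xFF]     # lookup 2: next 8 bits
    leaf = node2[dst & 0xFF]             # lookup 3: last 8 bits
    return ADJACENCY[leaf]               # lookup 4: adjacency table

def label_lookup(label: int):
    return LFIB[label]                   # one exact-match lookup

dst = int.from_bytes(bytes([192, 0, 2, 1]), "big")
print(mtrie_lookup(dst))                 # four dependent memory references
print(label_lookup(17))                  # one
```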