Fast FRR Container Configuration
After creating the infrastructure that generates the device configuration files within netlab (not in an Ansible playbook), it was time to apply it to something other than Linux containers. FRR containers were the obvious next target.
netlab uses two different mechanisms to configure FRR containers:
- Data-plane features are configured with bash scripts using ip commands and friends.
- Control-plane features are configured with FRR's vtysh.
I wanted to replace both with bash scripts that could be started with the docker exec command.
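Both mechanisms can be driven through docker exec; here's a minimal sketch of what that might look like (the container name frr-1, interface names, and addresses are made up for illustration):

```shell
# Data-plane setup: plain bash with ip commands inside the container
docker exec frr-1 bash -c '
  ip link set dev eth1 up
  ip addr add 10.1.0.1/30 dev eth1
'

# Control-plane setup: feed configuration commands to vtysh
docker exec frr-1 vtysh \
  -c 'configure terminal' \
  -c 'router ospf' \
  -c ' network 10.1.0.0/30 area 0'
```

The same pattern works for any command-line configuration: once the script exists, no automation framework is needed to deliver it to the container.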
Opinion: Impact of AI on Networking Engineers
A friend of mine sent me a series of questions that might also be on your mind (unless you’re lucky enough to live under a rock or on a different planet):
I wanted to ask you how you think AI will affect networking jobs. What’s real and what’s hype?
Before going into the details, let’s make a few things clear:
Worth Reading: A Tech Career in 2026
There’s no “networking in 20xx” video this year, so this insightful article by Anil Dash will have to do ;) He seems to be based in Silicon Valley, so keep in mind the Three IT Geographies, but one cannot beat advice like this:
So much opportunity, inspiration, creativity, and possibility lies in applying the skills and experience that you may have from technological disciplines in other realms and industries that are often far less advanced in their deployment of technologies.
Lab: VXLAN Bridging with EVPN Control Plane
In the previous VXLAN labs, we covered the basics of Ethernet bridging over VXLAN and a more complex scenario with multiple VLANs.
Now let’s add the EVPN control plane into the mix. The data plane (VLANs mapped into VXLAN-over-IPv4) will remain unchanged, but we’ll use EVPN (a BGP address family) to build the ingress replication lists and MAC-to-VTEP mappings.
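In FRR syntax, enabling the EVPN address family on an existing IBGP session might look like the following sketch (the AS number and neighbor address are illustrative, not taken from the lab):

```
router bgp 65001
 neighbor 10.0.0.2 remote-as 65001
 !
 address-family l2vpn evpn
  neighbor 10.0.0.2 activate
  advertise-all-vni
 exit-address-family
```

With advertise-all-vni configured, FRR originates EVPN routes for the locally-defined VNIs, replacing the static ingress replication lists used in the previous labs.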
Deploy Partially-Configured Training Labs with netlab
Imagine you want to use netlab to build training labs, like the free BGP labs I created. Sometimes, you want to give students a device to work on while the other lab devices are already configured, just waiting for the students to get their job done.
For example, in the initial BGP lab, I didn’t want any BGP-related configuration on RTR while X1 would already be fully configured – when the student configures BGP on RTR, everything just works.
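A topology along these lines could use netlab's unprovisioned group to skip a device during initial configuration; treat this as a sketch and check the netlab groups documentation for exact semantics (node names match the lab, everything else is illustrative):

```yaml
# Sketch: RTR stays unconfigured, X1 is fully provisioned
defaults.device: frr

groups:
  unprovisioned:        # members are skipped during initial device configuration
    members: [ rtr ]

nodes:
  rtr:
    module: [ bgp ]
    bgp.as: 65000
  x1:
    module: [ bgp ]
    bgp.as: 65100

links: [ rtr-x1 ]
```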
MUST WATCH: BGP: the First 18 Years
If you’re at all interested in the history of networking, you simply MUST watch the BGP at 18: Lessons In Protocol Design lecture by Dr. Yakov Rekhter recorded in 2007 (as you can probably guess from the awful video quality) (HT: Berislav Todorovic via LinkedIn).
Why Doesn't netlab Use X for Device Configuration Templates?
Petr Ankudinov made an interesting remark when I complained about how much time I wasted waiting for Cisco 8000v to boot when developing netlab device configuration templates:
For Arista part - just use AVD with all templates included and ANTA for testing. I was always wondering why netlab is not doing that.
Like any other decent network automation platform, netlab uses a high-level data model (lab topology) to describe the network. That data model is then transformed into a device-level data model, and the device-level data structures are used to generate device configurations.
Lab: Distributing Level-2 IS-IS Routes into Level-1 Areas
One of the major differences between OSPF and IS-IS is their handling of inter-area routes. Non-backbone OSPF intra-area routes are copied into the backbone area and later (after the backbone SPF run) copied into other areas. IS-IS does not copy level-2 routes into level-1 areas; level-1 areas (by default) behave like totally stubby OSPF areas, with level-1 routers using the Attached (ATT) bit set in the LSPs of level-1-2 routers in the same area to generate a default route.
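To override the default behavior, level-1-2 routers can leak selected level-2 routes into level-1. An IOS-style sketch (the NET, ACL number, and prefix are illustrative; other implementations use different syntax):

```
router isis
 net 49.0001.0000.0000.0001.00
 redistribute isis ip level-2 into level-1 distribute-list 100
!
! Extended ACL matching prefix 10.0.42.0/24 for route leaking
access-list 100 permit ip host 10.0.42.0 host 255.255.255.0
```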
On MPLS Paths, Tunnels and Interfaces
One of my readers attempted to implement a multi-vendor multicast VPN over MPLS but failed. As a good network engineer, he tried various duct tapes but found that the only working one was a GRE tunnel within a VRF, resulting in considerable frustration. In his own words:
How is a GRE tunnel different compared to an MPLS LSP? I feel like conceptually, they kind of do the same thing. They just tunnel traffic by wrapping it with another header (one being IP/GRE, the other being MPLS).
Instead of going down the “how many angels are dancing on this pin” rabbit hole (also known as “Is MPLS tunneling?”), let’s focus on the fundamental differences between GRE/IPsec/VXLAN tunnels and MPLS paths.
How Moving Away from Ansible Made netlab Faster
TL&DR: Of course, the title is clickbait. While the differences are amazing, you won’t notice them in small topologies or when using bloatware that takes minutes to boot.
Let’s start with the background story: due to the (now fixed) suboptimal behavior of bleeding-edge Ansible releases, I decided to generate the device configuration files within netlab (previously, netlab prepared the device data, and the configuration files were rendered in an Ansible playbook).
As we use bash scripts to configure Linux containers, it makes little sense (once the bash scripts are created) to use an Ansible playbook to execute a docker exec container script or ip netns exec container script command. netlab release 26.01 runs the bash scripts to configure Linux, Bird, and dnsmasq containers directly within the netlab initial process.
Now for the juicy part.