What Is a Spine-and-Leaf VXLAN BGP EVPN Fabric Topology and How Does It Work?

The traditional campus design topology has reached the limits of the scalability and performance requirements of today’s network architectures. One of the technologies applied to solve this problem is the spine-and-leaf VXLAN BGP EVPN fabric architecture.

The spine-and-leaf topology provides a strong backbone network that meets the demand for high-density, multi-gigabit traffic. This approach allows simultaneous access to high-performance databases, playback of 4K-resolution media content, and transfer of terabyte-scale files without delay or slowdown, even when thousands of users access the network at the same time.

Spine-and-leaf architecture (an unusual name for a technology concept in the IT industry) overcomes the scalability and functional limits of traditional campus networks with a simple architectural approach. To understand spine-and-leaf architecture in today’s network design concepts, we need basic information about its components. VXLAN BGP EVPN encapsulates Layer 2 frames in Layer 3 packets and, for this purpose, uses the L2VPN EVPN address family (AFI) in the BGP protocol.

The spine-and-leaf architecture can carry various technologies and is not limited to VXLAN. Another use of spine-and-leaf architecture is Cisco’s Application Centric Infrastructure (ACI), which uses COOP (Council of Oracle Protocol) instead of VXLAN’s mechanisms to perform endpoint (IP) mapping and location announcement.

The Spine-and-Leaf Architecture

A symmetric architecture is predictable: you can easily visualize the traffic pattern in a spine-and-leaf fabric. The path is always leaf – spine – leaf.

Traffic starts at the source leaf, which directs it toward a spine; the spine then forwards it to the destination leaf. Each source endpoint (a server, workstation, or any other device) is just two hops away from its destination. The figure below illustrates this.

First hop: source leaf up to the spine

Second hop: spine down to the destination leaf

The Spine and Leaf Layers

A spine-and-leaf topology consists of two layers.

The spine layer is the junction point of the leaves. The spines reflect all routing information to their clients (in this case, the leaves). In the spine layer, you configure BGP EVPN so that the spines play the role of route reflectors, and you designate them as rendezvous points for the underlay multicast traffic. The spine layer can be compared to the distribution or aggregation layer in a three-tier design, but it does more than aggregate Layer 2 components.
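To make the spine’s role concrete, here is a minimal NX-OS sketch of a spine acting as a BGP EVPN route reflector and multicast rendezvous point. The AS number, addresses, and multicast range are hypothetical placeholders, not values from this article:

! Hypothetical NX-OS spine sketch: EVPN route reflector plus multicast RP
feature bgp
feature pim

router bgp 65000
  ! iBGP session toward a leaf, sourced from the spine loopback
  neighbor 1.0.0.10 remote-as 65000
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
      ! the spine reflects EVPN routes between the leaves
      route-reflector-client

! act as the rendezvous point (RP) for underlay multicast
ip pim rp-address 1.0.0.100 group-list 239.1.0.0/16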

The leaf layer gives all endpoints access to the fabric and makes the network’s routing decisions. Every leaf is a Layer 3 core. In a three-tier design, the core layer makes all the routing decisions; the core is usually a single active device, with additional cores set up as standby nodes through a first-hop redundancy protocol (FHRP). Be careful: this is not true of the leaves in VXLAN BGP EVPN.

One of the most powerful features in VXLAN BGP EVPN is the anycast gateway, which lets the leaf layer act as one large active core switch. Each leaf can forward traffic to its destination itself, so you are no longer limited to a single active Layer 3 core. In a VXLAN fabric, every leaf is an active core, which delivers the performance and scalability that today’s data-center networks demand.
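As an illustration, a distributed anycast gateway on an NX-OS leaf is enabled roughly as in the sketch below. The gateway MAC, VLAN, and subnet are invented for the example, and exact feature prerequisites vary by platform and release:

! Hypothetical NX-OS leaf sketch: distributed anycast gateway
feature interface-vlan
! one virtual gateway MAC shared by every leaf in the fabric
fabric forwarding anycast-gateway-mac 0000.2222.3333

interface Vlan100
  no shutdown
  ip address 10.1.100.1/24
  ! every leaf answers locally for this gateway IP and MAC
  fabric forwarding mode anycast-gateway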

Redundancy in Spine-and-Leaf Topology

As in any production environment, redundancy is essential. A fabric must have at least two spines to meet redundancy requirements.

The same rule applies in spine-and-leaf architecture: every leaf connects to every spine, with at least one link from each leaf to each spine. The following figure shows a failure-handling scenario in a four-leaf/two-spine topology.

Leaf Redundancy

Now let’s look at redundancy in the leaf layer. Its redundancy model differs slightly from the spine layer’s, since the leaves connect all of the network’s endpoints: access switches, servers, and so on.

To clarify the discussion, it is worth briefly covering vPC (virtual PortChannel) on the Cisco Nexus platform. vPC provides the required leaf redundancy by combining two independent leaves into one vPC domain. Suppose you have a server with dual NICs. Since the leaf layer is where you connect all your end devices, access switches, and servers to the fabric, redundancy must extend to the end device, in this case the server. You achieve this by configuring a vPC toward the end host: the leaf layer then provides redundancy to the server. If you lose Leaf-01, Leaf-02 keeps the server’s connection alive so the network keeps functioning.
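For illustration, a bare-bones vPC pairing on Cisco Nexus might look like the following sketch; the domain ID, keepalive addresses, and port-channel numbers are hypothetical:

! Hypothetical NX-OS sketch: pairing Leaf-01 and Leaf-02 in a vPC domain
feature vpc
feature lacp

vpc domain 1
  ! keepalive runs over a separate path between the two leaves
  peer-keepalive destination 192.168.0.2 source 192.168.0.1

! peer link between the two leaves
interface port-channel1
  switchport mode trunk
  vpc peer-link

! dual-homed server port, one member leg on each leaf
interface port-channel10
  switchport mode trunk
  vpc 10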

The Underlay Network

When I first started learning VXLAN, it took me a while to wrap my head around underlay and overlay. Explaining it to my colleagues and clients was also a challenge. Luckily, I found the perfect analogy.

Let’s liken the VXLAN underlay to a roller coaster. The roller coaster has rails, engines, and brakes: that is its infrastructure. These underlay components are the vital pieces that host and serve everything built on top of them.

Now compare this with VXLAN. In the VXLAN underlay, the physical links between leaves and spines (the rails) carry customer traffic (the coaster cars and their riders) across the fabric to its destination. An essential aspect of the underlay is equal-cost multipath (ECMP) routing on the leaf-to-spine links. ECMP uses all active leaf-to-spine links for traffic forwarding, which is somewhat similar to Layer 2 link aggregation, but done with a Layer 3 architecture.
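For example, a leaf’s uplinks might be configured as routed point-to-point links, as in this hypothetical NX-OS sketch. With equal OSPF costs, routes through both spines are installed together, and ECMP load-shares across them:

! Hypothetical leaf uplinks: one routed link to each spine
interface Ethernet1/1
  description to Spine-01
  no switchport
  ip address 10.0.1.1/31
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0

interface Ethernet1/2
  description to Spine-02
  no switchport
  ip address 10.0.2.1/31
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
! equal-cost routes via both spines are used simultaneously (ECMP)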

The Overlay Network

The overlay (the cars and their riders) is where VXLAN’s advantage over traditional networks shows, especially in multi-tenant architectures. Using the multi-tenant model, you can run different customer networks on a single fabric. A tenant is a virtual network inside the same VXLAN fabric, which is one of the main advantages of software-defined networking (SDN). In the roller-coaster analogy, the tenants are the coaster cars: each car (tenant) carries a group of passengers (think of the passengers as VLANs), and only passengers (VLANs) inside the same car (tenant) can talk to each other.

Earlier, I mentioned ECMP and how each leaf uses its links to the different spines. ECMP lets the coaster car (tenant) ride both rails (leaf-to-spine links) at once, so it moves faster. If one rail (link) fails, the car still has another rail to continue along the track. The figure below shows this.

Spine-and-Leaf Traffic Flow

Now that we have a clearer picture of the VXLAN fabric components, let’s see how the fabric works and what VXLAN needs in the underlay to communicate.

Broadcast, Unknown Unicast, and Multicast (BUM) Traffic

Since L2 frames in VXLAN are encapsulated in L3, broadcast stops at the fabric boundary. Broadcast is how a network learns about its connected devices, so how does VXLAN learn that information when broadcast has been stopped? The answer is multicast. BUM traffic covers the three types of messages a network uses to communicate: broadcast, unknown unicast, and multicast. Multicast is the replacement for broadcast that can distribute information over L3.

Underlay Multicast

Now that you know multicast replaces the broadcast model, you should also know that multicast must be implemented in the underlay. How is this done? You assign a multicast group to each VXLAN Network Identifier (VNI); every VNI has its own multicast group. Multicast messages are sent to a rendezvous point (RP), which you usually place on the spines.
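On Cisco NX-OS, this VNI-to-group mapping typically lives on the NVE tunnel interface (introduced later in this article). The sketch below is a minimal illustration; the VNIs, group addresses, and RP address are invented for the example:

! Hypothetical sketch: map each VNI to an underlay multicast group
interface nve1
  member vni 10100
    ! BUM traffic for VNI 10100 is carried by this underlay group
    mcast-group 239.1.1.100
  member vni 10200
    mcast-group 239.1.1.200

! point PIM at the rendezvous point (usually placed on the spines)
ip pim rp-address 1.0.0.100 group-list 239.1.1.0/24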

Underlay Routing

Underlay routing is essential for building the VXLAN fabric. A dynamic routing protocol such as OSPF or IS-IS serves as the Interior Gateway Protocol (IGP), establishing neighbor adjacencies over all the leaf-to-spine physical links.

Once that mechanism is active, you run BGP on top of OSPF or IS-IS, and BGP EVPN becomes realized and operational. You create a loopback address on each switch (spine and leaf) and advertise it into OSPF so it can serve as the BGP peering address. How do you do this? Pay close attention.

The first step is to bring up OSPF or IS-IS on the leaf-to-spine links, configure a loopback interface on each device, and advertise that loopback into the IGP.
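That first step might look like the following hypothetical NX-OS sketch for Leaf-01; the process name, area, and loopback address are invented for the example:

! Hypothetical underlay IGP sketch for Leaf-01
feature ospf

router ospf UNDERLAY

! loopback that will later source the BGP session
interface loopback0
  ip address 1.0.0.10/32
  ip router ospf UNDERLAY area 0.0.0.0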

The next step is to peer BGP over the loopback interfaces, on top of the OSPF underlay routing. Leaf-01 has two valid paths to the spines, and its BGP sessions to them are established over the loopbacks. The peerings are as follows:

BGP Neighbors    Leaf-01 (1.0.0.10)    Spine-01 (1.0.0.100)
BGP Neighbors    Leaf-01 (1.0.0.10)    Spine-02 (1.0.0.200)
BGP Neighbors    Leaf-02 (1.0.0.20)    Spine-01 (1.0.0.100)
BGP Neighbors    Leaf-02 (1.0.0.20)    Spine-02 (1.0.0.200)
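Expressed as configuration, Leaf-01’s side of its two peerings might look like this sketch (the AS number is invented; the addresses come from the list above):

! Hypothetical sketch: Leaf-01 peers BGP EVPN with both spines
nv overlay evpn
feature bgp

router bgp 65000
  router-id 1.0.0.10
  neighbor 1.0.0.100 remote-as 65000
    ! source the session from the loopback advertised in OSPF
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
  neighbor 1.0.0.200 remote-as 65000
    update-source loopback0
    address-family l2vpn evpn
      send-community extended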

Once the loopback interfaces are reachable and the BGP peerings come up, a connection is established between the devices. With that peering in place, you have the BGP infrastructure needed to carry VXLAN EVPN traffic.

Overlay Routing

You need a basic configuration before the cars can move along the track. In VXLAN terms, you must prepare everything from the underlay that runs VXLAN up to the fabric tenants on top of it, which form the overlay.

For this reason, VXLAN must run overlay routing through its own mechanism. We create the EVPN neighborship between the leaves; so why not with the spines? Because the spines do not encapsulate or de-encapsulate L2 frames into or out of VXLAN. The spines serve as aggregation points and BGP route reflectors.

You create a Network Virtualization Edge (NVE) interface and use it to encapsulate and de-encapsulate L2 frames. VXLAN uses the source loopback interface dedicated to the NVE to build a VXLAN tunnel between the leaves.

Note that the loopback assigned to the NVE does not have to be the same one used for BGP peering. You could reuse it, but it is better to have a dedicated loopback for the VXLAN NVE. All client traffic entering the fabric at a leaf must pass through the tunnel, so traffic travels between the leaves through the tunnel across the EVPN fabric.

Of course, as mentioned, the spines take no part in the tunneling mechanism and only see the source and destination leaves. The figure below shows Loopback0 as the BGP EVPN peering loopback. Here, you can create another loopback for the VXLAN tunnels between the leaves…

Let’s select Loopback1 as the NVE loopback. Once we do this, we can bring up the NVEs. The Loopback1 interface address is the VTEP, the VXLAN Tunnel Endpoint.
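Putting the pieces together, the dedicated VTEP loopback and the NVE interface might be configured roughly as in this final sketch; the Loopback1 address is hypothetical, and host-reachability protocol bgp tells the NVE to learn remote VTEPs from BGP EVPN rather than flood-and-learn:

! Hypothetical sketch: dedicated VTEP loopback and NVE interface on Leaf-01
feature nv overlay

! Loopback0 stays the BGP peering address; Loopback1 is the VTEP
interface loopback1
  ip address 2.0.0.10/32
  ip router ospf UNDERLAY area 0.0.0.0

interface nve1
  no shutdown
  source-interface loopback1
  ! learn remote VTEPs through BGP EVPN instead of flood-and-learn
  host-reachability protocol bgp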