
What is a Distributed Switch?

In this article, we'll discuss what a distributed switch is.

Distributed Switches

[Figure: three VMs running on three ESXi hosts (three physical servers), with a single vSphere Distributed Switch spanning the hosts between the VMs and the ESXi hosts.]

A vSphere Distributed Switch (vDS) acts as a single switch across all associated hosts in a data center and provides centralized provisioning, administration, and monitoring of virtual networks. You configure a vDS on the vCenter Server Appliance (vCSA), and the same settings are then added to all ESXi hosts that are associated with the switch. This lets virtual machines maintain a consistent network configuration as they move (or migrate) from one host to another. Each vCenter Server system can support up to 128 vDSs, and each vDS can manage up to 2000 hosts. Each vDS can also support up to 10,000 port groups.
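To make the centralized-provisioning model concrete, here is a minimal sketch that creates a vDS through the vCenter API using the open-source pyVmomi SDK. The hostname, credentials, and switch and uplink names are placeholders, and the snippet assumes the first inventory object under the root folder is the target datacenter.

```python
# Minimal pyVmomi sketch: create a vDS on vCenter. All names and
# credentials below are placeholders, not values from this article.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Assume the first child of the root folder is the target datacenter;
# its networkFolder owns distributed switches.
dc = content.rootFolder.childEntity[0]

# Describe the switch: a name and the uplink ports each member host gets.
config = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
config.name = "dvs-example"
config.uplinkPortPolicy = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
    uplinkPortName=["uplink1", "uplink2"])

# vCenter (the management plane) creates the switch; a host proxy switch
# is pushed to each ESXi host that is later added as a member.
task = dc.networkFolder.CreateDVS_Task(
    vim.DistributedVirtualSwitch.CreateSpec(configSpec=config))
```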

A vDS uses the physical NICs of the ESXi hosts on which the VMs are running to connect them to the external network.

With a vDS, policies can be set for each individual port, not just for whole port groups. You can also block all ports in a distributed port group (though this might disrupt the normal network operations of the hosts or virtual machines using the ports).
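As a sketch of what such a policy looks like in the API, the following pyVmomi fragment creates a distributed port group whose default policy blocks every port. Here `dvs` stands for an already-retrieved vim.DistributedVirtualSwitch object, and the port group name is invented for illustration.

```python
from pyVmomi import vim

# Sketch: a distributed port group whose default port policy blocks all
# ports. "dvs" is an already-retrieved vim.DistributedVirtualSwitch;
# the port group name is a placeholder.
pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
pg_spec.name = "pg-blocked-example"
pg_spec.type = vim.dvs.DistributedVirtualPortgroup.PortgroupType.earlyBinding
pg_spec.numPorts = 8

port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_config.blocked = vim.BoolPolicy(inherited=False, value=True)  # block every port
pg_spec.defaultPortConfig = port_config

task = dvs.AddDVPortgroup_Task([pg_spec])
```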

Whereas a standard switch contains both data and management planes, and an administrator configures and maintains each switch individually, a vDS eases this management burden by treating the network as a pooled resource. Individual vSwitches are abstracted into one large vDS spanning multiple hosts across the data center. The data plane remains local to each vDS, but the management plane (the vCenter Server in this case) is centralized, which streamlines and simplifies VM network configuration. The vDS also provides enhanced network monitoring and troubleshooting capabilities, including templates to enable backup and restore for virtual networking configuration.

The data plane section of the vDS is called a host proxy switch. The networking configuration that you create on a vCenter Server Appliance (the management plane) is automatically pushed down to all proxy switches (the data plane). Proxy switches support:

  • network traffic between virtual machines on any hosts that are members of the distributed virtual switch
  • network traffic between a virtual machine that uses a distributed virtual switch and a virtual machine that uses a VMware standard virtual switch
  • network traffic between a virtual machine and a remote system on a physical network connected to the ESXi host.

NSX-V requires the use of a vDS, and NSX licensing entitles customers to it. NSX-T comes with its own vDS type: the N-VDS, or NSX Managed Virtual Distributed Switch. As we’ll see later in the course, NSX-T can be deployed without a vCenter Server. This means that an N-VDS, unlike a vDS, does not depend on vCenter and can be used in a variety of cloud environments. It can provide network services to VMs running on both ESXi hypervisors and KVM hypervisors (KVM is built into the Linux kernel).

On an ESXi hypervisor, the N-VDS is implemented via the NSX-vSwitch module that is loaded into the hypervisor’s kernel. On a KVM hypervisor, the N-VDS is implemented by the Open vSwitch (OVS) module for the Linux kernel.
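On the KVM side, the same kind of bridging that OVS provides to the N-VDS can be inspected directly with the standard ovs-vsctl tool. The sketch below drives it from Python; the bridge and port names are illustrative only, not the names NSX-T actually creates.

```python
# Sketch: driving Open vSwitch with its standard ovs-vsctl CLI from Python.
# The bridge/port names are illustrative; NSX-T manages its own bridges.
import subprocess

def ovs(*args: str) -> str:
    """Run an ovs-vsctl subcommand and return its stdout."""
    result = subprocess.run(["ovs-vsctl", *args],
                            check=True, capture_output=True, text=True)
    return result.stdout

ovs("--may-exist", "add-br", "br-demo")             # create a bridge (idempotent)
ovs("--may-exist", "add-port", "br-demo", "vnet0")  # attach a VM's tap interface
print(ovs("show"))                                  # dump bridges and ports
```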

The primary purpose of an N-VDS is to forward the traffic that runs on transport nodes. Transport nodes are hypervisor hosts and NSX Edges that will participate in an NSX-T overlay. When you add a transport node to a transport zone (which defines the span of an NSX-V or NSX-T virtual network), the N-VDS associated with the transport zone is installed on the transport node. Each transport zone supports a single N-VDS which must have the same name as the transport zone. There are two types of transport zone: an overlay transport zone and a VLAN transport zone.

Remember that an N-VDS:

  • can only attach to a single overlay transport zone
  • can only attach to a single VLAN transport zone
  • can attach to both an overlay transport zone and a VLAN transport zone at the same time; in that case, both transport zones and the N-VDS will have the same name.

Multiple N-VDSs and vDSs can coexist on a transport node; however, a physical NIC can only be associated with a single N-VDS or vDS.
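As a sketch of how a transport zone is created in practice, the snippet below posts to what I assume is the NSX-T 2.x Manager REST API endpoint for transport zones; the manager address, credentials, and names are placeholders. Following the naming rule above, the N-VDS (host switch) name is set to match the transport zone name.

```python
# Sketch: creating an overlay transport zone via the NSX-T Manager REST API.
# Assumes the NSX-T 2.x /api/v1/transport-zones endpoint; the manager
# address, credentials, and names are placeholders.
import requests

NSX_MGR = "https://nsx-mgr.example.com"

body = {
    "display_name": "tz-overlay",
    "host_switch_name": "tz-overlay",  # N-VDS name matches the transport zone
    "transport_type": "OVERLAY",       # or "VLAN" for a VLAN transport zone
}
resp = requests.post(f"{NSX_MGR}/api/v1/transport-zones",
                     json=body,
                     auth=("admin", "secret"),
                     verify=False)  # lab only: skips certificate validation
resp.raise_for_status()
print("Created transport zone:", resp.json()["id"])
```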

[Figure: Logical view: nine VMs attached to a single logical switch, with VM1 communicating with VM5. Physical view: three racks, each with two routers above three transport nodes; every transport node hosts a TEP and one VM. VM1 runs on transport node 1 (TEP1) in rack 1, and VM5 on transport node 5 (TEP5) in rack 2; two physical switches interconnect the racks. Traffic from VM1 flows from TEP1 through rack 1's router, across a switch to rack 2's router, then to TEP5 and on to VM5.]

In the upper part of the diagram above, the logical view consists of nine virtual machines that are attached to the same logical switch, forming a virtual broadcast domain. The physical representation, at the bottom, shows that these virtual machines are running on hypervisors spread across three racks in a data center. Each hypervisor is an NSX-T transport node equipped with a tunnel endpoint (TEP). The TEPs are configured with IP addresses, and the physical network infrastructure provides IP connectivity between them. An NSX Controller (which we’ll discuss later in the course) distributes the IP addresses of the TEPs so they can set up tunnels with their peers. The example shows VM1 sending a frame to VM5. In the physical representation, this frame is transported through a tunnel from transport node HV1 to transport node HV5.

The GENEVE overlay encapsulation protocol used by NSX-T provides better throughput (i.e., more data transferred in a given amount of time) and uses fewer CPU resources than older encapsulation protocols such as VXLAN, which NSX-V uses.
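For a feel of what this encapsulation looks like on the wire, here is a sketch built with scapy's GENEVE contrib layer: the original VM1-to-VM5 frame rides inside a UDP datagram (GENEVE's registered UDP port is 6081) exchanged between the two TEPs. All addresses and the VNI are invented for illustration, and the field names assume scapy's contrib implementation.

```python
# Sketch of a GENEVE-encapsulated frame between two TEPs, built with
# scapy's GENEVE contrib layer. Every address and the VNI are invented.
from scapy.all import Ether, IP, UDP, ICMP
from scapy.contrib.geneve import GENEVE

# Inner frame: what VM1 actually sends to VM5 on the logical switch.
inner = (Ether(src="00:50:56:00:00:01", dst="00:50:56:00:00:05") /
         IP(src="10.0.0.1", dst="10.0.0.5") / ICMP())

# Outer headers: the tunnel between TEP1 and TEP5 on the physical network.
outer = (Ether() /
         IP(src="192.168.100.1", dst="192.168.100.5") /  # TEP1 -> TEP5
         UDP(dport=6081) /                  # GENEVE's registered UDP port
         GENEVE(vni=5001, proto=0x6558))    # segment ID; Ethernet payload

packet = outer / inner
packet.show()  # print the layered headers
```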
