LACP and vSphere (ESXi) hosts: not a very good marriage

I receive a lot of questions from customers about whether or not they should implement LACP, so without further ado:

In this blog I’m going to talk about whether it is a good idea to configure LACP between your ESXi hosts and the physical switched network. The Link Aggregation Control Protocol adds control-plane features to a link aggregation group (LAG), ensuring a stable connection between two network devices over multiple physical links by exchanging LACPDUs. Sounds good, right? And yes, from a network perspective it surely does! But from a vSphere ESXi host perspective there are other “things” to think about, which I will cover in the following chapters.

ESXi virtual switches and LACP

ESXi has two options when it comes to virtual networking: the vSphere Standard Switch (VSS) and the vSphere Distributed Switch (DVS). The VSS is local to ESXi, which means it can only be managed from the ESXi host itself. The DVS, on the other hand, can only be managed through the vCenter Server; its configuration is distributed to all connected ESXi hosts using host proxy switches. The DVS offers several improvements over a VSS, for example LACP support. The VSS does not support LACP.

When configuring LACP you have to create the LAG on the DVS and manually add the physical NICs (pNICs or vmnics) of each individual host to the LAG uplinks (or you can do it scripted, as you should).

The number of uplink ports in a LAG has to be configured globally, and that number of LAG uplinks is distributed to all host proxy switches. This means that when two LAG uplinks are configured at the DVS/vCenter Server level, every host connected to that DVS will receive two LAG uplinks; the connected vSphere ESXi hosts cannot deviate from that number.
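
The physical switch side has to be configured to match, with a port channel in LACP active (or passive) mode on the ports facing the host. A minimal sketch, assuming a Cisco NX-OS style switch; the interface names and port-channel number are just examples:

interface Ethernet1/1
  channel-group 10 mode active
interface Ethernet1/2
  channel-group 10 mode active

And this switch-side configuration has to be repeated for every ESXi host connected to the DVS.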

The LAG as a logical link is handled as a DVS uplink itself, so it shows up in the teaming and failover order of your distributed port groups next to the normal DVS uplinks. Keep in mind that the LAG is meant to be the only active uplink in that teaming configuration; vSphere does not load-balance traffic between standalone DVS uplinks and a LAG.

The benefit is that the LAG as a logical link can utilize all the available bandwidth, and you can add bandwidth by adding physical links. It also helps in case of a failed physical connection: the failure is automatically detected by LACP and the failed link is thrown out of the logical link. This is what we call a Layer 2 high-availability solution: the logical path is controlled by LACP and automatically scales when needed, creating an optimal path between two devices.

Let me clarify the “logical link can utilize all the available bandwidth” feature: LACP uses IP hashing as its load-balancing algorithm, which hashes traffic per flow. This means that a single network flow cannot exceed the bandwidth of a single physical connection.

In the worst case you can end up with two elephant flows and one mouse flow: the two elephant flows can hash to the same physical link and the mouse flow to the other. The two elephant flows then have to share the bandwidth of that one physical connection, resulting in poor network performance, while the mouse flow has plenty of bandwidth available. That’s just the nature of IP hashing.
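
To make this concrete, here is a purely illustrative sketch: assume a simple xor-and-modulo hash over the source and destination IP addresses and two uplinks (the real hash functions in ESXi and in physical switches differ, but they behave the same way per flow). You can run these lines in a bash shell:

echo $(( (0xC0A80A0B ^ 0xC0A80A14) % 2 ))   # elephant flow 192.168.10.11 -> 192.168.10.20: uplink 1
echo $(( (0xC0A80A0D ^ 0xC0A80A16) % 2 ))   # elephant flow 192.168.10.13 -> 192.168.10.22: also uplink 1
echo $(( (0xC0A80A0C ^ 0xC0A80A16) % 2 ))   # mouse flow 192.168.10.12 -> 192.168.10.22: uplink 0

Both elephant flows land on the same uplink and have to share it, while the mouse flow has the other uplink all to itself.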

Comparing Virtual Port ID and LACP

Both the DVS and the VSS offer multiple load-balancing options; by default, load balancing based on virtual port ID (sometimes called source-MAC pinning) is used on the VSS and the DVS. It has the same drawback as IP hashing (a single flow is limited to one physical link), but the good news is that with this type of load balancing you do not have to configure the physical switch for Layer 2 availability (LACP/IP hash): a virtual machine is pinned to an uplink and stays there as long as no failure occurs. When a pNIC fails, the VM is pinned to another available pNIC and (when configured properly) a RARP packet is sent to inform the physical switch, so it can learn the MAC address on the new interface, minimizing the outage time.
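
As a side note: on a standard switch you can check (and change) the teaming policy straight from the host with esxcli. A minimal sketch, assuming a vSwitch named vSwitch0 (option names may differ slightly between ESXi releases):

esxcli network vswitch standard policy failover get -v vSwitch0
esxcli network vswitch standard policy failover set -v vSwitch0 -l portid

For a DVS the teaming policy is configured on the distributed port groups in vCenter instead.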

So let’s compare LACP/IP hash with source-MAC pinning/virtual port ID: both offer the possibility to utilize the available bandwidth and both offer a form of physical interface resiliency. The downside is that for LACP you need to manually configure the LACP bundles on the physical switch AND on the DVS AND manually add the pNICs to the LAG (read: a lot of manual, error-prone configuration).

Comparing LBT and LACP

A DVS offers the possibility to use the Load Based Teaming (LBT) load-balancing option, also known as “Route based on physical NIC load” (= a good word for Scrabble). You can see LBT as the enhanced virtual port ID load-balancing option: it acts the same, but with one major difference: it monitors the bandwidth utilization every 30 seconds and redistributes the MAC addresses (VMs) over the available pNICs when the utilization of an uplink is above 75%. This spreads the bandwidth utilization evenly over all available pNICs.

Monitoring LACP on a vSphere ESXi host

From a physical switch perspective you can use a command like

show port-channel brief

to view the status of a LACP port channel:

[Screenshot: result of the show port-channel command]
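
The exact command depends on the switch vendor; on a Cisco NX-OS style switch, for example, a similar overview comes from:

show port-channel summary
show lacp neighbor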

But from a vSphere ESXi perspective it’s a little bit harder.
The LACP configuration is provisioned from vCenter and distributed to the host proxy switch on the ESXi host, but the LACP port-channel status has to be monitored on the ESXi host itself.

You cannot check the LACP status from the GUI; you have to SSH into the ESXi host (which isn’t a best practice in the first place) and then execute the following command:

esxcli network vswitch dvs vmware lacp status get

and get a result like this:

[Screenshot: result of the esxcli lacp status get command]

Not a very nice overview if you ask me.
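
Depending on the ESXi build, the same esxcli namespace also offers a config and a stats view, which can help when troubleshooting:

esxcli network vswitch dvs vmware lacp config get
esxcli network vswitch dvs vmware lacp stats get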

You can state that monitoring an LACP LAG isn’t easy with VMware ESXi. The reason behind this is that LACP was only integrated into ESXi because network admins kept asking VMware to implement it; it has been implemented as an afterthought, as VMware ESXi offers good alternatives.

Conclusion

With the load-balancing algorithms available in vSphere there is no need for an LACP configuration, as VMware offers good (and sometimes better) options from a configuration, utilization and availability perspective.

So why do some customers still use LACP, you might ask? Usually it is a combination of a lack of (VMware) knowledge and the good experiences network admins have had with LACP in the past.

LACP between switches and bare-metal servers is still a very good option, but VMware offers enhancements that remove the need for LACP in vSphere environments.
