Using Multi-NIC vMotion is a No Brainer

Multi-NIC vMotion was released more than three years ago, in August 2011, with vSphere 5.0. Even though it has been available for a long time, I wanted to write a post highlighting how useful a feature it is (especially on 1-Gig networks) and showing how simple it is to set up (it’s a no-brainer).

With Multi-NIC vMotion, hosts can use (2) or more NICs to load balance vMotion traffic with no additional configuration needed on the physical switches (no LACP or EtherChannel). This allows each vMotion operation to use the total aggregate bandwidth of all combined links, even when migrating a single VM.

For example, I tested a vMotion of a VM configured with 20 GB of RAM on a 1-Gig network. Using (2) 1-Gig NICs was about 84% faster than using just one (not quite double the speed).
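To put that 84% figure in perspective, here is a quick back-of-envelope sketch in Python. The numbers are idealized wire-rate math only (a real vMotion adds memory pre-copy iterations and protocol overhead, so actual times will be longer), but it shows why two NICs don’t quite double the speed:

```python
# Back-of-envelope transfer-time estimate (illustrative only; real
# vMotion times include pre-copy passes and protocol overhead).
LINK_GBPS = 1.0
RAM_GB = 20
payload_gbits = RAM_GB * 8                          # 20 GB -> 160 gigabits

t_one_nic = payload_gbits / (1 * LINK_GBPS)         # ideal: 160 s
t_two_nic_ideal = payload_gbits / (2 * LINK_GBPS)   # ideal: 80 s (100% faster)

# Observed: ~84% higher effective throughput, i.e. the second link
# contributes roughly 0.84 of a full link after overhead.
t_two_nic_observed = payload_gbits / (1.84 * LINK_GBPS)

print(round(t_one_nic), round(t_two_nic_ideal), round(t_two_nic_observed))
```

So the second NIC buys you most, but not all, of a second link’s worth of throughput.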

Multi-NIC vMotion uses intelligent load balancing at the hypervisor layer, similar in concept to the multipath load balancing used by the iSCSI software initiator.
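Conceptually (this is a simplified sketch, not VMware’s actual implementation), the hypervisor opens a vMotion stream on each VMkernel NIC and deals the migration’s data across them, which is why even a single VM’s migration can fill every link:

```python
from itertools import cycle

def spread_chunks(num_chunks: int, vmknics: list[str]) -> dict[str, int]:
    """Deal memory chunks round-robin across vMotion VMkernel NICs.

    Simplified model of hypervisor-level load balancing: each NIC
    carries its share of a single migration's traffic.
    """
    counts = {nic: 0 for nic in vmknics}
    for _, nic in zip(range(num_chunks), cycle(vmknics)):
        counts[nic] += 1
    return counts

print(spread_chunks(10, ["vmk1", "vmk2"]))  # {'vmk1': 5, 'vmk2': 5}
```

Contrast this with switch-side link aggregation (LACP/EtherChannel), which typically hashes a single traffic flow onto one physical link and so would not speed up a single migration.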

The best resource I could find on this topic is Frank Denneman’s site, frankdenneman.nl; Frank is also a co-author of the legendary vSphere Clustering Deepdive book. The image below, from his site, is worth a thousand words in showing how Multi-NIC vMotion works:

[Image: Multi-NIC vMotion diagram from frankdenneman.nl]

Setup and Requirements:

  • To set this up, simply configure one vMotion (VMkernel) port group for each physical NIC you would like to use.
  • In each vMotion port group’s teaming policy, set that port group’s NIC to Active and the remaining vMotion NICs to Standby.
  • All IPs for all vMotion port groups must be on the same subnet and VLAN (not routed).
  • It is common for these vMotion port groups to share NICs and vSwitches with the management port group.
  • This can technically be set up without putting the host into maintenance mode and without downtime (consider your organization’s change control policies).
  • This is supported on both Standard and Distributed vSwitches.
  • Supports up to (16) 1-Gig NICs or (4) 10-Gig NICs per host.
  • The same concurrent vMotion limits apply from vCenter regardless of the number of NICs used (4 concurrent for 1-Gig networks and 8 for 10-Gig).
  • Consider the benefits for DRS, maintenance mode, and Enhanced vMotion (a feature in Essentials Plus 5.1+ that lets you move a VM’s compute and storage to a different host simultaneously using the web client).
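The layout the steps above describe can be sketched as a small data structure: two port groups with mirrored Active/Standby teaming and VMkernel IPs on one subnet. The names (port groups, vmnics, addresses) are made up for illustration — this is not a vSphere API, just a way to sanity-check the two rules:

```python
from ipaddress import ip_interface

# Illustrative layout (hypothetical names): each port group has exactly
# one Active NIC, the other vMotion NIC set to Standby, and all
# VMkernel IPs live on the same non-routed subnet.
port_groups = {
    "vMotion-01": {"active": ["vmnic2"], "standby": ["vmnic3"],
                   "vmk_ip": "10.0.50.11/24"},
    "vMotion-02": {"active": ["vmnic3"], "standby": ["vmnic2"],
                   "vmk_ip": "10.0.50.12/24"},
}

def validate(pgs: dict) -> bool:
    """Check the two setup rules: each NIC is Active in exactly one
    port group, and all vMotion IPs share a single subnet."""
    active = [nic for pg in pgs.values() for nic in pg["active"]]
    assert len(active) == len(set(active)), "a NIC is Active in two port groups"
    subnets = {ip_interface(pg["vmk_ip"]).network for pg in pgs.values()}
    assert len(subnets) == 1, "vMotion IPs span multiple subnets"
    return True

print(validate(port_groups))  # True
```

Adding a third or fourth NIC follows the same pattern: one more port group, one more Active NIC, everything else Standby.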

Here is how a vSwitch would look with (2) vMotion port groups:

[Image: vSwitch with two vMotion port groups]

Configuring Multi-NIC vMotion with at least (2) NICs on a 1-Gig network is a no-brainer, and it’s still a good idea on 10-Gig networks.

This simple and very useful feature can help SMBs with 1-Gig networks add functionality and value to their environments when making the jump to 10-Gig isn’t necessary or cost-effective.


Author: Todd Bey
