Saturday, February 19, 2011

Where to place VSM and vCenter

With the Nexus 1000v, the VSM and vCenter can run as VMs under a VEM, but that doesn’t mean they always should.

The VSM is the “supervisor” for the VEMs (virtual line cards). It also communicates with vCenter, which is the central management and provisioning point for VMware virtual switching.
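
As an illustration of that relationship, here is a minimal sketch of the VSM-to-vCenter connection as configured on the VSM; the connection name, IP address, and datacenter name are placeholders for illustration, not values from this post:

  ! Connection from the VSM to vCenter (all values are placeholders).
  svs connection vc
    protocol vmware-vim
    remote ip address 192.168.10.5
    vmware dvs datacenter-name DC-Example
    connect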

As network designers, we need to work with the host team to determine the VSM’s form factor:
  • As a VM running under a VEM (taking a vEth port)
  • As a VM running under a vSwitch
  • As a separate physical machine
  • As an appliance (Nexus 1010 VSA)
As you can see, the options range from complete integration with the virtualized environment to complete separation, at increasing cost. Arguably, in a large and complex virtualization environment, the advantage of having separate control points becomes more apparent. Here we briefly touch on two practical considerations.
Failure Scenarios
When everything works, there is really no disadvantage to having the VSM and vCenter plugged into a VEM. In theory, the VSM can communicate even before the VEMs boot up, through the control and packet VLANs, which should be configured as system VLANs. However, troubleshooting becomes a lot more complex when something is not right: for example, a misconfiguration on vCenter leading to communication failure, a software bug on the Nexus 1000v leading to partial VLAN failures, or a faulty line card dropping packets.
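
To make the system VLAN point concrete, here is a minimal sketch of how the control and packet VLANs might be defined on the VSM and marked as system VLANs on the uplink port-profile; the domain ID, VLAN numbers, and profile name are assumptions for illustration:

  ! Switch domain definition; domain ID and VLAN IDs are illustrative.
  svs-domain
    domain id 100
    control vlan 260
    packet vlan 261
    svs mode L2

  ! System VLANs on the uplink port-profile keep forwarding even
  ! before the VEM has been programmed by the VSM.
  port-profile type ethernet system-uplink
    vmware port-group
    switchport mode trunk
    switchport trunk allowed vlan 260-261
    system vlan 260-261
    no shutdown
    state enabled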

The point is, if there is a failure, we want to know quickly whether it is in the control plane or the data plane, since we often rely on the control plane to analyze what is going on in the data plane. Running the VSM under a VEM increases the risk of the control plane and data plane failing at the same time, making root cause isolation more difficult. However unlikely we may think them, failure scenarios do happen, and when they do, access to the VSM and vCenter is essential for troubleshooting and problem isolation. A VEM does not rely on the availability of the VSM to pass packets; however, running the VSM under a VEM places it under the very DVS that it manages, leaving it subject to a DVS port corruption error, for example. When a VEM fails, imagine losing access to the VSM and vCenter as well because they are running under it.
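
When isolating such a failure, a few standard VSM show commands help establish quickly whether the control plane itself is healthy (output will of course vary by deployment):

  ! Are the VEMs registered with the VSM as virtual line cards?
  show module

  ! Is the VSM-to-vCenter connection up?
  show svs connections

  ! Do the domain, control, and packet VLAN settings match intent?
  show svs domain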
Administrative Boundary
Due to their critical nature, the VSM and vCenter need to be protected. To prevent administrators from mistakenly changing vCenter and the VSM while making changes to other VMs, as much of an administrative boundary should be established as the infrastructure supports.
Placing the VSM and vCenter in a separate control cluster with dedicated hosts creates a clear administrative boundary. Using a VMware virtual switch (vDS) instead of a VEM for vCenter and the VSM further decouples the dependency. The vDS should be clearly named so that its special purpose is understood by all administrators, minimizing the chance of mistakes.
The diagram shows an example of placing the VSM and vCenter as VMs on a control cluster, separate from the application VMs they manage.
