Thanks for your input.
We upgraded to 4.1 U2 some time ago and have now begun upgrading to 5.0.
The hosts are a mix of Dell servers, mostly R900s with a couple of R710s and R810s as well. There are 7 hosts in 2 production clusters, with 3 additional hosts in a test/pre-production cluster. Most have 8 NIC ports available, but a couple only have 6. At this point, though, the limiting factor is switch ports. There aren't enough to provide additional redundant connections for the hosts in either cluster.
Our network is 1 Gb Ethernet.
We have an iSCSI connection to shared storage. It runs through the Dell MEM extension using the DELL_PSP_EQL_ROUTED path selection policy. The iSCSI connection is a dedicated VLAN running across 2 physical NICs configured on a standard vSwitch. There are 2 iSCSI VMkernel ports, each with one of the physical NICs active and the other unused.
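For reference, this is roughly how I verify that setup from the ESXi 5.0 shell. This is just a sketch: the adapter name (vmhba33) is an example and will differ per host.

```shell
# List storage devices and their Path Selection Policy; devices claimed by
# the Dell MEM should show DELL_PSP_EQL_ROUTED as the PSP:
esxcli storage nmp device list

# Show which VMkernel ports are bound to the software iSCSI adapter
# (adapter name is an example -- check "esxcli iscsi adapter list"):
esxcli iscsi networkportal list --adapter=vmhba33
```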
The guest network also has 2 uplinks on each host. They are configured as part of a vSphere Distributed Switch.
Management and vMotion traffic share a vSwitch. It has 2 physical NICs and 2 VMkernel ports, one for vMotion and the other for Management traffic. Vmk0 is active and vmk3 is standby for the Management port; vmk3 is active and vmk0 is standby for the vMotion port. Both of these traffic flows are on the same, isolated VLAN.
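In case it helps others reading, here's a sketch of how that active/standby failover order is set per port group from the command line (assuming a standard vSwitch; port group and uplink names here are examples, not my actual config):

```shell
# Management port group: one uplink active, the other standby
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Management Network" \
    --active-uplinks=vmnic0 --standby-uplinks=vmnic1

# vMotion port group: mirror image, so each uplink normally carries
# only one traffic type but can take over the other on failure
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="vMotion" \
    --active-uplinks=vmnic1 --standby-uplinks=vmnic0
```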
For the short term our connections are limited by switch ports, so I'm not able to add connections to the hosts to separate vMotion and Management. Obviously, adding a NIC and configuring it as you described is the preferred solution, but since ports aren't available, is traffic shaping on the vMotion and Management port groups a good second choice?
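If it matters for the answer, what I had in mind was something like the following on the shared vSwitch (a sketch only: the port group name and bandwidth values are placeholders, not recommendations, and on a standard vSwitch this shapes outbound traffic only):

```shell
# Enable egress traffic shaping on the vMotion port group so a vMotion
# can't saturate the shared uplink; avg/peak are in Kbps, burst in KB
esxcli network vswitch standard portgroup policy shaping set \
    --portgroup-name="vMotion" \
    --enabled=true \
    --avg-bandwidth=500000 \
    --peak-bandwidth=800000 \
    --burst-size=102400
```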
Thank you