Hi all,
I'm writing today to discuss my proposed SAN and network design for an upcoming project, and I'm looking for any suggestions and information regarding the design. The key technologies I hope to incorporate are:
Mellanox 40Gb InfiniBand (ConnectX-2/3, InfiniScale IV / SwitchX-2)
Windows Storage Spaces
Windows Scale-Out File Server Cluster
RDMA
The SAN is going to be a 3-node Scale-Out File Server cluster using Windows Server 2012 Storage Spaces with a DataOn DNS-1640D JBOD enclosure. Below is a link to the general setup. SSDs will be used for the disks.
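For anyone curious about the Storage Spaces side, this is roughly the sequence I have in mind on the cluster (just a sketch; the pool, virtual disk, share, and group names are placeholders I made up, and I'm glossing over initializing the disk and adding it as a CSV):

    # Pool the JBOD's SSDs and carve out a mirrored space (run on one cluster node)
    $subsys = Get-StorageSubSystem -FriendlyName "*Storage Spaces*"
    New-StoragePool -FriendlyName "SOFS-Pool" `
        -StorageSubSystemFriendlyName $subsys.FriendlyName `
        -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
    New-VirtualDisk -StoragePoolFriendlyName "SOFS-Pool" -FriendlyName "VD01" `
        -ResiliencySettingName Mirror -UseMaximumSize

    # After formatting the disk and adding it as a Cluster Shared Volume:
    Add-ClusterScaleOutFileServerRole -Name "SOFS"
    New-SmbShare -Name "Data1" -Path "C:\ClusterStorage\Volume1\Shares\Data1" `
        -FullAccess "DOMAIN\BladeVMs"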
We will be using LSI 9207-8e HBAs to attach to the DNS-1640D. We are looking to use a 40Gb RDMA solution to create the 3-node Scale-Out File Server cluster. From there we would need to connect the SAN cluster to our blade servers using a 40Gb switch. We are currently using Supermicro TwinBlades, model SBI-7227R-T2. Supermicro currently offers a 40Gb 4x QDR switch for the blade enclosure, model SBM-IBS-Q3616M, based on the InfiniScale IV silicon. The blades can use Supermicro's mezzanine cards, model AOC-IBH-XQD, based on the ConnectX-2 silicon. The plan is to connect the Supermicro blade enclosure switch to a larger 40Gb Mellanox switch that will sit between the SAN and the blades.
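One thing worth keeping in mind on the numbers: QDR's 40Gb is the signaling rate, and with 8b/10b encoding the usable data rate is 32Gb/s (FDR on the ConnectX-3/SwitchX-2 generation uses 64b/66b encoding and gets much closer to line rate). Once the fabric is up, I believe the SMB Direct path between the SOFS nodes can be sanity-checked from any cluster node with something like:

    # Confirm the Mellanox adapters expose RDMA to Windows
    Get-NetAdapterRdma

    # Confirm the SMB client and server see RDMA-capable interfaces
    Get-SmbClientNetworkInterface | Where-Object RdmaCapable
    Get-SmbServerNetworkInterface

    # After some node-to-node traffic, verify Multichannel picked an RDMA path
    Get-SmbMultichannelConnection | Format-Table ClientRdmaCapable, ServerRdmaCapable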
The SAN cluster will be all physical systems with no virtualization, while our blades all run VMware ESXi 5.1 with Windows Server 2012 VMs. I am not sure if RDMA can be achieved between physical and virtual environments, such as from our SAN cluster to the blades. I also don't know much about 40Gb networking in VMware. I know that VMware is working on a paravirtual RDMA solution, but I don't think it is available yet. I believe you can use pass-through (VMDirectPath I/O) to hand a VM the whole adapter, or SR-IOV to assign virtual functions to VMs, but I don't know much about implementing either of these, and pass-through would not be an option for our environment.
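For what it's worth, it should be easy to see what the guests actually negotiate. From inside a Server 2012 VM you can check whether any NIC reports RDMA capability; with a standard vmxnet3/e1000 vNIC I'd expect RdmaCapable to come back False, meaning SMB 3.0 to the SOFS falls back to plain TCP via SMB Multichannel rather than SMB Direct:

    # Inside a Windows Server 2012 guest: how the SMB client sees its vNICs
    Get-SmbClientNetworkInterface |
        Format-Table InterfaceIndex, RdmaCapable, RssCapable, Speed, FriendlyName

    # After touching a file on the SOFS share, confirm what was negotiated
    Get-SmbConnection | Format-List ServerName, ShareName, Dialect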
We have been thinking of the SX6018 model switch from Mellanox to sit between the SAN and the blades. As for the SAN adapters, we were thinking of the MCX314A-BCBT or the MCX354A-FCBT (as I understand it, the MCX314A-BCBT is the Ethernet-only ConnectX-3 EN card, while the MCX354A-FCBT is the VPI card that can run InfiniBand). I'm also not sure if these adapters and switch will work correctly with the Supermicro products, since the Supermicro parts are based on ConnectX-2 and InfiniScale IV silicon rather than ConnectX-3 and SwitchX-2.
If anyone has had any experience with Mellanox on VMware and/or RDMA, I would like to hear what you learned and what worked or didn't. Any general suggestions or information anyone can share about my design or ideas would be great. Also, the SAN is not going to be used to store the VMs or VMDKs; it's just for other data that the VM guest OSes will access.
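Since the guests only reach the SOFS over SMB 3.0, access from inside a VM is just a normal UNC mapping to the clustered file server name (using the placeholder names from the sketch above):

    # From a VM guest: map the SOFS share
    New-SmbMapping -LocalPath "S:" -RemotePath "\\SOFS\Data1"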
Thanks!
Chris