
Re: New !! Open unofficial storage performance thread

Here are the results using the MS iSCSI target instead of StarWind, with IOMeter run in the same Windows 7 64-bit test VM:

 

Access Specification        IOPS        MB/s      Avg Resp Time (ms)
Max Throughput-100%Read     3,353.83    104.81    17.83    (NIC on SAN almost saturated)
RealLife-60%Rand-65%Read    1,177.83    9.20      51.24
Max Throughput-50%Read      (saturates NIC on SAN)
Random-8k-70%Read           1,040.72    8.13      57.96
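
For reference, the MB/s column is just IOPS times the access-spec block size; assuming the usual block sizes for these specs (32 KB for the Max Throughput tests, 8 KB for RealLife and Random-8k), a quick sanity check lines up with the table:

```python
# Quick sanity check that MB/s = IOPS x block size for these runs.
# Block sizes assumed from the usual access specs in this thread:
# 32 KB for the Max Throughput tests, 8 KB for RealLife / Random-8k.
results = {
    "Max Throughput-100%Read":  (3353.83, 32),
    "RealLife-60%Rand-65%Read": (1177.83, 8),
    "Random-8k-70%Read":        (1040.72, 8),
}
for spec, (iops, block_kb) in results.items():
    mb_s = iops * block_kb / 1024
    print(f"{spec}: {mb_s:.2f} MB/s")  # ~104.8, ~9.2, ~8.1 - matches the table
```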

 

As far as I know, the MS iSCSI target does no caching, so these results suggest that the 1 GB StarWind cache provides little, if any, benefit.

 

Based on the "native" performance for max throughput (both 100% read and 50/50 read/write), the network throughput is clearly the limiting factor. With over 26,000 IOPS and over 800 MB/s throughput for reads, I would need 8 NICs in order to keep up! Pretty crazy...
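
Back-of-the-envelope, assuming a saturated 1 GbE path is good for roughly 110 MB/s of iSCSI payload (about what the Max Throughput run above showed):

```python
import math

# Rough estimate of how many 1 GbE links the native read numbers would need.
# Assumption: ~110 MB/s of usable iSCSI payload per GbE path, about what the
# saturated Max Throughput-100%Read run above delivered.
native_read_mb_s = 800   # native sequential read throughput from the local test
per_link_mb_s = 110      # usable throughput per 1 GbE iSCSI path

print(math.ceil(native_read_mb_s / per_link_mb_s))  # -> 8 NICs just to keep up
```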

 

I had planned on using two pNICs on the SAN and on each ESXi server, with two iSCSI subnets and each vmknic bound to a single pNIC: basically a standard MPIO setup with round robin, switching between storage paths on each IO operation.

 

Can anyone out there advise on what's involved in using more than two pNICs with MPIO? Is it just a matter of creating additional vmknics and binding each one to a pNIC? Is it worth bothering?
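
For what it's worth, here's roughly what I'm assuming each additional path would involve, sketched as a little script that just prints ESXi 5.x-style esxcli commands for review before running them in the ESXi shell (vSwitch1, iSCSI-3, vmk3, vmnic4, the vmhba number, and the naa. ID are all placeholders for my environment, and the exact esxcli namespaces differ on older ESXi builds):

```python
# Sketch only: print the esxcli commands for adding one more iSCSI path
# (a new portgroup/vmknic pinned to a single pNIC and bound to the software
# iSCSI HBA), plus round robin set to switch paths on every IO.
# All names below are placeholders.
extra_paths = [
    # (portgroup, vmknic, pnic,     vmk IP,       netmask)
    ("iSCSI-3",  "vmk3", "vmnic4", "10.10.3.11", "255.255.255.0"),
]
sw_iscsi_hba = "vmhba33"             # software iSCSI adapter (placeholder)
lun_device = "naa.XXXXXXXXXXXXXXXX"  # iSCSI LUN device ID (placeholder)

for pg, vmk, pnic, ip, mask in extra_paths:
    print(f"esxcli network vswitch standard portgroup add -v vSwitch1 -p {pg}")
    # pin the portgroup to a single active uplink so the vmknic maps 1:1 to a pNIC
    print(f"esxcli network vswitch standard portgroup policy failover set -p {pg} -a {pnic}")
    print(f"esxcli network ip interface add -i {vmk} -p {pg}")
    print(f"esxcli network ip interface ipv4 set -i {vmk} -I {ip} -N {mask} -t static")
    # bind the new vmknic to the software iSCSI adapter
    print(f"esxcli iscsi networkportal add -A {sw_iscsi_hba} -n {vmk}")

# round robin on the LUN, switching storage paths after every IO operation
print(f"esxcli storage nmp device set -d {lun_device} -P VMW_PSP_RR")
print(f"esxcli storage nmp psp roundrobin deviceconfig set -d {lun_device} -t iops -I 1")
```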

 

About the only sequential IO in our environment that would generate sustained iSCSI traffic is PHD backups (virtual fulls), and I'm guessing a good chunk of the job time goes to hashing, compression, and verification rather than to IO, so faster disk throughput probably wouldn't shorten backup job times by much.

 

Any thoughts?

