Wasn't quite sure where to put this since AT doesn't have a 'professional IT' section, but storage fits well enough.
We're currently working with the vendor(s) on this, but for background: my company recently ordered a new SAN from a respected vendor, in the $50k range. Not a budget unit, but not a $200k rack-o-disks either.
We've had a persistent problem with it for about two to three weeks now, specifically in how it handles thin- vs. thick-provisioned LUNs within VMware, which is preventing us from putting it into production. Basically, with a thin-provisioned LUN of arbitrary size, on both RAID 6 and RAID 5, we get about 50% of the read/write speed we see with thick provisioning. I've never encountered this before; in fact, the only thing I've ever been schooled on regarding thin vs. thick was that in ye olden dark days, thin-provisioned volumes/LUNs/whatever could suffer a moderate degradation in write speed, which makes sense. That has mostly gone by the wayside, though, as far as my experience has shown me.
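Side note for anyone comparing setups: a quick way to confirm what ESXi actually sees for a given LUN, including whether the VAAI offload primitives are detected (array-side thin provisioning leans on those). The naa ID below is a placeholder, not our real device:

    # Device details; output includes a "Thin Provisioning Status" field
    esxcli storage core device list -d naa.60000000000000000000000000000001

    # Which VAAI primitives ESXi detected for the device
    # (ATS / Clone / Zero / Delete statuses)
    esxcli storage core device vaai status get -d naa.60000000000000000000000000000001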
So anyhow: poor performance with thin-provisioned LUNs regardless of RAID pool type, primarily attributable to read latency. This was tested via simple methods at first: file copies within a VM to itself, vMotioning (this is a VMware build), some SQL backups, junk like that. We also employed IOMeter to narrow down the choke points.
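For anyone wanting to reproduce the latency attribution, standard esxtop counters alongside an IOMeter run are enough to split device latency from kernel latency:

    # On the host, during an IOMeter run:
    esxtop
    # press 'u' for the disk-device view, then watch:
    #   DAVG/cmd - latency at the device/array
    #   KAVG/cmd - latency added in the VMkernel
    #   GAVG/cmd - total latency as seen by the guest
    # High DAVG on the thin LUN but not the thick one points at the array side.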
Build is 2x Dell hosts with 2x connections each, running 10GbE iSCSI over fiber through a fiber switch to the SAN, which likewise has 2x connections. We tested with/without multipathing, with/without the fiber switch in the path, swapped fiber cables, and fiddled with a handful of ESXi settings, with varying results.
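For the multipathing part, the checks were along these lines (device ID again a placeholder):

    # Show current path selection policy and paths per device
    esxcli storage nmp device list

    # Switch the device to Round Robin for the multipathed runs
    esxcli storage nmp device set -d naa.60000000000000000000000000000001 -P VMW_PSP_RR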
Has anyone seen abnormally high latency in scenarios even vaguely similar to the above? Right now we're banking on some kind of firmware bug in the interaction with ESXi 6, but the vendor's engineers haven't popped their heads up yet.
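One diagnostic on our list, offered here as an experiment rather than a fix: temporarily disable the VAAI offloads on a test host and see if the thin-LUN latency follows them. If performance evens out with offload off, that would strengthen the firmware-interaction theory. Test host only, and re-enable afterward:

    # Disable hardware-accelerated init/move/locking on a test host
    esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
    esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
    esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0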