
SSD scaling above 2 drives?


LokutusofBorg

Golden Member
Mar 20, 2001
I'm Brent Ozar, the guy with the benchmarks you linked to earlier. I've got a couple of FusionIO Duos in a dev server at the moment - if you have any particular questions about them, let me know and I'd be glad to help.
I've been sharing your article with everybody I talk with about this stuff. :) It got me totally jazzed on this new tech. We're a small software company that suffers from the growing pains that make it hard to scale our hardware appropriately, and the DB servers are always under-powered. Yet people (managers) wonder why our customers aren't happy, etc.

In your article you mention troubles with cards dropping out of RAID. Is that still a concern? Or was it solved during your testing in the FusionIO datacenter, and so can be chalked up to some bad hardware in the server or whatever?

In your recommended-scenarios list I don't see a full endorsement of the strategy of running all your databases from these cards, yet there are several case studies of FusionIO customers doing just that. Have you come around to that line of thinking yet (your article *was* quite a few months ago)? If not, what is holding you back, and where would the tech need to go for you to subscribe to that strategy?

As our company struggles to grow, investing in a SAN to get the hardware performance we really need is not yet feasible. Switching to a PCIe SSD strategy and targeting the servers we need the most performance from seems like the better choice for us, since we can buy servers with smoking performance for a lot less than it would cost to step up to a SAN for the same level.

I'm a software dev, not our IT guy, btw. But our IT guy isn't fully up to speed on this whole side of things... he has his sights set on a SAN, even though we can't afford it yet.

EDIT: Forgot to ask whether you've done, or are planning to do, any testing with the OCZ cards to compare their viability as the budget choice against the premium FusionIO cards. I was excited to see that Dell sells (and supports) the OCZ cards, because our CIO choked on the price tag of the FusionIO cards.
 

pjkenned

Senior member
Jan 14, 2008
www.servethehome.com
That's the kind of information I'm looking for - have you got a link to the tests showing scaling?

I don't, but look at the block diagram of the X58 chipset w/ ICH10R:
http://www.intel.com/Assets/PDF/prodbrief/x58-product-brief.pdf

You'll notice that the ICH10R has a 2 GB/s DMI link serving 6 PCIe x1 lanes, audio, LAN, and 12 USB ports, in addition to the 6 SATA 3.0 Gbps ports. That 2 GB/s is made up of 1 GB/s up and 1 GB/s down, so you basically cap out somewhere around 1 GB/s even assuming nothing else connected is using part of that link. That's why it would be really hard to hit 1.7 GB/s sequential read when you have at most ~1 GB/s to the northbridge. The ICH10R has been around for a long time, and the next variant will surely be faster, supporting SATA 3 and USB 3.
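Just to show the arithmetic (the ~1 GB/s one-way figure is my assumption about usable DMI throughput, not something measured in this thread):

```python
# Back-of-the-envelope: can a 1.7 GB/s sequential read fit through the ICH10R's DMI link?
DMI_PER_DIRECTION_GBS = 1.0   # ~half of the 2 GB/s DMI link, one direction (assumed usable ceiling)
CLAIMED_SEQ_READ_GBS = 1.7    # the kind of multi-drive RAID 0 number being quoted

headroom = DMI_PER_DIRECTION_GBS - CLAIMED_SEQ_READ_GBS
print(f"Headroom through DMI: {headroom:+.1f} GB/s")  # negative, so the array can't deliver it
```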
 

kalniel

Member
Aug 16, 2010
Okay yup, I can see how 6x in RAID 0 would be bottlenecked, but the question is how far you can go before you hit the bottleneck, and how much other devices affect that. If you've got around 1 GB/s to work with, as you suggest, then 3x should be no problem, and there might still be some benefit from 4x.
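A rough sketch of that scaling, assuming something like 280 MB/s sequential read per SATA 3 Gbps SSD (that per-drive figure is my assumption, not a number from this thread):

```python
# How many drives in RAID 0 fit under a ~1 GB/s one-way DMI ceiling?
PER_DRIVE_MBS = 280      # assumed sequential read per SSD
DMI_CEILING_MBS = 1000   # ~1 GB/s one way through DMI, per the post above

for n in range(1, 7):
    ideal = n * PER_DRIVE_MBS
    capped = min(ideal, DMI_CEILING_MBS)
    print(f"{n}x: ideal {ideal} MB/s, DMI-capped {capped} MB/s")
```

By that math 3x (~840 MB/s) still fits, while 4x and up is already scraping the ceiling.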

And that's just hanging them off the ICH10R. What happens if you hang PCIe storage cards off the graphics slots/IOH instead? Can they work straight with the CPU/RAM without having to touch the DMI connection to the southbridge?
 

pjkenned

Senior member
Jan 14, 2008
www.servethehome.com
And that's just hanging them off the ICH10R. What happens if you hang PCIe storage cards off the graphics slots/IOH instead? Can they work straight with the CPU/RAM without having to touch the DMI connection to the southbridge?

At that point the sky is pretty much the limit. On an LGA 1156 platform you go directly to the CPU's PCIe controller, and on LGA 1366 you go to the X58 IOH (assuming you're not hanging off the six ICH10R lanes for some reason) and across QPI (think over 12x faster than DMI).
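A quick sanity check on that "over 12x" figure, using the commonly quoted X58 numbers (6.4 GT/s QPI, 2 GB/s DMI); take these as assumptions rather than spec-sheet citations:

```python
# Compare raw QPI bandwidth against the 2 GB/s DMI link
QPI_GT_PER_S = 6.4                 # assumed QPI transfer rate on X58
BYTES_PER_TRANSFER_PER_DIR = 2     # 16-bit payload per direction
qpi_total_gbs = QPI_GT_PER_S * BYTES_PER_TRANSFER_PER_DIR * 2  # both directions

DMI_TOTAL_GBS = 2.0

print(f"QPI: {qpi_total_gbs:.1f} GB/s, DMI: {DMI_TOTAL_GBS:.1f} GB/s")
print(f"Ratio: {qpi_total_gbs / DMI_TOTAL_GBS:.1f}x")  # ~12.8x
```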

Do a Google/Bing search for "ICH10R 660MB/s limit"; my Gigabyte X58-Extreme has that 660 MB/s limit on its SATA ports, so I have seen it in the real world.