Intel's SSD 910: Not that impressive

Page 2 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
OCZ being cheap or widely available doesn't make it not a strawman.
I disagree. Being in the same price range, and not hiding pricing and availability behind someone like EMC, made OCZ the only meaningful competition at the time of my post. When there is only one brand, and that brand is OCZ, it's not a strawman to compare them to the near-future market entry of Intel. It is the reasonable comparison to make.

It's the difference between selling hardware and selling your sales rep. Right now there are not dozens of other brands, so much as there are dozens of companies trying to compete with the likes of IBM and EMC at providing you "solutions" and the like. Yes, there are other brands, but the middlemen you choose to use matter. If Intel acts like the Intel we know and love/hate usually acts, they will use that lack of much open competition to their advantage, and sell them much like OCZ sells their Velodrives (which support HW RAID, but make no secret that they are SATA drives on a card).

There is now Fusion-io as well, though. I just noticed that Intel's announcement conveniently coincided with Fusion-io announcing a newer, cheaper card. Fusion-io also does not hide pricing, and there are plenty of easily accessible resellers; it's just that most everything they already have out is not nearly in the same price range. Quite a convenient coincidence for Intel, isn't it? I mean, to announce an upcoming PCI-e SSD product on the same day that a competitor without nearly the same mindshare announces one? It's almost like Intel planned to overshadow Fusion-io's announcement with their own. :sneaky:

It wouldn't surprise me if they plan on a more affordable, server card line to follow it up, too.
I clearly explained that you could get SATA-based SSDs instead with no loss.
This so-called "PCIe SSD" is a 4-port non-RAID SAS controller attached to one or two daughterboards; each daughterboard is 2 SATA SSDs which plug into the SAS controller.
20+ drives take quite some room, plus 5+ cables (20+ if you fan out) and additional controller cards. I understand that being SAS drives on a card without HW RAID support leaves out potential customers, but I don't get how that makes it any less a PCI-e SSD. It plugs into a PCI-e slot, with nothing needed beyond the card, and has what is necessary to take advantage of the extra bandwidth PCI-e provides.

If you're fine with 20+ drives, several controller cards, and 6+ cables to go with them, why would you bother with PCI-e SSDs in the first place? In that case, nobody's PCI-e SSD will offer any value.
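The parts-count tradeoff above can be sketched roughly. This is a minimal illustration, where the HBA port count and fan-out ratio are assumed figures for the sake of the example, not specs from either vendor:

```python
# Rough parts count for ~20 SSDs' worth of flash: discrete drives behind
# HBA cards vs. a PCI-e SSD. All figures are illustrative assumptions.

def discrete_setup(n_drives, ports_per_hba=8, drives_per_cable=4):
    """Parts needed for n discrete SAS/SATA SSDs behind add-in HBAs."""
    hbas = -(-n_drives // ports_per_hba)        # ceiling division
    cables = -(-n_drives // drives_per_cable)   # fan-out cables
    return {"drives": n_drives, "hba_cards": hbas, "cables": cables}

def pcie_card_setup(n_cards):
    """A PCI-e SSD needs only the card itself and a free slot."""
    return {"cards": n_cards, "cables": 0}

print(discrete_setup(20))   # 20 drives, 3 HBAs, 5 fan-out cables
print(pcie_card_setup(1))   # 1 card, 0 cables
```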
 

Coup27

Platinum Member
Jul 17, 2010
2,140
3
81
I looked around and I don't see an eMLC product except from Intel, which has this offering:
http://www.newegg.com/Product/Produc...82E16820167079

Which is cheaper but slower, yet comes with built-in AES 128-bit hardware encryption.

The product disappoints me because Intel seems to have taken the approach of "no competition, let's be lackluster in every possible way", and it will cost them if we see other companies enter the eMLC market.
There is a competitor to the 710 from Samsung; it just doesn't look like it's been advertised well, hasn't had a mention on AT, and isn't available from any of the usual e-tailers.

http://www.storagereview.com/samsung_ssd_sm825_enterprise_ssd_review
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I disagree. Being in the same price range, and not hiding pricing and availability behind someone like EMC, made OCZ the only meaningful competition at the time of my post.
Again you quote me and then disagree with statements that don't exist in the quote.

When there is only one brand, and that brand is OCZ, it's not a strawman to compare them to the near-future market entry of Intel. It is the reasonable comparison to make.
Ah, a valid argument... which makes the assumptions that the buyer is dead set on a PCIe-card-style implementation and on eMLC... neither of those assumptions is justified, in my awesome opinion.
As a corporation you cannot lock yourself into a specific form factor or technology; you must seek out the best products available on the market at the best price.
If PCIe card based products don't make sense, then buy a non PCIe solution.

20+ drives takes quite some room, 5+ cables (20+ if you fan out), and additional controller cards. I understand being SAS drives on cards and not supporting HW RAID leaves out potential customers, but I don't get how that makes it any less a PCI-e SSD.

This is an x8 PCIe card which contains 4 drives.
You have, at most, 3 compatible slots per Intel CPU (32 lanes only)... in reality, mobos have no more than 3 such slots (I think I have mobos with 4 slots, but they were operating at lower speeds).
20 drives is impossible with this configuration.
Furthermore, 20 SSDs @ 1.8" don't take up that much room.

If you are using 2, 4, 6 or 8 drives per chassis, then it will take SLIGHTLY more room in the chassis (the same room in the datacenter), but still so little as to not be an issue.
If you are using 10 to 12, it will require a rather rare mobo but otherwise be the same as the above situation.
If you are using 14+, then you are actually saving space, because you can cram 20+ drives of 200GB each into a chassis that would otherwise have been stuck with only 12 drives (the max using Intel's PCIe solution).
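The slot arithmetic above can be written out explicitly. A quick sketch, assuming 32 usable lanes per CPU, x8 per card, and 4 exposed drives per 910 card, per the figures in this post:

```python
# Back-of-the-envelope slot math from the post above.
lanes_per_cpu = 32
lanes_per_card = 8          # the 910 is an x8 card
drives_per_card = 4         # it exposes up to 4 SAS devices

max_cards = min(lanes_per_cpu // lanes_per_card, 3)  # ~3 usable slots on real boards
max_pcie_drives = max_cards * drives_per_card
print(max_pcie_drives)      # 12 drive-equivalents via PCIe cards

bays_1_8_inch = 20          # a dense 1.8" drive chassis, per the post
print(bays_1_8_inch > max_pcie_drives)   # discrete drives win on raw count
```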
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Again you quote me and then disagree with statements that don't exist in the quote.
The claim was very much in the quote: that it was a strawman to compare them to the only other maker I was aware of that sold drives near their price point with the same kind of direct pricing and distribution. That's what my reply was about.

Ah a valid argument... which makes the assumptions that the buyer is dead set on a PCIe card style implementation and on eMLC... neither of those assumptions are justified in my awesome opinion.
As a corporation you cannot lock yourself to a specific form factor or technology but seek out the best products available on the market at the best price.
If PCIe card based products don't make sense, then buy a non PCIe solution.
No argument there. I'm going under the assumption that if you could easily use SATA or SAS SSDs, you wouldn't be looking at PCI-e. Be it thermal/airflow, IOPS/volume, IOPS/Watt, or whatever, that obvious option must already have some serious downsides, because a single card makes no sense at all compared to a small handful of 2.5" SAS drives. Using only 1 or 2 such cards doesn't seem like it would make sense except in oddball form factors, which is not Intel's style.

I also suspect Intel has some specific types of users in mind (some of which have probably helped them develop and test these things), if they're exposing the drives to software, instead of making them single drive IOPS beasts, hiding that it's made of up to 4 SAS drives underneath.

You have, at most, 3 compatible slots per intel CPU (32 lanes only)... in reality mobos have no more then 3 such slots (I think I have mobos with 4 slots but they were operating at lower speeds).
In under a minute, I easily found 6 slots, though 4 is more typical (and, really, how many users won't be quite well served with 2 16x slots? Not many).
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
In under a minute, I easily found 6 slots, though 4 is more typical (and, really, how many users won't be quite well served with 2 16x slots? Not many).

Well, this mobo has 3 slots PER CPU, the maximum per CPU. And it's a truly huge mobo. That being said, I wasn't aware someone actually made such a mobo.

I also suspect Intel has some specific types of users in mind
Who?
 

imported_ats

Senior member
Mar 21, 2008
422
64
86
So what? This does not justify inferior performance for more money.

If you don't understand the numbers they are using, you'll never be able to do a correct comparison.

The numbers Intel are quoting are full-span steady-state I/O numbers and are in no way comparable to the partial-span peak I/O numbers quoted by most vendors. Full-span steady-state numbers are generally 10x+ worse than partial-span peak numbers. For example, SandForce-based 25xx series drives will only deliver ~7K steady-state random write IOPS on a drive generally quoted at 60K+ random write IOPS. In contrast, each of the separate NAND blocks on the 910 is capable of ~20K steady-state random write IOPS.
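The gap being described can be put in numbers. A quick sketch using only the approximate figures from this post:

```python
# Peak (partial-span) vs. steady-state (full-span) write IOPS, using the
# approximate figures quoted above.
sf_quoted_peak = 60_000     # typical vendor-quoted peak for a SF 25xx drive
sf_steady = 7_000           # full-span steady state for the same drive
print(sf_quoted_peak / sf_steady)   # roughly an 8.6x gap

# The 910's quoted figures are already steady state, per NAND block:
per_block_steady = 20_000
blocks = 4                  # the 800GB card
print(per_block_steady * blocks)    # 80000 aggregate steady-state write IOPS
```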

Also, Google went and showed that large arrays of consumer-grade hardware make more sense than overpriced enterprise hardware.

No they didn't. They showed that with a lot of custom software and incredible scale, you can make cheap commercial hardware work. Leaving out the custom software and incredible scale is a mistake. There are only a handful of companies that have the scale to do what google does.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
If you don't understand the numbers they are using, you'll never be able to do a correct comparison.

The numbers Intel are quoting are full-span steady-state I/O numbers and are in no way comparable to the partial-span peak I/O numbers quoted by most vendors.

The quoted performance numbers fall well below the INDEPENDENTLY TESTED steady-state performance of other drives.

No they didn't. They showed that with a lot of custom software and incredible scale, you can make cheap commercial hardware work. Leaving out the custom software and incredible scale is a mistake. There are only a handful of companies that have the scale to do what google does.

The issue is that you contended that there is a size issue. If such a small difference in size is an issue, then you must be running a Google-sized operation.
 

Zxian

Senior member
May 26, 2011
579
0
0
Everyone here moaning about performance needs to look at the target audience and the one metric that actually matters for them: the 7-14 petabyte write endurance. Micron's P320 drive offers 25-50 petabytes of write endurance, but does so at $15/GB compared to the $5/GB for Intel. That combined with the half-height form factor (read: Fits in a 2U server) makes this an interesting product.
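Those endurance and price figures can be turned into a rough cost-per-petabyte-written comparison. This is a sketch only; the 800GB capacity is an assumption for illustration, applied to both drives:

```python
# Dollars per petabyte of write endurance, using the figures above.
capacity_gb = 800           # assumed capacity, for illustration only

def dollars_per_pb_written(price_per_gb, endurance_pb):
    """Total drive cost divided by rated petabytes of writes."""
    return price_per_gb * capacity_gb / endurance_pb

# Intel 910: $5/GB, 7-14 PB endurance
print(dollars_per_pb_written(5, 14), dollars_per_pb_written(5, 7))
# Micron P320: $15/GB, 25-50 PB endurance
print(dollars_per_pb_written(15, 50), dollars_per_pb_written(15, 25))
```

Under these assumptions, both drives land within a few hundred dollars per petabyte written; the 910's draw is the much lower up-front cost per gigabyte.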

Any half decent DBA will be intrigued by the Intel 910. For all of you benchmarking e-peen stretchers, this wasn't meant for you, it's not designed for you, and you shouldn't buy it. Go look somewhere else for your toys to brag that you spent money on. ^_^
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
My 4 Intel X25-M G1's have used 1% of their write cycles. I have owned them for over 2 years. I am not worried that I don't have TRIM.

The G1 (and, for that matter, Intel drives in general) does not have aggressive GC; this conserves the drive's lifespan at the cost of performance.
Aggressive GC sacrifices lifespan in order to try to maintain like-new performance, which it can't do as well as TRIM, but it can get pretty close if aggressive enough.

Also, I am editing in a correction

The X25-M G1 didn't lose very much performance even without GC or TRIM. I don't remember how they did it, but that drive was very nearly the holy grail of SSDs for non-TRIM OSes (like Win XP). The G2s were/are pretty awesome as well. Unfortunately, Intel's more recent offerings based upon that same controller haven't been nearly as dominant, and it looks like they might even be switching to SandForce (!!) for the 330 series.
 

imported_ats

Senior member
Mar 21, 2008
422
64
86
The quoted performance numbers fall well below the INDEPENDENTLY TESTED steady state performance of other drive.

The only independent steady-state testing under the correct conditions has been done by StorageReview, and it has the SF 25xx-based drives at ~7K. A full-spec OCZ Z-Drive R4 manages 48K with 8 SF 25xx-based controllers.

So no, the quoted performance numbers are ABOVE the INDEPENDENTLY TESTED steady state full span random write performance of other tested drives.
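A quick consistency check on these figures (a sketch, using only numbers quoted in this thread):

```python
# Z-Drive R4: 48K steady-state write IOPS over 8 SandForce controllers.
r4_total = 48_000
r4_controllers = 8
print(r4_total / r4_controllers)    # 6000.0 per controller, in line with the
                                    # ~7K single-drive steady-state figure

# Intel 910, per earlier posts: ~20K steady state per NAND block, 4 blocks.
print(20_000 * 4)                   # 80000, above the R4's 48K total
```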