
New Intel SSD out in Feb

Page 3 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.
Sequential read is at 450 MB/s and write at 300 MB/s; IOPS are reported at 20k read and 4k write, which is worse than the current Postville models.

I really wish they'd focus on higher IOPS over STR. A 100k IOPS drive with 100MB/S W/R sequential would absolutely eat up these new drives.

Of course when it hit the review sites and everyone saw ATTO scores they'd promptly dismiss it as a POS. 🙄
 
well we just want 8-16 for a server with raid-10 - cut-through mode on the raid controller on a x16 slot.
 
well we just want 8-16 for a server with raid-10 - cut-through mode on the raid controller on a x16 slot.

I cannot wait for PCI-E 3.0 hosts with better SSD caps to come out. 48 32GB X-25Es in RAID0 is pretty funny though. (except when they saw the bill lol)
 
I really wish they'd focus on higher IOPS over STR. A 100k IOPS drive with 100MB/S W/R sequential would absolutely eat up these new drives.

Of course when it hit the review sites and everyone saw ATTO scores they'd promptly dismiss it as a POS. 🙄

I don't think it's possible to have a drive that does 100K IOPS (4K R/W) and have 100 MB/s sequential. But point taken. Such a drive is out and it's only on PCI-E ... I don't believe that it's possible to do that with non-RAID SATA given overhead, even on SATA 3.

Sequential reads (and writes) though are starting to matter as value SSDs are expanding into the 100+ GB range. I'm talking of course about streaming, game loading, caches for video editing, etc that do require high seq rates as well as random rates. We've gone past the point where one can only fit an OS on an "affordable" SSD.
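The impossibility claim above is easy to sanity-check: random IOPS at a given block size imply a minimum throughput, so a drive sustaining 100K 4KB operations per second is already moving roughly 390 MB/s, which contradicts a 100 MB/s sequential ceiling. A quick back-of-the-envelope sketch:

```python
# Random IOPS at a fixed block size imply a minimum throughput, so
# "100K IOPS but only 100 MB/s" can't both hold at 4KB blocks.

def iops_to_mbps(iops, block_size_kb=4):
    """Throughput in MB/s implied by sustaining `iops` operations of
    `block_size_kb` kilobytes each (binary units)."""
    return iops * block_size_kb / 1024

print(iops_to_mbps(100_000))  # 100K x 4KB ops -> 390.625 MB/s
print(iops_to_mbps(20_000))   # the rumored drive's 20K read IOPS -> 78.125 MB/s
```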
 
I don't think it's possible to have a drive that does 100K IOPS (4K R/W) and have 100 MB/s sequential. But point taken. Such a drive is out and it's only on PCI-E ... I don't believe that it's possible to do that with non-RAID SATA given overhead, even on SATA 3.

Sequential reads (and writes) though are starting to matter as value SSDs are expanding into the 100+ GB range. I'm talking of course about streaming, game loading, caches for video editing, etc that do require high seq rates as well as random rates. We've gone past the point where one can only fit an OS on an "affordable" SSD.

Yeah I was using an (exaggerated) example but I'd prefer the mix to be more random. Of course enterprise and truly multiple task loads are considerably different than desktop loads.
 
I really wish they'd focus on higher IOPS over STR. A 100k IOPS drive with 100MB/S W/R sequential would absolutely eat up these new drives.

Of course when it hit the review sites and everyone saw ATTO scores they'd promptly dismiss it as a POS. 🙄

No it won't. Past a certain point, sequential R/W doesn't matter as much.
 
Sequential reads are nice for backing up the volume, but sequential writes not so much. They seem to have sequential read speed pretty fast, but line speed then becomes the problem, especially with inline dedupe/compression. Even with 8 cores you're probably going to run out of CPU, since inline dedupe in an enterprise (VMware) environment runs many backups at once to maximize deduplication (grouping backups of like machines together).
 
With dedup the problem is RAM, not CPU power. CPU power for dedup is cheap, but RAM runs about 2GB per 1TB of data. Thrashing while deduping is not pretty.
 
Well, Veeam eats up 8 cores to dedupe with 2GB of RAM and still isn't at full SAN wire speed (SAS/FC-like speed). It does a good job, though: 2TB down to 300GB for a full bare-metal backup (I don't do incrementals).
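The 2GB-per-TB rule of thumb quoted above is easy to sketch (the ratio comes from the post itself, not any particular vendor's sizing guide):

```python
# Dedupe-table RAM sizing per the rule of thumb in the post: ~2 GB of
# RAM per 1 TB of data being deduplicated. Exceed physical RAM and the
# hash table spills to disk -- that's the "thrashing" being warned about.

DEDUP_RAM_GB_PER_TB = 2  # rule of thumb from the post, not a vendor spec

def dedup_ram_needed_gb(data_tb):
    return data_tb * DEDUP_RAM_GB_PER_TB

for tb in (2, 10, 48):
    print(f"{tb} TB of data -> ~{dedup_ram_needed_gb(tb)} GB RAM for dedupe")
```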
 
Report: Intel 510-series SSDs will arrive on March 1


A new line of Intel SSDs will debut on March 1, according to a story by VR-Zone. The site has spotted listings for Intel 510-series drives (reportedly code-named Elm Crest) at a British e-tailer, and it's posted a few specs, as well.

Let's start with the dirty details. VR-Zone claims the 510-series SSD will have a 2.5" form factor, 6Gbps Serial ATA interface, maximum read speeds of 470MB/s, and top write speeds of 315MB/s. Intel will reportedly produce the drive using 34-nm flash memory, just like its existing X25-M offerings. If those read and write speeds are accurate, Intel could get a leg up over the competition from SandForce drives, which are typically rated for read and write speeds under 300MB/s.

The British e-tail listings linked by VR-Zone now point to X25-M drives, but hitting European price search engine Geizhals uncovers live entries for 120GB and 250GB 510-series drives. The lowest-capacity model is priced at €249, while the 250GB model will set you back €513. For reference, the 120GB X25-M can be found for €191 through the same price search engine, so you're looking at premiums of 30% and 170% for the two 510-series drives. Apply those premiums to the 120GB X25-M's Newegg price, and you end up with a rough idea of potential U.S. pricing: around $300 for the 120GB 510 drive and $620 for the 250GB offering.

http://techreport.com/discussions.x/20445
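The article's premium arithmetic can be reproduced in a few lines. The Euro prices are from the story; the $230 Newegg baseline for the 120GB X25-M is an assumption matching its street price at the time:

```python
# Reproduce the article's method: compute each 510-series drive's Euro
# premium over the 120GB X25-M, then apply that premium to the X25-M's
# assumed US street price to estimate US pricing.

X25M_120_EUR = 191                           # 120GB X25-M via Geizhals (article)
X25M_120_USD = 230                           # assumed Newegg street price
PRICES_510_EUR = {"120GB": 249, "250GB": 513}

for model, eur in PRICES_510_EUR.items():
    premium = eur / X25M_120_EUR             # ~1.30x and ~2.69x respectively
    est_usd = X25M_120_USD * premium
    print(f"{model}: {premium - 1:.0%} premium -> ~${est_usd:.0f}")
```

This lands at roughly $300 for the 120GB drive and a little over $600 for the 250GB one, in line with the article's estimates.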
 
Hi there!

Plans for 2011:

  • Intel PC29AA31AA0??: Postville Refresh (SSD 320 Series = G3), 250/170 MB/s R/W
  • Intel EW29AA31AA0: enterprise, 535/500 MB/s, SAS 600

Here is some short info about the Intel SSD 320 Series (my blog).

My opinion about OCZ 25nm performance issue.

 
A significant problem right now (at least with my G2 X25-M) is synchronous sequential writes blocking all other I/O operations on the SSD. In other words, when making a large sequential write, Windows essentially stutters until the write is completed. This doesn't happen with random async writes to any extent.

Does this problem still occur in today's Sandforce/Crucial/etc drives?

I really do wish someone tests this thoroughly. I don't have enough SSDs to do it.
 
SAS600 parts should be nice. 🙂

A significant problem right now (at least with my G2 X25-M) is synchronous sequential writes blocking all other I/O operations on the SSD. In other words, when making a large sequential write, Windows essentially stutters until the write is completed. This doesn't happen with random async writes to any extent.

Does this problem still occur in today's Sandforce/Crucial/etc drives?

I really do wish someone tests this thoroughly. I don't have enough SSDs to do it.

I don't see this but the 4GB cache in front of the array probably comes into play here. During some large writes there's evidence of "pumping" on cache commits but it's pretty much unavoidable until write penalties are lessened in future generations.
 
Yes, I think large write caches help significantly with this problem (unfortunately, also some risk of data loss). I believe the Intel drive so far doesn't use one?
 
Yes, I think large write caches help significantly with this problem (unfortunately, also some risk of data loss). I believe the Intel drive so far doesn't use one?

They don't on their SSDs. Without a capacitor to keep it refreshed the risk of data loss is real. My RAID arrays have a battery that keeps the cache retained for up to 72 hours in case the machine gets unplugged, etc.
 
Flash-backed write cache has replaced battery-backed write cache, thankfully. With the G6/G7 HP series you can switch over any time. Thank god: when the batteries go bad and your RAID-5 drops from 1000k/sec to 100k/sec, your whole server starts failing. With super-capacitor cache in the drives themselves, caching on the controller seems wasteful, except you'll probably have a hybrid of drives (SSD/SAS/SATA) in a single cage.
 
Flash-backed write cache has replaced battery-backed write cache, thankfully. With the G6/G7 HP series you can switch over any time. Thank god: when the batteries go bad and your RAID-5 drops from 1000k/sec to 100k/sec, your whole server starts failing. With super-capacitor cache in the drives themselves, caching on the controller seems wasteful, except you'll probably have a hybrid of drives (SSD/SAS/SATA) in a single cage.

I'm finding that cache on the controller is still beneficial as its latency and bandwidth are better. Faster disks will allow for larger caches though. Imagine having 16TB of cache, for example! :awe:
 
Is this correct?:
SSDSA2xxxxxG3xx = Intel 320 series (25nm), 250/170 MB/s R/W
SSDSC2MH120A2K5, SSDSC2MH250A2K5 = Intel 510 series (34nm), 450/300 MB/s R/W, $280 for 120GB
These two 34nm-based drives seem cheap, as the current Intel 120GB (250/100) is priced at $230 😉

[So maybe the G3 320 series 120GB will be at ~$180? 🙂]
 
Also this chart, showing the possible impacts of cache on maximum latency (last one on the page): http://www.bit-tech.net/hardware/storage/2010/07/07/crucial-realssd-c300-256gb-ssd-review/6

That might be just one data point, but I believe all of the drives with well-controlled maximum latencies do have a DRAM cache. Correct me on this though.

I'd just get the Crucial right now (with firmware fixes and all), but unfortunately I need to wait for a sale and can't justify over $2/GB.
 