When will SSDs replace spinning disks for mass storage?

daxzy

Senior member
Dec 22, 2013
393
77
101
Primarily talking about NAS/Enterprise cold storage.

Right now the normal price for a consumer 1TB SSD is around $250, and the pricing for consumer 3/4TB 3.5" disks is $90/$125. So SSDs are still around 8x more expensive per TB, but that's come down a LOT in recent years. Extrapolating it out, consumer SSDs should reach the same $/TB as 3.5" spinning disks around 2020-2022.
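
For anyone who wants to sanity-check that extrapolation, here's the back-of-envelope math as a quick Python sketch. The annual decline rates are purely my own assumptions for illustration:

Code:
import math

# Back-of-envelope $/TB parity estimate. Prices are from the post above;
# the decline rates are assumptions, not measured data.
ssd_per_tb = 250.0  # $/TB: consumer 1TB SSD at ~$250
hdd_per_tb = 30.0   # $/TB: consumer 3TB disk at ~$90
ssd_decline = 0.40  # assumed annual $/TB decline for SSDs
hdd_decline = 0.05  # assumed annual $/TB decline for HDDs

# Solve ssd_per_tb * (1 - ssd_decline)**n == hdd_per_tb * (1 - hdd_decline)**n
n = math.log(ssd_per_tb / hdd_per_tb) / math.log((1 - hdd_decline) / (1 - ssd_decline))
print(f"$/TB parity in ~{n:.1f} years")  # ~4.6 years at these rates

A steeper or shallower NAND decline moves that crossover by years in either direction, which is the whole uncertainty.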

My biggest gripe with SSDs for mass storage purposes is actually the speed... it's unnecessarily fast. I just need disk space and really don't give a crap about R/W speeds, as long as they're > 100/50 MB/s (around decent eMMC level) in parity-2 configurations. I'd imagine I'm not alone in this, as most NAS drives seem to be 5400 rpm or an adaptive 7200 rpm configuration.

Would it be cheaper for manufacturers to make larger but slower SSDs? Or is that just irrelevant?
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Would it be cheaper for manufacturers to make larger but slower SSDs?

Well, there is 3D QLC NAND.....so, four bits per cell. This increases capacity per die by another 33% over 3D TLC.
 

BSim500

Golden Member
Jun 5, 2013
1,480
216
106
Primarily talking about NAS/Enterprise cold storage. Right now the normal price for a consumer 1TB SSD is around $250, and the pricing for consumer 3/4TB 3.5" disks is $90/$125. So SSDs are still around 8x more expensive per TB, but that's come down a LOT in recent years. Extrapolating it out, consumer SSDs should reach the same $/TB as 3.5" spinning disks around 2020-2022.

My biggest gripe with SSDs for mass storage purposes is actually the speed... it's unnecessarily fast. I just need disk space and really don't give a crap about R/W speeds, as long as they're > 100/50 MB/s (around decent eMMC level) in parity-2 configurations. I'd imagine I'm not alone in this, as most NAS drives seem to be 5400 rpm or an adaptive 7200 rpm configuration.

Would it be cheaper for manufacturers to make larger but slower SSDs? Or is that just irrelevant?
Realistically, I don't think they will "replace" them, at least not with current "flash"-type technology in the near future. "Extrapolating it out" doesn't work in a straight line any more than it does for modern CPU generational gains based on the late-90's annual +30% boosts. E.g., Samsung have been the leader in 3D NAND, yet prices of the 850 / 950 aren't exactly plummeting compared to competitors' previous 16nm MLC, nor are they rapidly stepping up the layers from 32/48 to 64/96/128/256, etc. Unless they do that, the process-node "reset gains" from the 3D-ness are a one-shot deal, not something that can be factored in annually in a compounding manner.

The biggest problem with using SSDs for cold storage is the inherent flaw in flash technology itself: it loses charge over time in an unpowered state 10-20x faster than a magnetic field on an HDD degrades, making it the least suitable storage option for mostly-powered-off cold storage of unchanging stale data. As cbn mentioned, QLC vs TLC improves cost at the expense of speed (as does TLC vs MLC), but the side effect is that it's far less durable in unpowered drives than MLC when facing voltage drift over time, with almost no margin between near-overlapping voltage states:

SLC = 2 voltage states and 1x voltage threshold (100% vs 0v)

MLC = 4 voltage states and 3x voltage thresholds (100%, 66%, 33%, 0v)

TLC = 8 voltage states and 7x voltage thresholds (100%, 86%, 71%, 57%, 43%, 29%, 14%, 0v)

QLC = 16 voltage states and 15x voltage thresholds (100%, 93%, 87%, 80%, 73%, 67%, 60%, 53%, 47%, 40%, 33%, 27%, 20%, 13%, 7%, 0v)
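
To be clear where those percentages come from: it's the same voltage window divided into ever more slices, since b bits per cell needs 2^b states and 2^b - 1 thresholds. A quick Python sketch of the pattern:

Code:
# b bits per cell -> 2**b voltage states separated by 2**b - 1 thresholds.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    states = 2 ** bits
    levels = ", ".join(f"{100 * i / (states - 1):.0f}%" for i in range(states - 1, -1, -1))
    print(f"{name}: {states} states / {states - 1} thresholds -> {levels}")

Each extra bit halves the margin between adjacent states, which is exactly why charge drift hurts QLC so much more than SLC/MLC.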

16nm TLC is already at the point where barely 3x atoms per voltage state are holding the charge, and endurance has fallen from 100,000 down to barely 1,000 P/E cycles. The problems we saw with Samsung 840s / BX200s falling below 50MB/s read speeds, from struggling to read back what they wrote without extreme error correction, were never actually fixed - merely worked around by constantly rewriting data to hide the issue. That's "acceptable" for system drives booted daily, but no good at all for drives powered off for months at a time. 40nm 3D-NAND TLC has improved durability back to 16nm MLC levels, but as mentioned, unless Samsung start upping the layers annually it's a one-shot deal, and 40nm 3D-QLC is likely to be the same or worse than what we saw with planar 16nm TLC in the 840s / BX200s...

Even today, the only SSD I'd even think about using for unpowered "cold storage" is the 850 PRO (40nm MLC), but its 13:1 cost vs 2TB HDDs makes that utterly pointless, especially since any serious backup strategy / NAS requires at least two drives. This is the number one point the "HDDs are dead solely because of a predicted future 1:1 pricing trend" crowd miss - you can get away with 1x SSD as a system drive, but anyone who runs a NAS or manually "staggers" 2-3 external backup drives A-B / A-B-C, etc, obviously needs more than one drive.

Unless there's some radical breakthrough that replaces flash with something similar that doesn't lose charge over time, I don't think even 10-16nm QLC will reach price parity before we're well into the realms of "junk durability" for unpowered offline storage.
 
Last edited:
  • Like
Reactions: cbn

whm1974

Diamond Member
Jul 24, 2016
9,436
1,569
126
Intel's and Micron's 3D XPoint may replace Flash in the near future, as it is more durable and denser.
 
Feb 25, 2011
16,992
1,620
126
For enterprise, it's coming pretty soon. (Enterprise HDDs were already pretty overpriced in terms of $/GB, and bolting together huge arrays of tiny HDDs for performance reasons isn't needed anymore. So for anything but cold storage, Flash is getting competitive, especially if your dataset/application will play nice with dedup/compression.) Another generation or two of NAND miniaturization or 3D-NAND stacking will probably give us all-Flash enterprise storage arrays that are cheaper than an equivalent number of TBs worth of spinners, especially if you consider the higher energy efficiency and the resulting lower TCO.
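
To make the dedup/TCO point concrete, here's a minimal sketch of $-per-usable-TB over a service life. Every figure in it (drive prices, dedup ratio, power draw, electricity cost) is a made-up placeholder, not vendor data:

Code:
# $-per-usable-TB over a service life. All inputs are illustrative placeholders.
def tco_per_usable_tb(drive_cost, raw_tb, dedup_ratio, watts,
                      years=5, kwh_price=0.12):
    usable_tb = raw_tb * dedup_ratio
    energy_cost = watts / 1000 * 8760 * years * kwh_price  # lifetime power, $
    return (drive_cost + energy_cost) / usable_tb

# Hypothetical enterprise SSD: pricey raw TB, dedup/compression-friendly, low power.
print(f"SSD: ${tco_per_usable_tb(800, 1.6, 3.0, 6):.0f}/usable TB")  # ~$173
# Hypothetical enterprise HDD: cheap raw TB, little dedup benefit, more power.
print(f"HDD: ${tco_per_usable_tb(250, 8.0, 1.0, 8):.0f}/usable TB")  # ~$37

With these placeholder numbers the spinner still wins, which is that "generation or two" gap; crank up the dedup ratio or drop NAND $/GB and the lines cross.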

For individual/home users with data/media/backup drives and homebrew NAS units, where power bills don't matter, and HDDs are $30/TB instead of $100/TB, I think 3.5" spinning rust will last a bit longer, but data center tech will eventually trickle down. (And the HDDs will hopefully eventually become unavailable.)
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Regarding individual/home users, where power bills don't matter as much......I wonder if we will see some of the current leading-edge, high-performance data center hard drive tech trickle down?

This assumes SSDs obsolete 15,000 rpm SAS (2.5") hard drives first, then 10,000 rpm SAS (2.5"), then (7200 rpm?) 5400 rpm 2.5".....then finally 3.5".

Or is making those 15K and 10K spinners still rather expensive?

P.S. I am assuming the SAS interface could be replaced with a SATA interface easily enough.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
when they're cheaper to make and sell.

Part of the reason I asked that question was because apparently WD Velociraptors were being used in servers even though they were consumer drives.

.....And I am wondering if WD couldn't drop the price on them because of its market segmentation strategy.

 
Last edited:

Elixer

Lifer
May 7, 2002
10,371
762
126
Regarding individual/home users, where power bills don't matter as much......I wonder if we will see some of the current leading-edge, high-performance data center hard drive tech trickle down?

This assumes SSDs obsolete 15,000 rpm SAS (2.5") hard drives first, then 10,000 rpm SAS (2.5"), then (7200 rpm?) 5400 rpm 2.5".....then finally 3.5".

Or is making those 15K and 10K spinners still rather expensive?

P.S. I am assuming the SAS interface could be replaced with a SATA interface easily enough.
Part of the reason why the higher-RPM units aren't cheaper is that they don't make enough of the motors to drive the cost down.
All the other tech in the HDDs is pretty much the same, except for the firmware.

The enterprise guys want lower power & cooling bills; that is the main driving factor for replacing a large number of spinners with SSDs (or other NAND-based products). But the vast majority still use spinners, because the cost curve isn't quite there yet to replace all spinners with SSDs.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Part of the reason why the higher-RPM units aren't cheaper is that they don't make enough of the motors to drive the cost down.
All the other tech in the HDDs is pretty much the same, except for the firmware.

Maybe they can get to 10,000 rpm (in a 2.5" form factor) with a standard 5400 rpm motor by using helium?

AFAIK, we haven't seen helium in a 2.5" drive yet.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Intel's and Micron's 3D XPoint may replace Flash in the near future, as it is more durable and denser.

False. 3D XPoint slots in between DRAM and Flash in performance/durability/price. Its density advantage is said to be 10x over DRAM, partially cancelled out by yield issues and the high cost of production. Vertically stacked flash has a 3x density advantage.

If they radically overhaul the OS and applications, and if it succeeds in being mass-produced, perhaps in the future we'll see some low-capacity, high-performance systems replace DRAM and Flash with it. I can think of phones and IoT devices.
 
Feb 25, 2011
16,992
1,620
126
Regarding individual/home users, where power bills don't matter as much......I wonder if we will see some of the current leading-edge, high-performance data center hard drive tech trickle down?

Never.

How old are you? I had a 15k SCSI drive in my desktop computer (an old Power Mac) like 20 years ago, with a 4ms average seek time. (That statement may not be entirely true until 2018 or so... I don't remember the exact chronology.) This performance-oriented spinning-rust stuff has been tapped out for decades.*

Nerds'll do it (when I was in undergrad, back in the Pleistocene, a lot of the computer science majors kitted out their desktops with server-grade Ultra-SCSI controllers and 10k or 15k drives in RAID-0.)

But for the most part, it's never been something most consumers were interested in. They're noisy and power-hungry, and capacity is always small for the price. The performance improvement vs. "normal" desktop drives (~1/2 the access time and ~2-3x the sequential throughput) isn't enough to compensate for the downsides.

By the time laptops and their 4200rpm HDDs started outselling desktops around 2005, the fad was over. HEDT is a niche market.

*Also, cars that existed 50 years ago could do a <5s 0-60 and an 11-second quarter mile. Brute force works.

Part of the reason I asked that question was because apparently WD Velociraptors were being used in Servers even though they were consumer drives.

.....And I am wondering if WD couldn't drop the price on them because of its market segmentation strategy.

The Velociraptors basically were/are enterprise drives w/ consumer interfaces, like you're thinking of.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Actually, on second thought, I can see flash replacing HDDs. The consumer PC market is essentially 200+ million sales of "zombie" computers; it's entirely secondary to phones. It'll die, and drag down everyone involved with it.

Phones are all Flash. Heck, the future of the PC may be all Chromebook-like ultra-cheap systems, and those can get away with 32GB of flash.
 
Feb 25, 2011
16,992
1,620
126
Maybe they can get to 10,000 rpm (in a 2.5" form factor) with a standard 5400 rpm motor by using helium?

AFAIK, we haven't seen helium in a 2.5" drive yet.
That's not how helium works. It allows the r/w heads to be closer to the platter, which improves data density. This can improve sequential speed, because you read more data per revolution, but spinning the platter faster would actually counteract the storage-density effect, and probably be a net meh.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
That's not how helium works. It allows the r/w heads to be closer to the platter, which improves data density. This can improve sequential speed, because you read more data per revolution, but spinning the platter faster would actually counteract the storage-density effect, and probably be a net meh.

The Seagate Barracuda Pros (helium-filled, high-capacity 3.5" consumer drives) are currently 7200 rpm (rather than the 5900 rpm we normally see with non-helium high-capacity 3.5" drives), so there must be some kind of diminishing returns on placing the r/w heads closer to the platter.

If so, then maybe we will see 7200 rpm (rather than 10,000 rpm) helium 2.5" drives.

^^^^^ I'd be interested in that for a laptop, particularly if it had some Intel Optane memory as cache.
 

daxzy

Senior member
Dec 22, 2013
393
77
101
The biggest problem with using SSDs for cold storage is the inherent flaw in flash technology itself: it loses charge over time in an unpowered state 10-20x faster than a magnetic field on an HDD degrades, making it the least suitable storage option for mostly-powered-off cold storage of unchanging stale data.

Sorry, I didn't mean cold storage (where the drive is powered off). I meant cool storage, where the data is available but unlikely to be overwritten and may only be read occasionally, so it's placed on your slowest storage tier.

Use cases for home include media servers (you have a lot of movies/music/pictures that need massive amounts of storage, but you don't need heavy R/W). From the enterprise/cloud perspective, it's about equivalent to the scenario Facebook has with media (users upload a lot of content, but let's be honest, no one looks at grandma's pictures).

The enterprise guys want lower power & cooling bills; that is the main driving factor for replacing a large number of spinners with SSDs (or other NAND-based products). But the vast majority still use spinners, because the cost curve isn't quite there yet to replace all spinners with SSDs.

At work, we've done an analysis of SSDs vs 3.5" 7.2K rpm drives for mass storage in terms of power consumption. Idle power consumption heavily favors SSDs, but even in a cool-storage scenario for enterprise/cloud, we're finding that there is almost always a read cycle going on for each drive (we're using parity-2). With that in mind, the watts per TB actually favor 7.2K rpm drives.

For instance, an EVO 850 1TB has a read power consumption of around 4W, whereas an 8TB Seagate 7200 Enterprise has a read power consumption of 8W. The 3.5" spinning disk actually has a 4x lead in TB/watt when doing read cycles. For home use, I'd imagine the drives would be sitting idle (the EVO 850 1TB has 50mW idle power consumption compared to around 6W for the Seagate 8TB Enterprise). If we repeated the experiment with an actual enterprise SSD, like the Intel DC S3710 1.2TB, it'd be worse, because that one draws about 6W active (the Seagate 8TB Enterprise would have a 5x lead in TB/watt).
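
For anyone who wants to check that arithmetic, here it is as a quick Python sketch using the same figures quoted above:

Code:
# TB-per-watt from the power figures quoted above.
drives = {
    "Samsung 850 EVO 1TB (read, 4W)":    (1.0, 4.0),  # (capacity TB, watts)
    "Seagate 8TB Enterprise (read, 8W)": (8.0, 8.0),
    "Intel DC S3710 1.2TB (active, 6W)": (1.2, 6.0),
}
for name, (tb, watts) in drives.items():
    print(f"{name}: {tb / watts:.2f} TB/W")
# 1.00 vs 0.25 TB/W -> ~4x lead for the spinner over the EVO;
# 1.00 vs 0.20 TB/W -> ~5x lead over the S3710.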
 

whm1974

Diamond Member
Jul 24, 2016
9,436
1,569
126
False. 3D XPoint slots in between DRAM and Flash in performance/durability/price. Its density advantage is said to be 10x over DRAM, partially cancelled out by yield issues and the high cost of production. Vertically stacked flash has a 3x density advantage.

If they radically overhaul the OS and applications, and if it succeeds in being mass-produced, perhaps in the future we'll see some low-capacity, high-performance systems replace DRAM and Flash with it. I can think of phones and IoT devices.
Keep in mind the tech is new, and new tech is always costly at first. 3D XPoint can also be used as a PCIe NVMe SSD - no need to radically overhaul the OS or applications.
 

BSim500

Golden Member
Jun 5, 2013
1,480
216
106
Use cases for home include media servers (you have a lot of movies/music/pictures that need massive amounts of storage, but you don't need heavy R/W).
The ultimate bottleneck for all that stuff, though, is both the speed at which you play it back (you can only listen to music / watch movies so fast) and, for home media-server NAS units, the typical Gigabit network bandwidth (125MB/s). So in that case it all comes down to cost, which again, unless they start ramping up the number of vertical layers of 3D-NAND to 128-256 (it's taken two years just to go from 32 to 48 layers), means it's going to be a long time before $1,500 4TB SSDs fall anywhere near competing with sub-$150 4TB WD Reds, even using undesirable QLC. If anything, reading back large sequential files over a network capped at 125MB/s even for SSDs, and typically requested at a max rate of 40Mbps / 5MB/s, plays to an HDD's strength.
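
Putting rough numbers on that bottleneck (quick Python; the HDD sequential figure is my own ballpark assumption):

Code:
# Streams per Gigabit link at the 40Mbps per-stream figure mentioned above.
gige_mb_s = 1000 / 8   # Gigabit Ethernet ceiling, MB/s (~125)
stream_mb_s = 40 / 8   # one high-bitrate stream, MB/s (= 5)
print(f"Concurrent streams per GigE link: {gige_mb_s / stream_mb_s:.0f}")  # 25
# A single modern HDD (~150 MB/s sequential, rough assumption) can already
# saturate the whole link for this workload.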
 

daxzy

Senior member
Dec 22, 2013
393
77
101
The ultimate bottleneck for all that stuff, though, is both the speed at which you play it back (you can only listen to music / watch movies so fast) and, for home media-server NAS units, the typical Gigabit network bandwidth (125MB/s). So in that case it all comes down to cost, which again, unless they start ramping up the number of vertical layers of 3D-NAND to 128-256 (it's taken two years just to go from 32 to 48 layers), means it's going to be a long time before $1,500 4TB SSDs fall anywhere near competing with sub-$150 4TB WD Reds, even using undesirable QLC. If anything, reading back large sequential files over a network capped at 125MB/s even for SSDs, and typically requested at a max rate of 40Mbps / 5MB/s, plays to an HDD's strength.

One of our adjacent business teams is working on NBase-T, which should bring very affordable 2.5G and 5G Ethernet (over traditional Cat5e). But network capacity is already far ahead of standard streaming bitrates, so at best it'll just assist with file transfers.

Anyway, one thing that might be detrimental to spinning-disk economics is that as consumer operations wind down, manufacturers can't rely on that volume for scaling. So whereas NAND pricing appears to be on the decline, disk prices are staying mostly flat or maybe even going up.

http://www.anandtech.com/show/11037/seagate-to-shut-down-one-of-its-largest-hdd-assembly-plants