SSDs - When we hit the law of diminishing returns

ksec

Senior member
Mar 5, 2010
420
117
116
I asked this previously in the comments on SSD reviews, but there were no answers.

Currently, SandForce-based SSDs give us the best performance: a massive jump in read/write IOPS and transfer rates. As we move to the next generation with SATA 3.0, we should be able to double those numbers.

However, we have essentially nailed the major bottleneck in our PC systems in less than 3 years' time. While going from an HDD to a SandForce SSD feels like you bought a brand new computer, for the move from a SandForce SSD to, say, a RevoDrive, most of the reviews concentrate on numbers, with no mention of the actual perception of speed.

Do we need even faster SSDs? Will we see / feel the difference? Are we now bottlenecked by the OS, which has been programmed with slow-moving HDDs in mind for the past few decades? What applications need 1GB+/s read/write transfers? Where do the benefits drop off for consumers in terms of IOPS? Will response time become more important for SSDs than transfer rate?
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
If I hit the power switch on my PC and it is basically instantly ready for use (without hibernate), then I'm fully satisfied.
Yes, SSDs are much faster and boot times get pretty short compared to normal HDDs, but isn't instant-on what we want in the end?

My PC now probably boots faster than my mobile phone. And I don't have one of these "I'm also a coffee machine" phones. :D
 

LokutusofBorg

Golden Member
Mar 20, 2001
1,065
0
76
The larger picture of your question(s) has most of its implications in the enterprise space. Will I ever consider a PCIe SSD for my OS in my home computer? Only if it reaches price parity with SATA SSDs. The RevoDrives certainly are in the ballpark, but I don't like the no-TRIM thing. So I guess that's two conditions: price and TRIM.

If you shift to enterprise, then your questions almost aren't worth asking. Plenty of enterprise apps or systems are starved for IOPS and/or throughput that only high-end SANs or expensive (native) PCIe SSDs can deliver. There is a very clear need in that space.
 

Daemas

Senior member
Feb 20, 2010
206
0
76
Besides what has been said above me, I would say a major limiting factor (for instant power-on) is the motherboard, specifically the BIOS. Shit takes forever. Even with all the extra SATA/USB/onboard sound/IDE/FireWire/parallel/onboard NIC controllers that I don't use turned off, it still takes twice as long to run through the BIOS as to load the OS.
 

Zap

Elite Member
Oct 13, 1999
22,377
2
81
If I hit the power switch on my PC and it is basically instantly ready for use (without hibernate), then I'm fully satisfied.
Yes, SSDs are much faster and boot times get pretty short compared to normal HDDs, but isn't instant-on what we want in the end?

Use S3 sleep. I know that the HTPC crowd has been doing it for a while and notebooks do it, but I only recently fiddled with it on desktops to great success. I've tried it on several systems (socket 1156, 775 and an Atom ITX) and they draw around 1-2W from the wall in sleep according to my Power Angel (like a Kill-A-Watt). Waking it up basically takes as long as your monitor switching modes or turning on.

My PC now probably boots faster than my mobile phone. And I don't have one of these "I'm also a coffee machine" phones. :D

No kidding. The Blackberry I had for work (and the Palm Treo before it) took ages to boot, literally several minutes. My 6 year old Nokia candy bar phone takes less than 10 seconds from power up to being able to make a call (unless it is still acquiring signal).
 

ksec

Senior member
Mar 5, 2010
420
117
116
OK, let's look at instant-on. If we are referring to true instant-on from where we left off, then SSDs aren't the tech to bring it to us. We would need something like MRAM, where memory and storage are united in the same place and non-volatile. That is still another decade or more away.

Apart from the BIOS, the OS is another thing that needs rework. Intel has developed a Linux setup with a 1GB/s PCIe SSD that boots in less than 3 seconds. That is from cold boot to UI.

In terms of boot time, the SSD isn't the bottleneck. It is more the BIOS, OS loading, and scheduling.

Of course enterprise wants as many IOPS as it can get. But for consumers, we are already not seeing any benefit from going faster than a dual-SandForce solution.

For example, an app that takes 10s to start from an HDD might take only 5s on an SSD, but after doubling the SSD's speed we would still need 4s for it to start. So clearly there must be some other bottleneck.
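
To put numbers on it, here is a quick Python sketch (the 7s/3s split of the 10s HDD start is an assumption picked to reproduce the figures above, not a measurement):

Code:
# Amdahl's law applied to app startup: only the I/O share gets faster.
io, other = 7.0, 3.0   # assumed split of the 10s HDD start: 7s disk-bound, 3s CPU/other

def startup_time(storage_speedup):
    return io / storage_speedup + other

for speedup in (1, 3.5, 7, 1000):
    print(f"{speedup:>6}x faster storage -> {startup_time(speedup):.1f}s")
# 1x -> 10.0s, 3.5x -> 5.0s, 7x -> 4.0s, 1000x -> ~3.0s.
# The ~3s floor is the non-I/O work, which no storage speedup removes.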
 

razel

Platinum Member
May 14, 2002
2,337
90
101
If I hit the power switch on my PC and it is basically instantly ready for use (without hibernate), then I'm fully satisfied.

Try standby, without the hybrid hibernate. Instant-on from standby, with fans off, has been possible since roughly 2001; at least, that's when I was first satisfied with it, and I have been enjoying instant-on ever since.
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
Use S3 sleep. I know that the HTPC crowd has been doing it for a while and notebooks do it, but I only recently fiddled with it on desktops to great success. I've tried it on several systems (socket 1156, 775 and an Atom ITX) and they draw around 1-2W from the wall in sleep according to my Power Angel (like a Kill-A-Watt). Waking it up basically takes as long as your monitor switching modes or turning on.



No kidding. The Blackberry I had for work (and the Palm Treo before it) took ages to boot, literally several minutes. My 6 year old Nokia candy bar phone takes less than 10 seconds from power up to being able to make a call (unless it is still acquiring signal).

Well, I'd maybe have to Google for it, but with an SSD it's not really an issue, especially if fiddling is needed.
Yeah, old mobiles boot much faster, and that's enough for me. I don't need to surf, do mail, or other stuff with it. I'm next to a full-blown PC most of the time anyway (work, home). Luckily I don't need to travel a lot.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I don't think the boot time is a big deal... most of it is still the mobo (BIOS bootup)...

The great thing about SSDs is faster program installs (Windows installs fast, Windows updates install fast, programs install fast, games install fast...) and game level load times are blazing fast.

Games and programs will continue to increase in size, so continually increasing speed is beneficial.
 

FalseChristian

Diamond Member
Jan 7, 2002
3,322
0
71
I don't know why everyone is going gaga over SSD. They're storage capacity is much to small to make them useful. When we start seeing 1TB SSD that don't cost an arm and a leg then they will become mainstream. Until then 1-2TB regular hard-drives are the only way to go at the moment.
 

coolVariable

Diamond Member
May 18, 2001
3,724
0
76
I don't know why everyone is going gaga over SSD. They're storage capacity is much to small to make them useful. When we start seeing 1TB SSD that don't cost an arm and a leg then they will become mainstream. Until then 1-2TB regular hard-drives are the only way to go at the moment.

So wrong.
Have you ever seen an SSD in action?
 

jimhsu

Senior member
Mar 22, 2009
705
0
76
I don't know why everyone is going gaga over hard drives. They're storage capacity is much to small to make them useful (compared to tape drives and floppies). When we start seeing 10MB HDDs that don't cost an arm and a leg then they will become mainstream. Until then several 1MB tapes or floppies are the only way to go at the moment.

Fixed.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I don't know why everyone is going gaga over SSD. They're storage capacity is much to small to make them useful. When we start seeing 1TB SSD that don't cost an arm and a leg then they will become mainstream. Until then 1-2TB regular hard-drives are the only way to go at the moment.

For the love of science it is "THEIR"

Their = Something that belongs to them. Their capacity, their house, their anger, their spelling mistakes.
They're = They are. They're wrong, They're the all state champions, They're incapable of telling the difference between their and they're.

Now, as for your argument... You have obviously never used an SSD... My 80GB Intel G2 was the best upgrade I have ever gotten... The boot time speedup was small and irrelevant, but Windows installs in 1/3rd the time, Windows updates install in a fraction of the time, programs install in a fraction of the time, games install in a fraction of the time, games load levels in a tiny fraction of the time...
It has actually made games with painfully long loading screens fun again (e.g. Neverwinter Nights 2).

They don't need to be mass storage devices. My gaming machine has an 80GB SSD + 640GB spindle HDD. And I have a NAS (via GigE with jumbo frames) running RAID-Z2 (ZFS's RAID6) on 5x750GB spindle drives...

SSDs are already mainstream as a complementary system/games drive used alongside spindle drives (storage)... When you can get large SSDs cheaply is when spindle drives will finally be KILLED by SSDs and nobody will be producing them anymore (since they will have become completely obsolete).
 

frostedflakes

Diamond Member
Mar 1, 2005
7,925
1
0
I don't know why everyone is going gaga over SSD. They're storage capacity is much to small to make them useful. When we start seeing 1TB SSD that don't cost an arm and a leg then they will become mainstream. Until then 1-2TB regular hard-drives are the only way to go at the moment.

Using an SSD for mass storage is just moronic. All it needs to be large enough to hold is your OS, programs, and games. For most people, 60-120GB is probably enough unless you have a ton of games installed. I can get by pretty well with a 60GB SSD, although 120/128GB would have been ideal if it had been within my budget. Large files (music, movies, backups, etc.) are stored on a 5400RPM drive, because they don't benefit from the higher IOPS provided by 7200RPM drives and SSDs. SSDs are good for speed and HDDs for space. Smart people use both and have the best of both worlds.
 

Eeqmcsq

Senior member
Jan 6, 2009
407
1
0
It never hurts to push technology forward and have more speed. There's always going to be somebody who needs it, usually a heavy-use workstation or server. But for the average consumer, I do think that any SSD nowadays is "quick" enough for daily usage. So all that means is that those average consumers who won't see a perceptible difference in their daily activities can stick to the "value" segment and look for the best price-per-capacity drives.

Personally, I think the next bottleneck for the PC is Internet speed. I'd like to see that pushed up, so I can enjoy higher-quality live sports streams and don't have to guess what a player is trying to do while watching a low-quality, low-resolution stream.
 

ksec

Senior member
Mar 5, 2010
420
117
116
Games and programs will continue to increase in size, so continually increasing speed is beneficial.
Ah, something I didn't consider before. Most of today's programs are large because they include very pretty graphics, multi-language interfaces, and help files. Otherwise, we are actually seeing the trend reversed, as people demand less bloated, fast, and efficient programs (uTorrent, for example).

Internet speed is not a problem on my side of the world, where I can get 1000Mbps fairly cheap, with local download speeds exceeding my HDD's write speed. (sigh)

So we are back to software, and maybe the CPU, as the possible limitation. I remember Intel saying antivirus software that was previously bound by HDD speed is now bound by CPU speed on an SSD.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Ah, something I didn't consider before. Most of today's programs are large because they include very pretty graphics, multi-language interfaces, and help files. Otherwise, we are actually seeing the trend reversed, as people demand less bloated, fast, and efficient programs (uTorrent, for example).

While it's true that people look for faster, less bloated software, they look for the feature set as well (typically with a higher priority).

There are also a variety of ways in which utilizing extra storage can increase performance overall. Compression requires more CPU (and time) to decompress; if space is cheap enough, you can have uncompressed graphics and audio for higher quality AND lower CPU/RAM usage.
Another example is the rainbow table: http://en.wikipedia.org/wiki/Rainbow_table
There are other uses for space like these that can increase performance.
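
A toy Python illustration of that space-for-time trade (this is the naive precomputed-lookup version; real rainbow tables compress the table with hash chains, but the principle is the same):

Code:
import hashlib
from itertools import product
from string import ascii_lowercase

# Spend ~17k entries of storage once: hash -> plaintext for every 3-letter password.
table = {hashlib.md5(''.join(p).encode()).hexdigest(): ''.join(p)
         for p in product(ascii_lowercase, repeat=3)}

# Reversing a hash is now a single lookup instead of a brute-force search.
target = hashlib.md5(b"ssd").hexdigest()
print(table[target])   # -> 'ssd'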

I do love my uTorrent; it is highly optimized to be both faster and smaller, due in large part to the goals (and skill) of its creators.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
SSDs are expensive if you compare them with HDDs. But it would make more sense to compare them with the price of your RAM; then we see SSDs are roughly 6 times less expensive.

The real question is when NAND or other SSD technology will get a native interface to the chipset/DRAM or the CPU directly. The Serial ATA interface is not really suitable for the parallel architecture of NAND. Using NAND DIMMs, much like DRAM memory, might be possible in the future.

So a future image of memory in your system could be:

L1 Cache: 100GB/s
L2 Cache: 50GB/s
L3 Cache: 40GB/s
RAM: 10GB/s
SSD: 1GB/s
HDD: 100MB/s

Note that I used throughput, while latency would have been more appropriate. The HDD can do 100MB/s in sequential transfers, but under 1MB/s for heavily random I/O.

The SSD has a very high capacity compared to its performance level. Imagine buying 60GB of RAM for just over 100 dollars.
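
As a rough sketch of that comparison (the per-GB prices are ballpark assumptions, not quotes):

Code:
ssd_per_gb = 2.0    # assumed: ~60GB consumer SSD around $120
ram_per_gb = 12.0   # assumed: DDR3 around $50 per 4GB kit

print(f"RAM costs ~{ram_per_gb / ssd_per_gb:.0f}x more per GB than SSD")
# -> ~6x, the "roughly 6 times less expensive" figure above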

Advanced systems like ZFS effectively use SSDs as memory when configured as an L2ARC or 'cache' device. With a native interface, their speeds could exceed that of DRAM, I believe.
 

ksec

Senior member
Mar 5, 2010
420
117
116
RAM: 10GB/s

Single-channel DDR3-1333 already exceeds 10GB/s, not to mention 1600 and 1800.
And most systems nowadays are equipped with dual-channel memory.
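
For reference, the peak number falls straight out of transfer rate times bus width:

Code:
# DDR3-1333 peak theoretical bandwidth (not measured throughput)
transfers_per_sec = 1333e6   # 1333 MT/s
channel_bytes = 8            # 64-bit channel

bw = transfers_per_sec * channel_bytes / 1e9
print(f"single channel: {bw:.1f} GB/s, dual channel: {2*bw:.1f} GB/s")
# -> 10.7 GB/s and 21.3 GB/s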

And why is a serial interface no good for a parallel architecture? PCIe is a serial interface as well. Most of today's interfaces are moving to a serial architecture; why isn't it good for SSDs?
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Single-channel DDR3-1333 already exceeds 10GB/s, not to mention 1600 and 1800.
Well, I wasn't trying to get the numbers exactly right, just to indicate the scale of the various system components. Not many SSDs do 1GB/s either; only those with a PCIe interface.

The memory speed of an average system is about 10GB/s, I think. Modern Core i7 CPUs start at 12 and end at 24 according to this review, so I think taking 10GB/s as an average for memory throughput is not that bad. You would get a totally different picture if you compared latencies, though.

And why is a serial interface no good for a parallel architecture? PCIe is a serial interface as well. Most of today's interfaces are moving to a serial architecture; why isn't it good for SSDs?
Because it adds a lot of complexity and was built entirely around a serial device like the HDD, which can only do one thing at a time on its physical medium. Serial ATA/300 did add mandatory NCQ (Native Command Queueing) support, which allows sending up to 32 I/O requests to an HDD or SSD, but this was invented to reduce HDD seek times by rearranging the I/O requests into a 'quicker path'. It was not invented with truly parallel storage devices like SSDs in mind.

Thus, Serial ATA was great for low-performance storage devices, but it adds complexity, and thus overhead, on high-performance storage devices like SSDs. This becomes more apparent when you issue many small requests.
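
A toy model of why queued I/O matters on a device with internal parallelism (the channel count and latency here are made up for illustration):

Code:
import math

def batch_time_ms(in_flight, channels=8, latency_ms=0.1):
    # requests in flight spread across channels; each channel works serially
    return math.ceil(in_flight / channels) * latency_ms

total = 32
for qd in (1, 4, 32):   # how many requests the host keeps outstanding
    t = (total // qd) * batch_time_ms(qd)
    print(f"QD{qd:>2}: {t:.1f} ms for {total} requests")
# QD 1: 3.2 ms, QD 4: 0.8 ms, QD32: 0.4 ms
# A deeper queue is what lets an SSD's parallel channels actually get used;
# a single mechanical head gains far less from the same queueing.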

Filesystems would also benefit greatly from handling NAND devices differently from mechanical HDDs; they often have optimizations that make sense on HDDs but much less so on SSDs. An intelligent filesystem could also write to different blocks to give the SSD an easier job, so it degrades little in performance, if at all; this is what so-called copy-on-write filesystems do.
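
A minimal sketch of the copy-on-write idea (a hypothetical model, not any real filesystem): never overwrite a live block in place; write a fresh block and repoint the mapping:

Code:
class CowDevice:
    def __init__(self):
        self.blocks = []    # physical blocks, append-only
        self.map = {}       # logical block number -> physical index

    def write(self, lbn, data):
        self.blocks.append(data)               # always a fresh block, never in place
        self.map[lbn] = len(self.blocks) - 1   # repoint; the old block becomes garbage

    def read(self, lbn):
        return self.blocks[self.map[lbn]]

dev = CowDevice()
dev.write(0, "v1")
dev.write(0, "v2")              # no read-modify-write of a live NAND block
print(dev.read(0), dev.blocks)  # v2 ['v1', 'v2'] -- 'v1' awaits garbage collection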

Perhaps we'll see motherboards with slots where you can insert NAND chips, with the NAND controller in the chipset. This is not going to happen anytime soon, though, and NAND will likely be replaced by another technology in the future. But that does not change the fact that solid-state storage is starting to replace mechanical storage, and we need to rewrite the parts of both software and hardware that were specifically tuned towards mechanical storage and are now hurting performance on solid-state devices.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Single-channel DDR3-1333 already exceeds 10GB/s, not to mention 1600 and 1800.
And most systems nowadays are equipped with dual-channel memory.

And why is a serial interface no good for a parallel architecture? PCIe is a serial interface as well. Most of today's interfaces are moving to a serial architecture; why isn't it good for SSDs?

Eh, they seem to me to be constantly moving back and forth between parallel and serial... every time they make a revolutionary enough advancement that a single-lane serial connector can replace the previous parallel one, a parallel connector based on the same tech gets implemented later on. But sometimes not.

Intel's Light Peak, for example, is parallel (4 separate laser wavelengths multiplexed), and I have read about some work on making a parallel implementation of SATA, etc.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Intel's Light Peak, for example, is parallel (4 separate laser wavelengths multiplexed), and I have read about some work on making a parallel implementation of SATA, etc.
I was sure the 10Gbps numbers were achieved without WDM? But anyway, WDM is not quite the same as parallel channels as in PATA, for example.

Actually, the biggest problem with parallel channels is that they add lots of complexity at the analog level, which only gets worse at higher speeds.
 

LokutusofBorg

Golden Member
Mar 20, 2001
1,065
0
76
The real question is when NAND or other SSD technology will get a native interface to the chipset/DRAM or the CPU directly. The Serial ATA interface is not really suitable for the parallel architecture of NAND. Using NAND DIMMs, much like DRAM memory, might be possible in the future.

So a future image of memory in your system could be:

L1 Cache: 100GB/s
L2 Cache: 50GB/s
L3 Cache: 40GB/s
RAM: 10GB/s
SSD: 1GB/s
HDD: 100MB/s

Fusion-io is already doing this with their SSD cards; their controller is a native PCIe-to-NAND controller, whereas the OCZ PCIe cards have traditional RAID and SATA controllers. I heard Micron has either a full team or multiple teams of engineers working on a native PCIe solution. The next big push in SSD tech is going to be fun to watch.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Actually, the biggest problem with parallel channels is that they add lots of complexity at the analog level, which only gets worse at higher speeds.

Exactly, and this is why parallel is only bothered with when it is absolutely necessary... whenever you can get away with a serial design, you use it. Hence the back and forth: we get bandwidth-hungry, so we go parallel; then some breakthrough or other increases speed a lot, and we switch back to serial...

BTW, Intel is multiplexing 4 lasers for the 10Gb/s.
You are right that it is not exactly parallel, since it's only one "cable"... it's a bit harder to say clearly whether it's one or the other.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Exactly, and this is why parallel is only bothered with when it is absolutely necessary... whenever you can get away with a serial design, you use it. Hence the back and forth: we get bandwidth-hungry, so we go parallel; then some breakthrough or other increases speed a lot, and we switch back to serial...

BTW, Intel is multiplexing 4 lasers for the 10Gb/s.
You are right that it is not exactly parallel, since it's only one "cable"... it's a bit harder to say clearly whether it's one or the other.
Actually, they reach the 10Gb/s with one cable and one diode (and one wavelength), if I read their research site and the AT article correctly. I assume you got the idea of the 4 lasers from a picture on the research site, where they state "Light Peak module with four fibers each capable of carrying 10Gb of data per second." (src)


And you're right insofar as, while parallel transfer of analog data is more or less unreasonable, WDM should work just fine for optical data, and in a sense that's parallel data transfer.