RAM vs SSD

tdg84

Banned
May 10, 2012
13
0
0
Sorry in advance for asking this type of question, it's probably gonna cause a lot of debate or people will call me bad names D:

But looking at the speeds of RAM in terms of transfer rates, they appear to be faster than any SSD transfer speeds. So why can't they make a lot more RAM in a computer and use the RAM as a hard drive? Sure, you would still need a hard drive or SSD to store the data, cause I know RAM is volatile and loses data when it loses power.

If you had enough RAM and used it like a hard drive, wouldn't transfer rates be sky high? And the RAM would still need to write to the hard drive to save space, but is this at all possible? Am I wrong in my thinking, or is there something I am missing about how this all works?
 

kbp

Senior member
Oct 8, 2011
577
0
0
Well to start how do you plan to keep power to it so it saves data?
 

Zap

Elite Member
Oct 13, 1999
22,377
7
81
There are a number of factors involved. First thing is how much RAM can your system take? Mine maxes out at 16GB, which isn't quite enough for a boot drive let alone leaving a few gigs free to actually operate with.

Second is cost, with RAM costing around $5/GB, versus SSDs approaching 50¢/GB and cheaper big HDDs at 5¢/GB.

Third is not just volatility of the storage, but needing time to copy everything in if you ever reboot.

Fourth is that I don't think (but could be wrong) any current RAM drive software supports running an OS from it.

Fifth, whatever theoretical bandwidth/speed the RAM can run at will actually be less in real life because the OS will be using some of it during normal operations.

So, you can spend $320 on top of whatever minimum cost required for a socket 2011 setup with 8 RAM slots just to get 64GB of RAM in the system. Knock off 8GB for normal use and you can end up with a 56GB RAM drive. If you can somehow run the OS from it, you will still need a small SSD to store it on.

Alternatively you can use whatever you have right now, and spend $320 on four fast SATA 6G 120GB SSDs. Run them in RAID0 and have 480GB of non-volatile and super fast storage. Four Vertex 4 128GB SSDs in RAID 0 get 1527MB/s read speeds. Is that not fast enough for you? PCIe SSD cards can break 2000MB/s for a bit more dough.
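The budget math above works out like this (a throwaway Python sketch using the rough 2012-era per-GB prices quoted; the numbers are illustrative, not current):

```python
# Rough cost comparison using the ballpark prices from this post:
# RAM ~$5/GB, SSD ~$0.50/GB, big HDD ~$0.05/GB.
PRICE_PER_GB = {"RAM": 5.00, "SSD": 0.50, "HDD": 0.05}

def gigabytes_for(budget, medium):
    """How much storage a given dollar budget buys on each medium."""
    return budget / PRICE_PER_GB[medium]

# The same $320 buys 64GB of RAM, but 640GB of SSD or 6.4TB of HDD.
print(gigabytes_for(320, "RAM"))  # 64.0
print(gigabytes_for(320, "SSD"))  # 640.0
print(gigabytes_for(320, "HDD"))  # 6400.0
```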
 

Eeqmcsq

Senior member
Jan 6, 2009
407
1
0
It's an interesting idea, and yes, if you could run your entire boot partition off of RAM, read/write ops would be ridiculously fast.

But it's impractical, because when you first boot up, you'd need to copy your entire boot partition from your permanent HDD/SSD storage to RAM. Then when you shut down, you'd need to copy your boot partition from RAM back onto the HDD/SSD. Then there's the risk of power failure, where you'd lose all the new stuff stored on your RAM copy of your boot partition.

However, I have done something like this before: When I want to test a Linux distro, I use a virtual machine, and I create a 6GB virtual HDD and store the virtual HDD file in RAM. Then I install the Linux distro on this virtual HDD. And despite being in a VM, the virtual HDD reads/writes pretty quickly.

When I'm done testing, I shut down the VM and copy the virtual HDD file to my HDD or SSD, so that I still have the installation when I want to come back and do some more tests.
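A quick way to see the gap that makes this VM trick worthwhile is to time the same write against a RAM-backed directory and a disk-backed one. This is a rough Python sketch; it assumes a Linux host where /dev/shm is a RAM-backed tmpfs mount, which is the standard setup:

```python
import os
import tempfile
import time

def time_write(directory, size_mb=32):
    """Write size_mb MB of random data to a temp file in `directory`
    and return the elapsed wall-clock seconds (fsync included)."""
    chunk = os.urandom(1024 * 1024)  # 1MB of incompressible data
    start = time.perf_counter()
    with tempfile.NamedTemporaryFile(dir=directory) as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force the data out of buffers
    return time.perf_counter() - start

# Storing a VM's virtual disk file under /dev/shm gives the speedup
# described above: time_write("/dev/shm") is typically far lower than
# time_write() against a directory on the HDD.
```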
 

ALIVE

Golden Member
May 21, 2012
1,960
0
0
There are a number of factors involved. First thing is how much RAM can your system take? Mine maxes out at 16GB, which isn't quite enough for a boot drive let alone leaving a few gigs free to actually operate with.

Second is cost, with RAM costing around $5/GB, versus SSDs approaching 50¢/GB and cheaper big HDDs at 5¢/GB.

Third is not just volatility of the storage, but needing time to copy everything in if you ever reboot.

Fourth is that I don't think (but could be wrong) any current RAM drive software supports running an OS from it.

Fifth, whatever theoretical bandwidth/speed the RAM can run at will actually be less in real life because the OS will be using some of it during normal operations.

So, you can spend $320 on top of whatever minimum cost required for a socket 2011 setup with 8 RAM slots just to get 64GB of RAM in the system. Knock off 8GB for normal use and you can end up with a 56GB RAM drive. If you can somehow run the OS from it, you will still need a small SSD to store it on.

Alternatively you can use whatever you have right now, and spend $320 on four fast SATA 6G 120GB SSDs. Run them in RAID0 and have 480GB of non-volatile and super fast storage. Four Vertex 4 128GB SSDs in RAID 0 get 1527MB/s read speeds. Is that not fast enough for you? PCIe SSD cards can break 2000MB/s for a bit more dough.
Actually, Windows' ramdrive.sys allows you to make a RAM drive of up to 500MB (some say 483MB is the true limit). You make an image, and when you boot the computer it creates the RAM drive, loads the image into it, and then continues booting into Windows. On the internet I found instructions for doing this with Windows XP, but you need some files from other Windows versions :-(

You also have the option, when booting from the RAM drive, to show or hide the actual boot drive; the RAM drive takes the drive letter C:. People report boot times of around 5 minutes to load a 483MB image into the RAM drive. Imagine how much lower it could be with an SSD.

Drawbacks:
the size of the RAM drive
changes cannot be saved to the RAM drive

Pros:
the best speed you can get :)
 

Blain

Lifer
Oct 9, 1999
23,643
3
81
Sorry in advance for asking this type of question, it's probably gonna cause a lot of debate or people will call me bad names D:

But looking at the speeds of RAM in terms of transfer rates, they appear to be faster than any SSD transfer speeds. So why can't they make a lot more RAM in a computer and use the RAM as a hard drive? Sure, you would still need a hard drive or SSD to store the data, cause I know RAM is volatile and loses data when it loses power.

If you had enough RAM and used it like a hard drive, wouldn't transfer rates be sky high? And the RAM would still need to write to the hard drive to save space, but is this at all possible? Am I wrong in my thinking, or is there something I am missing about how this all works?
You can already use up to 192GB of memory with Windows 7 Pro or Ultimate...
How much more do you need anyway? o_O
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
The pragmatic answers have been given. So, here's an attempt at the why. Believe it or not, it's pretty simple today, thanks in large part to AMD and Microsoft. You can easily Google physical RAM disks and uses for in-memory RAM disks, though, so instead here's why it wasn't simple before, and why RAM disks had more use in yesteryear, in an apology-like format.

Disclaimer: I'm bored, and waiting for a place that I need to make an appointment at ASAP to open for business :).

Today, several things make modern computer usage easy enough that RAM disks are not worth the time or hassle, except for special uses:
0. RAM disks are volatile.
1. SSDs are cheap enough.
2. Reasonably fast read-caching RAID cards are cheap enough (thank you LSI and Dell!).
3. RAM is cheap, relative to anyone's ability to use it.
4. All memory is OS-mappable memory, again. Yippee!

--

0. More an assumption of use than reason for their demise, RAM data goes bye-bye when it is not refreshed soon. This means you either need to save it off somewhere, eating up shutdown/bootup time, or have it battery-backed, and not leave the computer turned off for too long.

1. Anyone with a good SSD is at a point of diminishing returns. Your CPU, and other computer devices, are slowing down your boot time more than the SSD. While some folks have managed great optimized boot times, you'll note that many SSD users get very little benefit. It's not because the SSD is slow, but because of other devices on the system, and services being initialized at boot-time. SSDs have marginalized the need for RAM disks. RAID 0 can be used to speed up multiple SSDs.

2. If RAID 0 or single disk won't cut it, and you have the kind of need and budget that in prior years would push you to a specialized device, or PAE-range RAM disk, you can buy an LSI or Areca RAID controller, and get benefits from that. LSI's (Dell branded ones being common and affordable) are good enough and cheap enough that almost anyone with a real workstation budget can afford good RAID, if they need it.

All those GB/s don't buy you that much, if you're waiting more on other system bottlenecks (including the human nervous system).

3. OSes cache files in RAM. Windows, up through XP, still had file cache subsystems from the NT days, when 32MB was huge, so it was kind of aggressive at getting rid of caches. Vista had that revamped, so Windows now works like *n*xes, keeping files around as long as reasonably possible. Accesses to these files only incur occasional access time updates, and barely touch any disk, except when modified a lot. This, combined with mainstream support for command queuing, previously a workstation/server-only storage feature, has allowed lots of cheap RAM to make up for much faster storage, like RAM disks, in the majority of cases. On my desktop, for instance, I commonly see 3-5GB of RAM being used just for file caching, and most of the time I'm working in that cache. That's one of the reasons I don't see much value in an SSD on my desktop. Don't let anyone tell you that you only need 4GB of RAM. Those people haven't had the experience of using more for long periods of time. It's awesome, with a suitable OS (Windows Vista or newer, or any Linux or FreeBSD from the last 8-10 years).
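The file cache effect is easy to observe yourself. A hedged Python sketch (the file path is a placeholder for any large file on disk; actual timings depend on the OS and hardware):

```python
import time

def timed_read(path):
    """Read a file end to end in 1MB chunks and return the elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1 << 20):
            pass
    return time.perf_counter() - start

# First read of a large file comes from disk; the OS then keeps it in its
# page cache, so a second read is typically served straight from RAM:
#
#   cold = timed_read("some_big_file.bin")  # hits the disk
#   warm = timed_read("some_big_file.bin")  # served from the file cache
#
# On any modern OS, `warm` is usually a small fraction of `cold`.
```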

A RAM disk in an older Windows version was quite often used as a hack around the so-so file caching. Also, swap isn't so necessary today, with tons of RAM, so it can lazily be a backing cache, 99% of the time*. Way back when, you would face situations where you couldn't buy enough system RAM at any price, and had to find ways around that. Once your mobo was maxed out, then what? One such workaround was physical RAM disk products. Then we got PAE, allowing up to 64GB of RAM, and more RAM available, so you could use those upper regions for RAM disks, including the page file. PAE had to be used in explicit 4GB segments, so it was basically worthless to 99.99999% of applications, unless you put a RAM disk in the >4GB RAM space.

4. Here's where it gets really interesting. Windows splits the virtual address space into OS and user slices of 2GB each, in IA32. Linux defaults to 1GB and 3GB**. What this means is that the OS can only control 2GB of memory, and an application can also only control 2GB of memory. So, if an application really needs more, you've got to jump through hoops. Now, your application, and the OS, both have some amount they need to reserve. Also, both will have to manage that memory, so free space to hold data shrinks as actual usage increases.

Now, up until the point where you actually have 1.5-2GB of RAM, it doesn't matter. Since the virtual address space is larger than the physical memory in the system, the OS can put anything anywhere, at any time. And man, 2GB seems like a lot of address space for the OS to not be using, doesn't it? Well, you'd be wrong to think it goes to waste. When your professional application opens its big image or model, that data is first managed by the OS. It opens the file, maps it, and then gives the application limited access to that mapped memory. Your web browser's cache is done the same way. So is every file your game accesses. I have game data folders bigger than 20GB, so that can be a lot of files opened, mapped, read, maybe edited, then closed and discarded.

So, as all that begins to approach the real available room inside the OS' space, it gets cramped, and it must swap this data out somewhere, like the page file. PAE with a RAM disk page file, or a physical RAM disk card, can help in those situations. Also, many programs with big data sets would have provisions to explicitly work on their own scratch space, since the OS was so cramped, which could also be benefited by RAM disks. Since the OS has limited room for itself to work with in normal RAM, you could hit limitations well before the machine's full limits. While rare in practice, in theory you could use only a few hundred MB in an application, have 4GB of RAM, and still need to swap***, due to the kernel running out of virtual address space within itself to work with! It actually happens on 32-bit Windows systems, too, just that most users figure it's something else, since the symptoms are similar to waking up a HDD, an AV program hogging IO, or an application blocking IO for a little bit, as it swaps out and/or defragments.

Oh, yeah, that's another little tidbit. RAM that gets fragmented needs to be remapped and compacted, so that allocations larger than the smallest page size don't get penalized. The details are very different, but it is not conceptually too different from common disk fragmentation. Normally, the OS does this a little at a time, as pages get freed. But, if the need for new allocations outpaces the OS' normal process, or if many mapped spaces stay accessed at random memory locations, which could prevent background collection from doing anything to them, things can slow down as it goes from doing it in the background to doing it in the foreground.

Well, in current x86_64 implementations, I believe the limits to OS-mappable memory are in the 2-3 digit terabytes, and the x86_64 spec allows for that space to be increased on newer hardware, so we're ultimately good up to around 2 petabytes, as it is defined today, which is hard to imagine using, right now.

As such, today, we'd just get more RAM and fast SSDs, and not worry about the minuscule performance boost a RAM disk might be able to offer, in the general case. The SSDs and/or RAID controller will be fast enough for storage, and the OS' file and IO caching will minimize the use of that storage for re-used data. With x86_64, the minor benefits are just not worth the major hassle and high cost, except for a few very rare situations.

* If you're paying attention, there is a contradiction, there. But, it's way beyond the scope of RAM v. HDD/SSD. Programs typically request much more memory than they will actually use, basing that amount on the maximum they might need, for various reasons. Swap that is never actually needed still counts for the system's total available memory, so the OS can guarantee big allocations. This is also the reason you need more RAM than you can actually make use of, if you wish to get rid of swap space, yet maintain system stability and reliability. The need to swap out to disk to have enough room to work with is the part that is not really needed, today.

** In retrospect, Windows' default was superior for desktop and mobile users, even though it was decided on long before that could be well predicted. Also, Windows has had the /3GB boot switch for ages, for applications that could use more space.

*** Maybe not to disk. I don't know, but it wouldn't surprise me if MS allowed system processes to be used as containers to swap out to user-space RAM. Even doing that would have a severe negative impact on performance, though, as each back-and-forth swap from userspace to OS to userspace would eat up anywhere from hundreds to tens of thousands of clock cycles, especially with multiple processors involved.
 
Last edited:

Smoblikat

Diamond Member
Nov 19, 2011
5,184
107
106
I technically have a solution. If you have several million dollars you could always buy a gig of SRAM.
 

tdg84

Banned
May 10, 2012
13
0
0
Thanks for all the answers. I guess it was a rather silly question, I mean who would really need to transfer data all that fast anyways? Speaking of which, does anyone actually use an SSD and transfer 500 MB/s anyways? I mean, how many programs running simultaneously would it take to have the SSD running at the transfer rates it hits in the benchmark tests we see?
 

exdeath

Lifer
Jan 29, 2004
13,679
10
81
I technically have a solution. If you have several million dollars you could always buy a gig of SRAM.

And a warehouse to store it. And your own power plant to keep it powered.

6T SRAM FTW.

Next best thing would be 1M1T STT-MRAM. SRAM speed, NAND Flash density and non volatility, DRAM/SRAM durability...

Then we can get rid of the concept of a "storage device". Your RAM and your data storage could be one and the same.

The holy grail of computer memory.
 
Last edited:

jwilliams4200

Senior member
Apr 10, 2009
532
0
0
There actually have been a few products over the years that made DRAM into non-volatile RAM drives, using a battery to power the RAM when the computer is off. For example, Gigabyte's i-RAM, and ACard's:

http://en.wikipedia.org/wiki/I-RAM
http://techreport.com/articles.x/16255/1

There may be others. But I am not aware of any recent models.

Today, I think the only thing that would make a non-volatile RAM-drive worthwhile (and then, only for people that need it) over an SSD would be if it has latency around 1 microsecond or less. But I don't think that can be achieved with SATA. So it would need to be a PCIe card. And then they would be competing with the likes of Fusion-IO, who already achieve very good latency with flash.

So I just don't think there is a market for a non-volatile RAM-drive any more.
 

Eeqmcsq

Senior member
Jan 6, 2009
407
1
0
Thanks for all the answers. I guess it was a rather silly question, I mean who would really need to transfer data all that fast anyways? Speaking of which, does anyone actually use an SSD and transfer 500 MB/s anyways? I mean, how many programs running simultaneously would it take to have the SSD running at the transfer rates it hits in the benchmark tests we see?

Nah, I don't think it was that silly of a question. It's the kind of thinking that leads to new and better ideas. It doesn't always work, but it doesn't hurt to ask.

To answer your questions: the average user will rarely hit the 500 MB/s sequential read/writes that current gen SSDs can reach, unless they do tons of large file copies. But there are cases where 500 MB/s is still not enough: servers, heavy-duty Photoshoppers who need scratch disks, and anyone who's working under stress and time pressure and needs their computer/laptop to be responsive quickly.

For example, at my work, I am given a laptop that still runs WinXP, with an HDD. When I'm under time pressure, I hate having to wait for my laptop to boot up to the login screen, then wait again as my laptop loads the desktop, then the startup apps. Not to mention some McAfee piece of crap background service that is simultaneously crunching away at the HDD, which I can't disable because I don't have Admin privileges. A current gen SSD would help. An SSD at the speed of RAM would be even more helpful.

Edit - And besides, sequential read/writes may have hit 500 MB/s now, but the 4K speeds aren't as fast, though they're way faster than HDDs.

Boy, would I love to boot my work laptop with 500 MB/s for the 4K read/writes.

[attached image: CrystalDiskMark results comparing OCZ Agility 2 60GB and Vertex 120GB SSDs, a WD640 HDD, and a Dataram RAMDisk, 4K-aligned]
 
Last edited:

kmmatney

Diamond Member
Jun 19, 2000
4,363
1
81
Well, I decided to give "FancyCache" a try (you can Google it), and it can use a given amount of RAM to speed up hard disks. It can also use an SSD alongside the RAM - the RAM acts as a level 1 cache, while the SSD acts as a level 2 cache. I allotted 24GB of my SSD to be used with it, but only had 1GB of RAM to spare.

It works sort of like Intel's SRT, and I've noticed games are loading a bit faster if I leave the computer running for a while. The cache is not persistent when you reboot, so it has to cache everything again when you restart. If you don't reboot much, you could assign a lot of RAM to it, and it will speed things up quite a bit.
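That L1 (RAM) / L2 (SSD) arrangement can be sketched as a toy model in Python. This is a hypothetical illustration of the general tiered-cache idea, not FancyCache's actual implementation; the class name and sizes are made up:

```python
from collections import OrderedDict

class TwoLevelCache:
    """Toy two-tier LRU read cache: a small fast L1 (think RAM)
    backed by a larger, slower L2 (think SSD). `backing` stands in
    for the HDD; sizes are in entries, not bytes."""
    def __init__(self, backing, l1_size=4, l2_size=16):
        self.backing = backing
        self.l1 = OrderedDict()  # small, fast tier (RAM)
        self.l2 = OrderedDict()  # bigger, slower tier (SSD)
        self.l1_size, self.l2_size = l1_size, l2_size

    def get(self, key):
        if key in self.l1:               # L1 hit: served from "RAM"
            self.l1.move_to_end(key)
            return self.l1[key]
        if key in self.l2:               # L2 hit: promote back to L1
            value = self.l2.pop(key)
        else:                            # miss: fetch from the "HDD"
            value = self.backing[key]
        self._put(key, value)
        return value

    def _put(self, key, value):
        self.l1[key] = value
        if len(self.l1) > self.l1_size:  # evict L1's LRU entry into L2
            old_key, old_val = self.l1.popitem(last=False)
            self.l2[old_key] = old_val
            if len(self.l2) > self.l2_size:
                self.l2.popitem(last=False)
```

Like the real thing, nothing here survives a restart: the cache only pays off once it has warmed up, which matches the "leave the computer running for a while" observation above.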
 

exdeath

Lifer
Jan 29, 2004
13,679
10
81
LOL, it wouldn't affect boot time at all. As it is with a single tier-1 3rd/4th gen SSD, the majority of your boot time is programmed delays and device and network initialization timeouts, not storage IO.

Better off with 64GB RAM, let Windows cache, and back it by a SSD RAID0 boot volume, and use sleep.
 

lakedude

Platinum Member
Mar 14, 2009
2,778
529
126
If you had enough RAM and used it like a hard drive wouldn't transfer rates be sky high?
Yes

And the RAM would need to still write to the hard drive to save space, but is this at all possible?
Yes, totally possible.

Am I wrong in my thinking, or is there something I am missing about how this all works?
You are not wrong.

I do exactly what you are talking about all the time. It is not expensive nor does it have anything to do with $ per GB or any such nonsense.

Windows is the problem. Windows is too big and too inefficient to load completely into a reasonable amount of RAM.

Puppy Linux on the other hand is less than 200MB and it is designed to run completely in RAM. You boot it up, it copies itself to RAM from your choice of an optical drive, usb stick, hard drive, or SSD (or even other stuff as well).

Say for example you boot from your CD (this is the slowest option BTW); in a couple of minutes the whole thing is in RAM, and you can remove the CD and the computer will still work, and it will be lightning fast because the whole OS is in RAM.

If you were to boot from an SSD it would load in a blink.

FatDog-64, a 64 bit version of Puppy Linux:

[attached screenshot of FatDog-64]
 

corkyg

Elite Member | Peripherals
Super Moderator
Mar 4, 2000
27,370
240
106
RAM stands for Random Access Memory. It is volatile. When power is removed, everything vanishes. Until that is changed, it is impractical. As long as the bootable data must be copied from something, it will boot only as quickly as that "something's" transfer rate.

So, maybe it might be practical with a different memory technology.
 

ALIVE

Golden Member
May 21, 2012
1,960
0
0
RAM stands for Random Access Memory. It is volatile. When power is removed, everything vanishes. Until that is changed, it is impractical. As long as the bootable data must be copied from something, it will boot only as quickly as that "something's" transfer rate.

So, maybe it might be practical with a different memory technology.

Well, I adore the volatile state of RAM.
There are uses where it is so good to have that kind of storage.
The temporary folder of Internet Explorer??? Browsing is faster, the HDD isn't getting hammered, and everything is gone at reboot.
It is impractical for storage, but it has its uses.
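That temp-folder trick generalizes to any scratch data. A short Python sketch of the same idea, shown on a Linux-style tmpfs mount (/dev/shm is the conventional path) since that needs no extra software; the IE case above is the same principle on Windows with a RAM disk:

```python
import os
import tempfile

# Assumption: /dev/shm is a RAM-backed tmpfs mount (standard on Linux).
# Scratch files written there live purely in RAM, never hammer the disk,
# and vanish on reboot, exactly like the browser-cache trick above.
ram_dir = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()

with tempfile.NamedTemporaryFile(dir=ram_dir, suffix=".tmp") as scratch:
    scratch.write(b"throwaway data")
    scratch.flush()
    print(scratch.name)  # lives under the RAM-backed mount when available
```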