The pragmatic answers have been given. So, here's an attempt at why. Believe it or not, it's pretty simple today, thanks largely to AMD and Microsoft. You can easily Google up physical RAM disk products, and uses for in-memory RAM disks, so instead here's why it wasn't always so simple, and why they had more use in yesteryear, in an apology-like format.
Disclaimer: I'm bored, and waiting for a place where I need to make an appointment ASAP to open for business.

Today, several things make modern computer usage easy enough that RAM disks are not worth the time or hassle, except for special uses:
0. RAM disks are volatile.
1. SSDs are cheap enough.
2. Reasonably fast read-caching RAID cards are cheap enough (thank you LSI and Dell!).
3. RAM is cheap, relative to anyone's ability to use it.
4. All memory is OS-mappable memory, again. Yippee!
--
0. More an assumption of use than a reason for their demise: RAM data goes bye-bye when it stops being refreshed, i.e. when the power goes away. This means you either need to save it off somewhere, eating up shutdown/boot-up time, or have it battery-backed, and not leave the computer turned off for too long.
1. Anyone with a good SSD is at a point of diminishing returns. Your CPU, and other devices in the computer, are slowing down your boot time more than the SSD is. While some folks have managed great optimized boot times, you'll note that many SSD users see very little benefit. It's not because the SSD is slow, but because of other devices on the system, and services being initialized at boot time. SSDs have marginalized the need for RAM disks, and if one SSD isn't enough, RAID 0 across several can speed things up further.
2. If RAID 0 or a single disk won't cut it, and you have the kind of need and budget that in prior years would have pushed you to a specialized device, or a PAE-range RAM disk, you can buy an LSI or Areca RAID controller, and get real benefits from that. LSI's cards (the Dell-branded ones being common and affordable) are good enough and cheap enough that almost anyone with a real workstation budget can afford good RAID, if they need it.
All those GB/s don't buy you that much, if you're waiting more on other system bottlenecks (including the human nervous system).
3. OSes cache files in RAM. Windows, up through XP, still had file cache subsystems from the NT days, when 32MB was huge, so it was kind of aggressive about getting rid of caches. Vista had that revamped, so Windows now works like the *n*xes, keeping files around as long as reasonably possible. Accesses to these cached files only incur occasional access-time updates, and barely touch any disk, except when modified a lot. This, combined with mainstream support for command queuing, previously a workstation/server-only storage feature, has allowed lots of cheap RAM to make up for much faster storage, like RAM disks, in the majority of cases. On my desktop, for instance, I commonly see 3-5GB of RAM being used just for file caching, and most of the time I'm working in that cache. That's one of the reasons I don't see much value in an SSD on my desktop. Don't let anyone tell you that you only need 4GB of RAM. Those people haven't had the experience of using more for long periods of time. It's awesome, with a suitable OS (Windows Vista or newer, or any Linux or FreeBSD from the last 8-10 years).
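If you want to see that cache doing its job, here's a minimal C sketch (POSIX timing calls, so Linux/BSD/macOS as written; the file name is just a placeholder). It reads the same file twice and prints how long each pass took; the second pass normally comes back far faster, because it's being served from RAM rather than the disk.

    /* Minimal sketch: time two passes over the same file.
       The file name is hypothetical; point it at any big file. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static double read_all(const char *path)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        FILE *f = fopen(path, "rb");
        if (!f) { perror("fopen"); exit(1); }

        char buf[1 << 16];
        while (fread(buf, 1, sizeof buf, f) == sizeof buf)
            ;   /* just drag the whole file through memory */
        fclose(f);

        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
        const char *path = "big_test_file.bin";   /* placeholder name */
        printf("first read (maybe from disk):  %.3f s\n", read_all(path));
        printf("second read (from file cache): %.3f s\n", read_all(path));
        return 0;
    }

On Linux, you can write to /proc/sys/vm/drop_caches first if you want a genuinely cold first pass.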
A RAM disk in an older Windows version was quite often used as a hack around that so-so file caching. Also, swap isn't so necessary today, with tons of RAM, so it can lazily sit there as backing, 99% of the time*. Way back when, you would face situations where you couldn't buy enough system RAM at any price, and had to find ways around that. Once your mobo was maxed out, then what? One such way was physical RAM disk products. Then we got PAE, allowing up to 64GB of RAM, and more RAM became available, so you could use the region above 4GB for RAM disks, including putting the page file there. PAE memory had to be used through explicit windows within the normal 4GB address space, so it was basically worthless to 99.99999% of applications, unless you put a RAM disk in the >4GB RAM space.
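To give a feel for how clunky that explicit use was, here's a rough sketch of AWE (Address Windowing Extensions), the Windows mechanism a 32-bit application could use to grab physical pages directly and view them through a window of ordinary address space. It's only an illustration, not production code; it assumes the account already has the "Lock pages in memory" privilege, and skips most error handling.

    /* Rough sketch of the AWE dance on 32-bit Windows: grab physical pages,
       then map them into a window of normal address space as needed. */
    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);

        /* Ask the OS for 256MB worth of physical pages (by frame number). */
        ULONG_PTR pageCount = (256ul * 1024 * 1024) / si.dwPageSize;
        ULONG_PTR *frames = malloc(pageCount * sizeof *frames);

        if (!AllocateUserPhysicalPages(GetCurrentProcess(), &pageCount, frames)) {
            printf("AllocateUserPhysicalPages failed: %lu\n", GetLastError());
            return 1;
        }

        /* Reserve a window of virtual address space to view them through. */
        void *window = VirtualAlloc(NULL, pageCount * si.dwPageSize,
                                    MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);
        if (!window) { printf("VirtualAlloc failed: %lu\n", GetLastError()); return 1; }

        /* Map the physical pages into the window. With more physical pages
           than address space, you unmap and remap to slide the window around,
           which is exactly the hassle being complained about above. */
        if (!MapUserPhysicalPages(window, pageCount, frames)) {
            printf("MapUserPhysicalPages failed: %lu\n", GetLastError());
            return 1;
        }

        memset(window, 0xAB, pageCount * si.dwPageSize);  /* use it like RAM */

        MapUserPhysicalPages(window, pageCount, NULL);    /* unmap the view */
        FreeUserPhysicalPages(GetCurrentProcess(), &pageCount, frames);
        return 0;
    }

Almost no ordinary software ever bothered with this, which is the point.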
4. Here's where it gets really interesting. Windows splits OS and user space into virtual address slices of 2GB and 2GB, in IA32. Linux defaults to 1GB and 3GB**. What this means is that the OS only gets 2GB of address space to work within, and an application also only gets 2GB. So, if an application really needs more, you've got to jump through hoops. Now, your application, and the OS, both have some amount they need to reserve. Also, both have to manage that memory, so the free space available to hold data shrinks as actual usage increases.
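If you want to watch that ceiling for yourself, here's a tiny Windows-flavored sketch: it reserves address space (without committing any RAM or page file) in 64MB chunks until the OS says no. Built as a 32-bit program it gives out a little short of 2GB (or a bit under 3GB with /3GB and a large-address-aware EXE); built as 64-bit, the loop grinds on for a very long time before the far larger address space runs out.

    /* Tiny sketch: reserve (not commit) user address space in 64MB chunks
       until the OS refuses. Compile 32-bit to see the ~2GB wall. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        const SIZE_T chunk = 64 * 1024 * 1024;
        SIZE_T total = 0;

        /* MEM_RESERVE takes address space only; no RAM or page file is used,
           so this is purely a test of how much virtual room we get. */
        while (VirtualAlloc(NULL, chunk, MEM_RESERVE, PAGE_NOACCESS) != NULL)
            total += chunk;

        printf("Reserved about %lu MB of address space before running out\n",
               (unsigned long)(total / (1024 * 1024)));
        return 0;
    }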
Now, up until the point where you actually have 1.5-2GB of RAM, it doesn't matter. Since the virtual address space is larger than the physical memory in the system, the OS can put anything anywhere, at any time. And man, that seems like plenty of room for applications, doesn't it? Well, it isn't. When your professional application opens its big image or model, that data is first managed by the OS: it opens the file, maps it, and then gives the application limited access to that mapped memory. Your web browser's cache is handled the same way. So is every file your game accesses. I have game data folders bigger than 20GB, so that can be a lot of files opened, mapped, read, maybe edited, then closed and discarded.
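Here's roughly what that open-it, map-it, hand-over-a-pointer flow looks like, in the *nix spelling (mmap); Windows does the same dance with CreateFileMapping/MapViewOfFile. The file name is made up; the point is that every mapped file eats a slice of the process' limited virtual address space (plus bookkeeping in the kernel's), whether or not much of it is ever read.

    /* Sketch of a memory-mapped file read; hypothetical file name. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "textures.pak";      /* made-up game data file */
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);

        /* The kernel maps the file's pages into our address space. Pages are
           faulted in from disk (or, more likely, the file cache) on first
           touch, and the whole mapping counts against our address space. */
        const unsigned char *data =
            mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (data == MAP_FAILED) { perror("mmap"); return 1; }

        /* Touch a byte per page; note there are no explicit read() calls. */
        unsigned long sum = 0;
        for (off_t i = 0; i < st.st_size; i += 4096)
            sum += data[i];
        printf("checksum-ish value: %lu\n", sum);

        munmap((void *)data, st.st_size);
        close(fd);
        return 0;
    }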
So, as all of that begins to approach the real available room inside the OS' space, things get cramped, and it has to swap data out somewhere, like the page file. PAE with a RAM disk page file, or a physical RAM disk card, could help in those situations. Also, many programs with big data sets had provisions to explicitly work in their own scratch space, since the OS was so cramped, which could also benefit from a RAM disk. Since the OS has limited room of its own to work within, you could hit limitations well before the machine's full limits. While rare in practice, in theory you could use only a few hundred MB in an application, have 4GB of RAM, and still need to swap***, due to the kernel running out of virtual address space within itself to work with! It does actually happen on 32-bit Windows systems, it's just that most users figure it's something else, since the symptoms are similar to waking up an HDD, an AV program hogging IO, or an application blocking IO for a little bit as it swaps out and/or defragments.
Oh yeah, that's another little tidbit. RAM that gets fragmented needs to be remapped and compacted, so that allocations larger than the smallest page size don't get penalized. The details are very different, but it's not conceptually too far from common disk fragmentation. Normally, the OS does this a little at a time, as pages get freed. But if the need for new allocations outpaces the OS' normal housekeeping, or if many mapped regions keep being accessed at scattered memory locations, which can keep the background work from touching them, things slow down as the OS goes from doing it in the background to doing it in the foreground.
Well, in current x86_64 implementations, I believe the limits to OS-mappable memory are in the 2-3 digit terabytes (48-bit virtual addressing works out to 256TB), and the x86_64 spec allows that to grow on newer hardware, up to a 52-bit physical address, so we're ultimately good for around 4 petabytes, as it is defined today, which is hard to imagine using, right now.
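Just to sanity-check that arithmetic (48 and 52 being the virtual and physical address widths mentioned above):

    /* 2^48 and 2^52, just to check the math. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long virt = 1ULL << 48;   /* bytes reachable with 48-bit virtual addresses */
        unsigned long long phys = 1ULL << 52;   /* bytes allowed by the 52-bit physical limit */

        printf("48-bit virtual:  %llu TB\n", virt >> 40);   /* 256 TB */
        printf("52-bit physical: %llu PB\n", phys >> 50);   /* 4 PB  */
        return 0;
    }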
As such, today, we'd just get more RAM and fast SSDs, and not worry about the minuscule performance boost a RAM disk might be able to offer, in the general case. The SSDs and/or RAID controller will be fast enough for storage, and the OS' file and IO caching will minimize the use of that storage for re-used data.
With x86_64, the minor benefits are just not worth the major hassle and high cost, except for a few very rare situations.
* If you're paying attention, there is a contradiction there. But it's way beyond the scope of RAM vs. HDD/SSD. Programs typically request much more memory than they will actually use, basing that amount on the maximum they might need, for various reasons. Swap that is never actually needed still counts toward the system's total available memory, so the OS can guarantee big allocations (there's a little sketch of this after these footnotes). This is also the reason you need more RAM than you can actually make use of, if you wish to get rid of swap space yet maintain system stability and reliability. The need to swap out to disk to have enough room to work with is the part that is not really needed, today.
** In retrospect, Windows' default was superior for desktop and mobile users, even though it was decided on long before that could be well predicted. Also, Windows has had the /3GB boot switch for ages, for applications that could use more space.
*** Maybe not to disk. I don't know, but it wouldn't surprise me if MS allowed system processes to be used as containers to swap out to user-space RAM. Even doing that would have a severe negative impact on performance, though, as each back-and-forth swap from userspace to OS to userspace would eat up anywhere from hundreds to tens of thousands of clock cycles, especially with multiple processors involved.
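Back to that first footnote: you can watch the gap between "asked for" and "actually used" with a little Windows-flavored sketch like this (the 1GB figure is arbitrary; GlobalMemoryStatusEx just reports the system-wide numbers). Committing a block shrinks the system's commit headroom immediately, because the OS has promised it can back it with RAM or page file, but free physical memory barely moves until the pages are actually touched.

    /* Sketch: commit 1GB, then actually touch it, and watch what moves. */
    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    static void report(const char *label)
    {
        MEMORYSTATUSEX m;
        m.dwLength = sizeof m;
        GlobalMemoryStatusEx(&m);
        printf("%-11s commit headroom: %6llu MB   free physical: %6llu MB\n",
               label,
               (unsigned long long)(m.ullAvailPageFile / (1024 * 1024)),
               (unsigned long long)(m.ullAvailPhys / (1024 * 1024)));
    }

    int main(void)
    {
        const SIZE_T size = 1024ul * 1024 * 1024;   /* 1 GB, just an example */

        report("start:");

        /* Committing promises backing (RAM or page file) and shrinks the
           commit headroom right away, but no physical pages exist yet. */
        char *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        if (!p) { printf("VirtualAlloc failed\n"); return 1; }
        report("committed:");

        /* Touching the pages is what finally consumes physical RAM. */
        memset(p, 1, size);
        report("touched:");

        VirtualFree(p, 0, MEM_RELEASE);
        return 0;
    }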