Originally posted by: FishTankX
.....Yeah. 512K SDRAM modules. We're now at 2GB. And it's been *20* years. What does that tell you?
How big of a backup power unit would you need to sustain 30 watts of power to the hard drives for 5 seconds? That's a lot of power draw. Not very practical for a battery, unless it was a NiCd. Then you would need chargers onboard. Yuck.
Originally posted by: MonkeyDriveExpress
Idontcare, what IF that power outage happens, and you lose that document you were working on for five hours? That said, get a UPS that will run your system for ten minutes and you'll be fine.
An 80% reduction in access time in 10 years is really, really poor by computing standards. When comparing a Pentium 60 to a P4 3.2GHz, the difference is orders of magnitude greater than the difference between a 15k SCSI drive today and a HD from 10 years ago. We know why that is (mechanical vs. electronic), but the fact remains that hard drives, from a performance standpoint, haven't come close to keeping up with the pace of improvement the rest of the industry has seen.
(reduction of access time can only be achieved by increasing spindle speed)
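For scale, here's a rough back-of-the-envelope comparison of the two improvement curves being argued about; the clock speeds are from the post above, the access times are assumed illustrative figures:

```python
# Back-of-the-envelope: 10-year improvement factors for CPUs vs. drives.
# CPU clocks are from the post above; access times are assumed examples.

cpu_old_mhz, cpu_new_mhz = 60, 3200          # Pentium 60 vs. P4 3.2GHz
access_old_ms, access_new_ms = 16.0, 8.0     # a ~50% access-time reduction

cpu_speedup = cpu_new_mhz / cpu_old_mhz            # ~53x in clock alone
access_speedup_50 = access_old_ms / access_new_ms  # 2x for a 50% cut
access_speedup_80 = 1 / (1 - 0.80)                 # 5x even at an 80% cut

print(f"CPU clock:            {cpu_speedup:.0f}x")
print(f"Access time, 50% cut: {access_speedup_50:.1f}x")
print(f"Access time, 80% cut: {access_speedup_80:.1f}x")
```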
Originally posted by: zephyrprime
Originally posted by: MonkeyDriveExpress
An 80% reduction in access time in 10 years is really, really poor by computing standards.

But there hasn't been an 80% reduction. Look at shuttleteam's benches. There has only been a ~50% reduction in ten years.
I think they should make the voice coil out of lithium instead of copper. Lithium has the most conductivity pound for pound. That should improve seek speeds significantly.
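For what it's worth, the pound-for-pound part of that claim does check out on paper. A small sketch comparing specific conductivity (conductivity divided by density), using assumed handbook values:

```python
# Rough check of the "conductivity pound for pound" claim: specific
# conductivity (conductivity / density). Values are assumed handbook-ish.

materials = {
    # name: (conductivity in S/m, density in kg/m^3)
    "copper":   (5.96e7, 8960.0),
    "aluminum": (3.77e7, 2700.0),
    "lithium":  (1.08e7, 534.0),
}

for name, (sigma, rho) in materials.items():
    print(f"{name:9s} {sigma / rho:8.0f} S.m^2/kg")
# Lithium does win per kilogram (~3x copper), which is why a lighter coil
# could in principle accelerate faster - though lithium's reactivity and
# mechanical softness are the obvious practical obstacles.
```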
Originally posted by: Idontcare
Originally posted by: FishTankX
Originally posted by: Grminalac
Originally posted by: FishTankX
Originally posted by: Agamar
I think what will eventually happen is you will get SATA drives with ~64 to 128MB of RAM on them, caching the data they *think* you want next. There's already a laptop drive out that has 16MB, and my IBM Deskstar has 8MB on it.
Can't use RAM caches that large because if you ever lose power you've just lost 64MB of data. Or 128MB of data.
For laptops, such large caches are practical because the chances of a power outage are slim to none.
Well, I do see this occurring; manufacturers will just have to include backup power units to allow the memory ample time to write to the hard drive, or some other method. Imagine the speed increases provided by caches one gig or greater in size.
How big of a backup power unit would you need to sustain 30 watts of power to the hard drives for 5 seconds? That's a lot of power draw. Not very practical for a battery, unless it was a NiCd. Then you would need chargers onboard. Yuck.
The gain you would get with 1 gig of hard drive cache would be akin to the gains you would get with 1GB more of system RAM, probably less. You don't need that much cache, as the place where cache benefits is sequential reads/writes, which tend not to be massive anyway. Cache is a huge benefit when you've got a lot of small files. Large files benefit more from transfer rate.
This seems to be a popular path of thinking as I see it continually repeated. What I haven't seen is anyone come to their senses and realize the only data or files a user will lose in the event of a power outage are those files which are either:
(a) open and modified at time of power outage, or
(b) modified and closed in the 2 seconds prior to this mysterious power outage that everyone is paranoid about happening every 30 minutes or so.
The size of the hard drive cache will not impact files and data lost under option (a) above. If your hard drive has 0MB of cache, you will still lose all modifications to any files open at the instant of a power outage.
Option (b), a closed file whose modifications have not yet been written to the physical platter, is impacted by the size of the hard drive cache. Given that hard drive write speeds tend to be >30MB/s, the total quantity of data that would ever be at risk is quite small. Multiply the probability of these high-frequency power outages by the probability of having closed a file just prior to the outage and you arrive at the total probability of a lost file. And of course, you will always lose the data from files that are open anyway.
So the question is this: in light of option (a) above, is the risk of extra data loss due to option (b) really worth all this paranoia about high-cache-laden hard drives? I don't think so.
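To put numbers on the option (b) exposure: a rough sketch, assuming a hypothetical 128MB cache and the >30MB/s write speed cited above, of how long dirty data could sit in the drive cache at worst:

```python
# Worst-case exposure window for a write-back drive cache: how long a
# full cache takes to drain to the platters. Cache size is hypothetical;
# the write speed is the >30MB/s figure from the post above.

cache_mb = 128
write_speed_mb_s = 30

drain_s = cache_mb / write_speed_mb_s
print(f"A full {cache_mb}MB cache drains in ~{drain_s:.1f}s at {write_speed_mb_s}MB/s")
# ~4.3s: the outage must land inside that window, with dirty data still
# in the cache, for an already-closed file to be affected at all.
```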
Oh, and to answer the question regarding power draw of a hard drive for 5s...Battery Data...an AA provides about 3 W·h of energy. Although not the correct voltage, the point is that it would be rather easy to supply enough power to your hard drive to have it ride out a 5s power outage.
And no need to recharge, that is, unless you are experiencing power outages every 30 minutes: just replace the battery after you experience a series of frightening losses of power and you are tired of losing all the files that were open (not closed like the ones in your hard drive cache; they're safe now, thanks to the battery) because you didn't buy a UPS to keep the rest of your computer up and running during power loss.
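The battery arithmetic itself is easy to check. A minimal sketch, using the 30W/5s figures and the ~3 W·h AA capacity quoted above:

```python
# Energy needed to ride out a 5-second outage at 30W, versus one AA cell.

power_w = 30.0
hold_up_s = 5.0
aa_capacity_wh = 3.0                      # ~3 W.h per AA, as cited above

energy_needed_j = power_w * hold_up_s     # 150 J
aa_energy_j = aa_capacity_wh * 3600.0     # 10,800 J

print(f"Needed: {energy_needed_j:.0f} J; one AA holds ~{aa_energy_j:.0f} J")
print(f"Roughly {aa_energy_j / energy_needed_j:.0f} ride-throughs per cell")
# Ignoring voltage conversion losses, raw capacity is clearly not the issue.
```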
The reason we don't have 128MB caches on our hard drives is simple: it has nothing to do with FUD and everything to do with market economics. The day will come when hard drive manufacturers will have no choice but to resort to placing 128MB of cache on their products to compete in the market; until then there is no need to release such a product and incur reduced profit margins relative to their competitors.
Having said that, don't stop spreading the FUD, FishTankX; it's humorous to read at times, and we all need a little humor, don't we?
Originally posted by: Idontcare
Originally posted by: FishTankX
.....Yeah. 512K SDRAM modules. We're now at 2GB. And it's been *20* years. What does that tell you? By the time we're up to 8GB SDRAMs (finally getting to the hard drives of *6* years past) we'll be lucky to see 512MB MRAMs.
Do you actually work for Moto or have any clue about the market versus technology reasons MRAM is being produced in the current product mixes?
Originally posted by: FishTankX
That's technically not correct.
It depends on how often the I/O cache is flushed. Sometimes the OS just leaves something sitting in the I/O cache waiting to be flushed, and while this can be tweaked, the fact is that a random loss of power while a large chunk of data is sitting in the file cache can result in data loss.
Such examples are probably rare in the user world, but the area where such caches are needed most, the server market, is sensitive to such issues.
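A minimal illustration of that flush behavior, in Python (the filename is hypothetical): a write can sit in the OS cache until something forces it down toward the device:

```python
import os

# A write() lands in the OS page cache, not on the platter; os.fsync()
# asks the OS to push it to the device. The drive's own cache is a
# further layer that fsync may or may not penetrate, which is exactly
# the server-market concern described above.

with open("journal.log", "w") as f:      # hypothetical file
    f.write("committed record\n")        # sits in Python's buffer
    f.flush()                            # hands it to the OS cache only
    os.fsync(f.fileno())                 # forces it toward the device
```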
And in retaliation to your FUD remarks, there's another reason why a gig of cache simply wouldn't be beneficial.
I'd like to see you rebut that one. I find your tone not to my liking.
Originally posted by: FishTankX
I merely quote what I read. And the largest MRAM chip so far is made on the 0.18 micron process and is 512K large.
In 2002 they had a 1Mb (128K) chip built on a 0.60 micron process.
Now, did *you* have any idea of the state of affairs in the MRAM world?
Originally posted by: AndyHui
Heh. If you think hard drives haven't come far, I'd suggest you try going from a WD 10K Raptor to a Quantum Bigfoot 2.1GB.
Then we will all be able to hear your screaming from here.
As for the MRAM comments, I am not talking about bringing things 'to market'. I'm talking about *TECHNOLOGICAL LIMITATIONS*. Currently companies are *STRUGGLING* to get RAM, on their best processes, to break the 2 gigabit barrier at decent speeds. This is even for the server market, which has a nearly infinite appetite for memory, especially in supercomputers. (This is an area where market limitations simply *do not apply* and the best technology has to offer is always considered.) Why? Lithography limitations. Electrical limitations.
Originally posted by: Jeff7181
Originally posted by: FishTankX
This, while independent from spindle speed, together with spindle speed determines the size of a hard drive, per platter.

Did anyone else find this statement troubling? You're saying spindle speed has something to do with the size of a hard drive??????
Even if we migrate to PCI Express or 66MHz/64-bit PCI, the bottom line is the RAM in the drive is horribly bandwidth limited. We can readily saturate a PCI bus with a simple RAID-0 array; why spend the money for solid state?
Originally posted by: FishTankX
Originally posted by: Jeff7181
Originally posted by: FishTankX
This, while independent from spindle speed, together with spindle speed determines the size of a hard drive, per platter.

Did anyone else find this statement troubling? You're saying spindle speed has something to do with the size of a hard drive??????
Maybe that statement wasn't too clear.
What I meant is that the higher the spindle speed is, the lower the possible areal density is (due to the fact that a higher areal density beyond a certain max would make the hard drive less than 100% accurate and consequently make it useless), and consequently, the lower the size of the platter. Areal density itself is the main determiner of the size of the platter, in GBs. But areal density has to decrease for spindle speed to increase.
Originally posted by: Jeff7181
Originally posted by: FishTankX
Originally posted by: Jeff7181
Originally posted by: FishTankX
This, while independent from spindle speed, together with spindle speed determines the size of a hard drive, per platter.

Did anyone else find this statement troubling? You're saying spindle speed has something to do with the size of a hard drive??????
Maybe that statement wasn't too clear.
What I meant is that the higher the spindle speed is, the lower the possible areal density is (due to the fact that a higher areal density beyond a certain max would make the hard drive less than 100% accurate and consequently make it useless), and consequently, the lower the size of the platter. Areal density itself is the main determiner of the size of the platter, in GBs. But areal density has to decrease for spindle speed to increase.
That's assuming technology never improves... with better accuracy, there would be no need to decrease areal density in order to increase spindle speed.
Originally posted by: Pariah
Sorry, sport, there is simply nothing you can produce that shows there is some great demand for 4GB+ DIMMs. Sure, there may be isolated customers looking for such hardware, but again, if the market was large enough they likely would already be available. If anything, the market for larger DIMMs has stagnated recently as memory prices have dropped to the point where most people have more than they really need. Gone are the days when a memory upgrade pretty much guaranteed much improved system performance. Few customers need 2GB total, let alone over 8GB. If you do, there are solutions out there that support 16+GB.
Even if we migrate to PCI Express or 66MHz/64-bit PCI, the bottom line is the RAM in the drive is horribly bandwidth limited. We can readily saturate a PCI bus with a simple RAID-0 array; why spend the money for solid state?
The benefit of solid state is in microsecond access time, not raw throughput. 133MHz PCI-X has a theoretical 1GB/s throughput, which is ridiculously past what anyone would need for all but the most exotic of situations.
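A quick worked example of why access time, not bus throughput, dominates small random reads; the latency figures here are assumptions for illustration:

```python
# Time to service a 4KB random read: bus transfer vs. access time.
# The bus figure is PCI-X from the post above; latencies are assumptions.

read_kb = 4
bus_mb_s = 1000.0                                # 133MHz PCI-X, theoretical

transfer_ms = read_kb / 1024 / bus_mb_s * 1000   # ~0.004 ms on the bus
hdd_access_ms = 8.0                              # fast mechanical drive
ssd_access_ms = 0.05                             # solid state, ~50 microseconds

print(f"Bus transfer: {transfer_ms:.3f} ms")
print(f"Mechanical:   {hdd_access_ms + transfer_ms:.3f} ms")
print(f"Solid state:  {ssd_access_ms + transfer_ms:.3f} ms")
# The bus sits idle almost the entire time; the seek dominates completely.
```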
Areal density itself is the main determiner of the size of the platter, in GBs. But areal density has to decrease for spindle speed to increase.
What you are describing is platter capacity, not size, and no, spindle speed does not dictate areal data density. In theory it should; in practice, it doesn't. The reason higher spindle speed drives have lower capacity platters is that the size (physical dimensions) of the platters is smaller.
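That physical trade-off is easy to see with the edge-velocity formula v = π·d·RPM/60. A small sketch with approximate platter diameters:

```python
import math

# Outer-edge linear velocity v = pi * d * rpm / 60, approximate diameters.

def edge_velocity_m_s(diameter_in: float, rpm: float) -> float:
    return math.pi * (diameter_in * 0.0254) * rpm / 60.0

print(f'3.5" platter @  7,200 RPM: {edge_velocity_m_s(3.5, 7200):.0f} m/s')
print(f'2.6" platter @ 15,000 RPM: {edge_velocity_m_s(2.6, 15000):.0f} m/s')
# Shrinking the platter keeps head-to-media speed (and the flying-height
# tolerances that come with it) manageable as spindle speed climbs.
```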
