Seriously, there is no viable alternative to SATA or SCSI?


MDE

Lifer
Jul 17, 2003
13,199
1
81
Idontcare, what IF that power outage happens, and you lose that document you were working on for five hours? That said, get a UPS that will run your system for ten minutes and you'll be fine.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
Originally posted by: FishTankX
.....Yeah. 512K SDRAM modules. We're now at 2GB. And it's been *20* years. What does that tell you?

Quite a few things; most importantly, no demand and no software/hardware support. If there were a large enough market to make financial sense, larger-capacity products would already exist.

Mass storage on a solid state drive doesn't make much sense anyway. What's the purpose of microsecond access times for 200GB of MP3s and DivX files? There isn't a need for anything larger than around 20GB of high-speed storage for almost any user, and SSDs of that size already exist, with a "modest" price increase over traditional platter storage if you're Bill Gates. Tack on another 2GB of RAM and you'll be set for years to come.

How big of a backup power unit would you need to sustain 30 watts of power to the hard drives for 5 seconds? That's a lot of power draw. Not very practical for a battery, unless it was a NiCd. Then you would need chargers onboard. Yuck.

SSDs draw around 5W of power, not 30. Even today's 15k drives only draw 10-15W during activity. Nonvolatile solid state drives exist as well.
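
And even taking the quoted 30W figure at face value, the energy needed to ride out 5 seconds is tiny. A back-of-the-envelope sketch in Python (the ~3Wh-per-AA-cell figure is a rough assumption, not a spec):

power_draw_w = 30.0    # claimed worst-case drive draw (watts)
outage_s = 5.0         # outage length to bridge (seconds)
aa_capacity_wh = 3.0   # rough energy stored in one AA cell (watt-hours)

energy_needed_wh = power_draw_w * outage_s / 3600.0  # watt-seconds -> watt-hours
print("Energy needed: %.3f Wh" % energy_needed_wh)   # ~0.042 Wh
print("Fraction of one AA cell: %.1f%%" % (100 * energy_needed_wh / aa_capacity_wh))  # ~1.4%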
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: MonkeyDriveExpress
Idontcare, what IF that power outage happens, and you lose that document you were working on for five hours? That said, get a UPS that will run your system for ten minutes and you'll be fine.

MonkeyDriveExpress, that's actually my point. I'm guessing I wasn't very clear in my post.
 

sharkeeper

Lifer
Jan 13, 2001
10,886
2
0
The thread title suggests this is an interface debate, which it clearly is not.

Anyway, let's go back ten years. I have a benchmark of a (then) top-of-the-line Maxtor 7345AT hard disk. It's an IDE disk spinning at 4500 RPM with a capacity of 345 MEGAbytes. :)

ATTO

HD Tach

This drive was tested on an up-to-date system, so the CPU usage is very sick! No DMA, folks; it didn't exist then.

Of course we know what the latest SCSI disks and Raptors are capable of.

Still think hard drives haven't advanced much?

Cheers!
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
An 80% reduction in access time in 10 years is really, really poor by computing standards. Comparing a Pentium 60 to a P4 3.2GHz, the difference is orders of magnitude greater than the difference between a 15k SCSI drive today and a hard drive from 10 years ago. We know why that is (mechanical vs. electronic), but the fact remains that hard drives, from a performance standpoint, haven't come close to keeping up with the pace of improvement the rest of the industry has seen.
 

sharkeeper

Lifer
Jan 13, 2001
10,886
2
0
An 80% reduction in access time in 10 years is really, really poor by computing standards. Comparing a Pentium 60 to a P4 3.2GHz, the difference is orders of magnitude greater than the difference between a 15k SCSI drive today and a hard drive from 10 years ago. We know why that is (mechanical vs. electronic), but the fact remains that hard drives, from a performance standpoint, haven't come close to keeping up with the pace of improvement the rest of the industry has seen.

It's important to remember that mechanical specification alone (reduction of access time can only be achieved by increasing spindle speed) does not scale proportionately to actual performance. It's possible to get access times identical to this dinosaur's out of current disks by using acoustic management utilities, and the responsiveness of the drive is still quite good (good enough that most desktop users would never know unless someone pointed it out!).

The access time differences between the first-generation 15k Cheetah and the current one are (fairly) insignificant, yet the newest drives perform remarkably better. Firmware algorithms, cache lines, and firmware processing power have advanced considerably over the years.

If you look at current recording technology, including R/W head technology and media, significant advances have been made.

Has it "kept up" with CPU and memory speed advances? It's really too hard to tell since no metric to make such a comparison exists.

Perhaps I don't see it as much, since the storage solutions I work with on a day-to-day basis have performance that blurs the definitive line between solid state and mechanical drives...

Cheers!
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
An 80% reduction in access time in 10 years is really, really poor by computing standards
But there hasn't been an 80% reduction. Look at shuttleteam's benches. There has only been a ~50% reduction in ten years.

I think they should make the voice coil out of lithium instead of copper. Lithium has the most conductivity pound for pound. That should improve seek speeds significantly.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
(reduction of access time can only be achieved by increasing spindle speed)

It would be more correct to say the only way to reduce average latency is to increase spindle speed. Faster head movement, command queuing and other head movement algorithms, and decreased platter diameter can also reduce average access time. I wasn't using access time as representative of HD performance in general, just as an example of one aspect of HD performance that has not kept up with the times. HD manufacturers have developed other "tricks" to try to mask these shortcomings (caching, higher data density, head seeking algorithms, etc.), but there is only so much they can do. Again, compare the overall performance of today's CPUs and 10-year-old CPUs. Even more drastic, compare the video cards of today to ones from 10 years ago. Then compare hard drives; the difference isn't nearly as great.

Originally posted by: zephyrprime
An 80% reduction in access time in 10 years is really, really poor by computing standards
But there hasn't been an 80% reduction. Look at shuttleteam's benches. There has only been a ~50% reduction in ten years.

I think they should make the voice coil out of lithium instead of copper. Lithium has the most conductivity pound for pound. That should improve seek speeds significantly.

A drop from 25.2ms to 5.6ms is an 80% (rounded, 77.8%) reduction.
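
The arithmetic, if you want to check it yourself (a one-liner sketch in Python):

old_ms, new_ms = 25.2, 5.6  # access times from the benches above
print("%.1f%% reduction" % (100 * (old_ms - new_ms) / old_ms))  # 77.8%, i.e. roughly 80%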
 

FishTankX

Platinum Member
Oct 6, 2001
2,738
0
0
Originally posted by: Idontcare
Originally posted by: FishTankX
Originally posted by: Grminalac
Originally posted by: FishTankX
Originally posted by: Agamar
I think what will eventually happen is you will get SATA drives with ~64 to 128M of ram on them caching the data it *thinks* you want next. Already have a laptop drive out that has 16M, and my IBM Deskstar has 8 on it.


Can't use RAM caches that large, because if you ever lose power you've just lost 64MB of data. Or 128MB of data.

For laptops, such large caches are practical because the chances of a power outage are slim to none.


Well, I do see this occurring; manufacturers will just have to include backup power units to allow the memory ample time to write to the hard drive, or some other method. Imagine the speed increases provided by caches one gig or greater in size.

How big of a backup power unit would you need to sustain 30 watts of power to the hard drives for 5 seconds? That's a lot of power draw. Not very practical for a battery, unless it was a NiCd. Then you would need chargers onboard. Yuck.

The gain you would get with 1 gig of hard drive cache would be akin to the gain you would get with 1GB more of system RAM, probably less. You don't need that much cache, as the place where cache benefits is sequential reads/writes, which tend not to be massive anyway. Cache is a huge benefit when you've got a lot of small files. Large files benefit more from transfer rate.

This seems to be a popular line of thinking, as I see it continually repeated. What I haven't seen is anyone come to their senses and realize that the only data or files a user will lose in the event of a power outage are those files which are either:

(a) open and modified at the time of the power outage, or
(b) modified and closed in the 2 seconds prior to this mysterious power outage that everyone is paranoid about happening every 30 minutes or so.

The size of the hard drive cache will not impact files and data lost under option (a) above. If your hard drive has 0MB of cache, you will still lose all modifications to any files that are open at the instant of a power outage.

Option (b), a closed file whose modifications have not yet been written to the physical platter, is impacted by the size of the hard drive cache. Given that hard drive write speeds tend to be >30MB/s, the total quantity of data that would ever be at risk is quite small (even a full 8MB cache flushes in well under a second at that rate). Multiply the probability of these high-frequency power outages by the probability of having closed a file just prior to the outage and you arrive at the total probability of a lost file. Of course, you will always lose the data and files from those which are open anyway.

So the question is this: in light of option (a) above, is the risk of extra data loss due to option (b) really worth all this paranoia about high-cache hard drives? I don't think so.


Oh, and to answer the question regarding the power draw of a hard drive for 5s... Battery Data... an AA provides about 3 W-hr of energy. Although not the correct voltage, the point is that it would be rather easy to supply enough power to your hard drive to have it ride out a 5s power outage.

And no need to recharge; that is, unless you are experiencing power outages every 30 minutes, just replace the battery after you experience a series of frightening power losses and are tired of losing all the files that were open (not closed like the ones in your hard drive cache; they're safe now thanks to the battery) because you didn't buy a UPS to keep the rest of your computer running during the power loss.

The reason we don't have 128MB caches on our hard drives is simple: it has nothing to do with FUD and everything to do with market economics. The day will come when hard drive manufacturers will have no choice but to resort to placing 128MB of cache on their products to compete in the market; until then there is no need to release such a product and incur reduced profit margins relative to their competitors.

Having said that, don't stop spreading the FUD, FishTankX; it's humorous to read at times, and we all need a little humor, don't we?

That's technically not correct...

It depends on how often the I/O cache is flushed. Sometimes the OS just leaves something sitting in the I/O cache waiting to be flushed, and while this can be tweaked, the fact is that any random loss of power while a large chunk of data is sitting in the file cache can result in data loss.

Good examples: file rendering programs, Kazaa, eDonkey, etc.

Such examples are probably rare in the user world, but the area where such caches are needed most, the server market, is sensitive to such issues.
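
To make the flush point concrete: data you write normally sits in the OS cache until something flushes it, and an application has to force the issue if it wants durability. A minimal Python sketch (assuming a POSIX-style OS; the filename is made up, and note fsync still can't reach past the drive's own write cache):

import os

# Write a file and push it out of the OS cache toward the platter.
# Without flush()+fsync(), the bytes can sit in volatile cache and
# vanish if power drops before the OS gets around to writing them.
with open("download.part", "wb") as f:
    f.write(b"a chunk of that half-finished download")
    f.flush()             # move the application buffer into the OS cache
    os.fsync(f.fileno())  # ask the OS to commit the cache to the disk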

And as retaliation to your FUD remarks, there's another reason why a gig of cache simply wouldn't be beneficial.

You would have to completely rewrite the file caching algorithms.

Trying to tack a gig of cache onto a hard drive is akin to tacking a 2.4GHz FSB onto an Athlon XP. Sure, it might increase performance, but the performance gain is completely out of proportion to the cost involved, because the Athlon XP simply doesn't need it. Neither do the majority of hard drives. The goal of cache is to keep small, frequently used files in a store that can be quickly fed to the computer, or to let the hard drive, when rounding up a highly fragmented file, first dump the fragmented parts to cache instead of having to send them off to the system every time. Writing to a local cache is probably a lot faster and a lot less CPU-intensive than writing to system RAM every time.

I'd like to see you rebut that one. I find your tone not to my liking.
 

FishTankX

Platinum Member
Oct 6, 2001
2,738
0
0
Originally posted by: zephyrprime
An 80% reduction in access time in 10 years is really, really poor by computing standards
But there hasn't been an 80% reduction. Look at shuttleteam's benches. There has only been a ~50% reduction in ten years.

I think they should make the voice coil out of lithium instead of copper. Lithium has the most conductivity pound for pound. That should improve seek speeds significantly.

I think right now the problem is the structural integrity of the discs. A disc can only spin so fast before it flies apart. But I may be wrong.

Another problem would be the space compromises: as spindle speed increases, you need to decrease density to keep data integrity intact.

 

FishTankX

Platinum Member
Oct 6, 2001
2,738
0
0
Originally posted by: Idontcare
Originally posted by: FishTankX
.....Yeah. 512K SDRAM modules. We're now at 2GB. And it's been *20* years. What does that tell you? By the time we're up to 8GB SDRAMs (Finally getting to the harddrives of *6* years past) we'll be lucky to see 512MB MRAMs.

Do you actually work for Moto or have any clue about the market versus technology reasons MRAM is being produced in the current product mixes?

AHEM! And I quote.

Motorola produces the *WORLD'S FIRST* 4Mb (512K) MRAM chip. Dated Oct 28, 2003.

Motorola Produces World's First 4 Mbit MRAM Chip

Chandler - Oct 28, 2003
Motorola, Inc. has produced the world's first 4 megabit (Mbit) magnetoresistive random access memory (MRAM) chip. Select customers are currently evaluating samples of this advanced memory technology. This technology milestone is further evidence of the viability of MRAM, which potentially can replace multiple existing memory technologies.
"The fact that Motorola has demonstrated a 4Mb MRAM chip based on a 0.18-micron technology is great news for the industry," said Bob Merritt, vice president of Emerging Technologies with Semico Research Corporation.

"This is a significant advancement since Motorola's June 2002 demonstration of a 1Mb MRAM using 0.60-micron technology. That's like stepping over four or five process generations in little more than a year."

MRAM combines non-volatility with incredible endurance and speed. In many appliances, electronics systems, and consumer devices, MRAM could replace multiple memory devices. Designers may benefit from reduced system complexity, lower overall system cost, and improved performance.

MRAM's reliability and long-life may make it well-suited for applications in harsh environments or requiring long system life such as automotive and industrial. Recognizing MRAM's potential, Honeywell recently licensed Motorola's MRAM technology for military and aerospace applications.

"For the past several years, Motorola has led the industry in MRAM development with 256kb, 1Mbit and now 4Mbit devices," said Dr. Claudine Simson, chief technology officer, Motorola's Semiconductor Products Sector.

"Our 4Mb MRAM chip not only showcases our technology, it will accelerate the industry's acceptance of MRAM technology. We've made significant progress toward establishing a solid MRAM manufacturing technology capability. We're now working with lead customers on performance refinements for future market introduction and broader sampling next year."

MRAM could initially enter the market in applications that require speed, reliability and low power. MRAM is suited for applications that value the ability to do high-performance writes with unlimited read-write endurance, low write energy and/or data retention with no energy. In several instances, MRAM could lower the number of component parts and provide more reliability and competitive system cost to the customer.
_____________________________________________________________________________________________

I merely quote what I read. And the largest MRAM chip so far is made on the 0.18 micron process and is 512K large.

In 2002 they had a 1Mb (128K) chip built on a 0.60 micron process.

Now, did *you* have any idea of the state of affairs in the MRAM world?


 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: FishTankX

That's technically not correct...

I'm sorry, you'll need to be a little less obscure in referencing the specific portion of my post that you feel is not technically correct. From what I can tell, none of your post actually refutes any of the content in mine.

It depends on how often the I/O cache is flushed. Sometimes the OS just leaves something sitting in the I/O cache waiting to be flushed, and while this can be tweaked, the fact is that any random loss of power while a large chunk of data is sitting in the file cache can result in data loss.

I don't see how that negates the point that any file NOT closed prior to this FUD-spurring power loss will also be lost, regardless of the cache size on your hard drive. You are so busy arguing for these poor cached-but-unflushed files, and meanwhile all modifications to your open files are lost, and yet that doesn't seem to concern you. Very odd. The bottom line is that cache is a fact of life, and increasing its size does not all of a sudden create a Heaviside step function in the risk of losing data. If you are really so concerned about these freakishly frequent power outages, then purchase a UPS. I'm surprised you haven't embarked on a "cosmic rays will destroy your data while it sits in RAM" crusade. Sure, it happens, but what is the real impact in terms of man-hours lost, and will such impact ever drive economics and market decisions regarding high-cache hard drives for the common user... I doubt it.

Such examples are probably rare in the user world, but the area where such caches are needed most, the server market, is sensitive to such issues.

Regarding your comment on the server market: anything involving arenas sensitive to data, such as server environments, will be UPS-enabled. If the hardware owners do not find that their server investment warrants a UPS, then it is doubtful they are concerned about fast hard drives or retaining their data in the first place. I fail to see how this part of your post is relevant to the discussion of large-cache hard drives.

And as retaliation to your FUD remarks, there's another reason why a gig of cache simply wouldn't be beneficial.

And it keeps getting weirder. For a moment, consider that almost no company would bring a product to market without having optimized it on some level prior to market introduction. If they didn't, then what would be the point? Consider the WD Raptor. Remember how horrible those benchmarks were initially, prior to the release of the mass-market version? OK, now stick with me here: the logic you keep employing seems to require the reader to believe a company out there would hire employees stupid enough to consider doing exactly what you propose. AMD will never release a CPU that requires a 2.4GHz FSB unless that product was either re-optimized for it, or benefited sufficiently from the new FSB, due to inherent limitations in the existing one, so as to not need internal optimizations. Either way, even if no benefit were to be had, if AMD can implement a 2.4GHz FSB then there is no longer a technical argument against its possible creation. See, you need to decide what you don't like about high-cache hard drives, because you are all over the map here. Either you don't like them because you believe they can't technically be created, or you don't like them because you believe it isn't economical to make them, or something. For God's sake, please pick a reason and stick to it, though.

And again, you keep wanting to believe this is an argument about the technical fallacies of large-cache hard drives when, and again this is important, cache size is driven by product competitiveness and economics. Whether you and I are completely correct or incorrect on the merits of a 16MB or 128MB cache drive doesn't change the fact that market dynamics dictated that hard drives at one point in time have 512KB of cache, to be replaced by hard drives with 2MB of cache, only to be replaced by hard drives with 8MB of cache. Do you really believe your line of FUD regarding the risk of data loss, and everything being blindly rushed to market without optimization, is what kept these hard drives from coming to market any sooner or later than they did? Please.

I'd like to see you rebut that one. I find your tone not to my liking.

I couldn't care less.

I don't make my posts here just for your benefit, but for those who are reading this thread and may actually want to walk away without running in fear of large-cache hard drives showing up at newegg.com next year.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: FishTankX
I merely quote what I read. And the largest MRAM chip so far is made on the 0.18 micron process and is 512K large.

In 2002 they had a 1Mb (128K) chip built on a 0.60 micron process.

Now, did *you* have any idea of the state of affairs in the MRAM world?

Actually, I have no issue with what Moto has claimed to have accomplished, nor with your interpretation of their claims. The issue with your post is that you somehow feel the technological and economic forces that spurred increasing DRAM bit densities have anything to do with MRAM density and the potential ramping thereof.

You did not merely quote what you read. I'm sorry you feel differently.
 

Chaotic42

Lifer
Jun 15, 2001
34,864
2,028
126
Originally posted by: AndyHui
Heh. If you think hard drives haven't come far, I'd suggest you try going from a WD 10K Raptor to a Quantum Bigfoot 2.1GB.

Then we will all be able to hear your screaming from here. :p

For real.

We have an old ~400MB Quantum here at work. We use it for holding audio. Sometimes we have to clip edit, which is just bringing up the waveform and editing it like you would in Cool Edit.

I once made the mistake of trying to clip edit a 35-minute file.

I keep asking for a SCSI setup or at least a nice Raptor. I don't see it happening.

 

FishTankX

Platinum Member
Oct 6, 2001
2,738
0
0
I'm beginning to see some of the points in your posts, like the server sector and UPSs, and the interaction of file systems, and I retract my statements about power loss. The only reason I emphasize that large caches might promote file loss is that I have personally been the victim of such things. Hell, Microsoft even gives you a warning. Look in the Policies tab of any hard drive: it says specifically 'Improves performance, but a power outage or equipment failure might result in data loss', of which I have been the personal victim.

Before I disabled write caching, I had good performance on my hard disks. I turned it off (which you might view as absurd or stupid) because, before I turned it off, whenever my computer would randomly reboot or BSOD or something, I would lose all of my Kazaa files in download. This eventually drove me crazy because I was having a rash of power outages in my area, and most people do not have UPSs. The larger your cache is, the higher the chance that a stream in active modification will be destroyed. How is my personal experience FUD? *sigh*

As for the MRAM comments, I am not talking about bringing things 'to market'. I'm talking about *TECHNOLOGICAL LIMITATIONS*. Currently companies are *STRUGGLING* to get RAM, on their best processes, to break the 2 gigabit barrier at decent speeds. This is true even for the server market, which has a nearly infinite appetite for memory, especially in supercomputers. (This is an area where market limitations simply *do not apply* and the best technology has to offer is always considered.) Why? Lithography limitations. Electrical limitations.

MRAM is significantly harder to manufacture (an entirely new technology that theoretically shouldn't be any easier to manufacture than DRAM, the ultimate cheap electrical mass storage), and I doubt the shrinking of MRAM could ever outpace the shrinking of DRAM. This means that your 20GB MRAM drive is a long way off (at least 8 years), by which point storing an operating system and program apps on such a device would be entirely impractical. No need for more than 20GB? Hah! People said the same thing about 2GB drives about 8 years ago. And now even the OS itself (not counting program files) takes up at least 1.2GB, while the next version of Longhorn could eclipse 2GB. Secondly, MRAM would not be significantly cheaper than an equal-sized pool of DRAM. Thus, it is inefficient even for OS storage. This is all from a technological standpoint, not from a market standpoint. The fact is, it seems you even questioned the validity of my 512K claims on MRAM. Right now we're struggling to get past 512K. This means that MRAM technology, capacity-wise, is at least 20 years behind DRAM. This is a *TECHNOLOGICAL* and *LITHOGRAPHICAL* limitation.

You *CANNOT* have a market or research limitation before you have a *TECHNOLOGICAL* one. Thus, I rest my case. MRAM is not going to be used even for OS storage in the next 5 years. At best, it will replace flash, or serve as a cache for RAM to decrease boot times. I could see MRAM cache modules, and operating systems built around them, just not 'OS on an MRAM pool' setups. Not for marketing reasons, or even cost reasons; it's for TECHNOLOGICAL REASONS.
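
Run the gap numbers yourself (a rough Python sketch; the 18-24 months per density doubling is the usual rule of thumb, not a law):

import math

mram_bits = 4 * 2**20  # Moto's 4Mbit demo chip
dram_bits = 2 * 2**30  # ~2Gbit, where DRAM is straining today

gap = dram_bits // mram_bits       # 512x
doublings = int(math.log(gap, 2))  # 9 doublings behind
print("Density gap: %dx (%d doublings)" % (gap, doublings))
print("At 18-24 months per doubling: %.1f to %.0f years behind"
      % (doublings * 1.5, doublings * 2.0))
# 13.5 to 18 years; the same ballpark as the ~20 years above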
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
This, while independent from spindle speed, together with spindle speed determines the size of a hard drive, per platter.
Did anyone else find this statement troubling? You're saying spindle speed has something to do with the size of a hard drive??????
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
As for the MRAM comments, I am not talking about bringing things 'to market'. I'm talking about *TECHNOLOGICAL LIMITATIONS*. Currently companies are *STRUGGLING* to get RAM, on their best processes, to break the 2 gigabit barrier at decent speeds. This is true even for the server market, which has a nearly infinite appetite for memory, especially in supercomputers. (This is an area where market limitations simply *do not apply* and the best technology has to offer is always considered.) Why? Lithography limitations. Electrical limitations.

The supercomputer market doesn't drive anything. A company can't sustain itself as a memory supplier for supercomputers. The age of proprietary hardware for supercomputers has rapidly come to a close. Look at all the new SCs being announced: all off-the-shelf parts, clusters of Xeons, Apples, Opterons, and so forth. Why spend a fortune on exotic hardware and support for it when you can buy off-the-shelf parts and get the same performance for far less? With the vast, vast majority of the computing world still in 32-bit land, anything above 2GB DIMMs is basically worthless, and sinking large sums of money into them pretty pointless. With the move to 64-bit computing, maybe in the next 5-10 years we will see computing environments large enough to support and require more memory than the 8GB that a standard 4-DIMM 64-bit platform can support.

Did anyone else find this statement troubling? You're saying spindle speed has something to do with the size of a hard drive??????

No, because it basically does, for both mechanical and market-driven reasons. 7200RPM drives use 3.5" platters, 10k RPM drives use 3" platters, and 15k RPM drives use 2.4-2.6" platters.
 

FishTankX

Platinum Member
Oct 6, 2001
2,738
0
0
If that's true, then why on earth would companies resort to stacking modules on top of each other to get higher final DIMM sizes? I find it troubling that, if what you're saying is true, lithography could technically push DIMMs bigger, and yet they still resort to stacking chips to get higher densities in DIMMs.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Your experience is not to be considered FUD; apologies if I implied or stated that, as it wasn't the point I was trying to make.

FUD, defined as fear, uncertainty, and doubt, can be written into one's post to imply another is employing sinister tactics to besmirch a product/business entity/individual, or can be employed to imply that another is merely spreading fear, uncertainty, and doubt (which may very well be legitimate). Your experience is not FUD, but leveraging your experience to invoke FUD in others is not to be desired. That was all I was trying to point out, and I may still be doing it poorly.

I use my cached drives to their fullest, but then again, I have a UPS on my desktop (it cost $60) and am posting from my battery-enabled laptop now. I do not use ECC RAM, though, so even I do not go to all possible lengths to protect my data. It all comes down to each user's tolerance for risk of data loss. I'd buy a 128MB-cache HD right now if it were faster than my current HD and performed well enough to be deemed worthy of the extra cost.

I think we are both looking at the MRAM situation with the same understanding, based on your most recent post. There is no need to explain your perception of the issues to me. I too hold reservations regarding the technological and economic viability of ramping MRAM into the Gb-density regime. Then again, my employer (TI) has announced its intentions in the FRAM market, so my own inclinations toward MRAM may be biased simply due to the quantity of information on FRAM and MRAM at my disposal via my profession.

Regardless, as a consumer I am personally hoping someone either makes DRAM perform as fast as SRAM or makes it non-volatile like flash. As it stands now, DRAM is not really optimized to do anything beyond what it is already employed to do, i.e. perform the role of bulk system RAM. And at the end of the day, solid-state drives will simply not be as stellar as people would like them to be, simply due to PCI bus limitations. Even if we migrate to PCI Express or 66MHz/64-bit PCI, the bottom line is that the RAM in the drive is horribly bandwidth-limited. We can readily saturate a PCI bus with a simple RAID-0 array, so why spend the money for solid state?
 

FishTankX

Platinum Member
Oct 6, 2001
2,738
0
0
Originally posted by: Jeff7181
This, while independent from spindle speed, together with spindle speed determines the size of a hard drive, per platter.
Did anyone else find this statement troubling? You're saying spindle speed has something to do with the size of a hard drive??????

Maybe that statement wasn't too clear.

What I meant is that the higher the spindle speed is, the lower the possible areal density is (due to the fact that a higher areal density beyond a certain max would make the hard drive less than 100% accurate and consequently make it useless), and consequently, the lower the size of the platter. Areal density itself is the main determiner of the size of the platter, in GBs. But areal density has to decrease for spindle speed to increase.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
Sorry, sport, there is simply nothing you can produce that shows there is some great demand for 4GB+ DIMMs. Sure, there may be isolated customers looking for such hardware, but again, if the market were large enough they would likely already be available. If anything, the market for larger DIMMs has stagnated recently as memory prices have dropped to the point where most people have more than they really need. Gone are the days when a memory upgrade pretty much guaranteed much-improved system performance. Few customers need 2GB total, let alone over 8GB. If you do, there are solutions out there that support 16+GB.

Even if we migrate to PCI Express or 66MHz/64-bit PCI, the bottom line is that the RAM in the drive is horribly bandwidth-limited. We can readily saturate a PCI bus with a simple RAID-0 array, so why spend the money for solid state?

The benefit of solid state is in microsecond access times, not raw throughput. 133MHz PCI-X has a theoretical 1GB/s of throughput, which is ridiculously beyond what anyone would need in all but the most exotic situations.
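
Where the 1GB/s figure comes from, if anyone wants to check (it's just bus width times clock):

bus_width_bits = 64  # PCI-X is a 64-bit bus
clock_mhz = 133      # the 133MHz flavor
print("%.0f MB/s" % (bus_width_bits / 8 * clock_mhz))  # ~1064 MB/s, call it 1GB/s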

Areal density itself is the main determiner of the size of the platter, in GBs. But areal density has to decrease for spindle speed to increase.

What you are describing is platter capacity, not size, and no, spindle speed does not dictate areal data density. In theory it should; in practice, it doesn't. The reason higher-spindle-speed drives have lower-capacity platters is that the size (physical dimensions) of the platters is smaller.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
Originally posted by: FishTankX
Originally posted by: Jeff7181
This, while independent from spindle speed, together with spindle speed determines the size of a hard drive, per platter.
Did anyone else find this statement troubling? You're saying spindle speed has something to do with the size of a hard drive??????

Maybe that statement wasn't too clear.

What I meant is that the higher the spindle speed is, the lower the possible areal density is (due to the fact that a higher areal density beyond a certain max would make the hard drive less than 100% accurate and consequently make it useless), and consequently, the lower the size of the platter. Areal density itself is the main determiner of the size of the platter, in GBs. But areal density has to decrease for spindle speed to increase.

That's assuming technology never improves... with better accuracy, there would be no need to decrease areal density in order to increase spindle speed.
 

FishTankX

Platinum Member
Oct 6, 2001
2,738
0
0
Idontcare: the bandwidth will arrive when the technology to utilize it is in place.

If you could design a chip that would glue together multiple 32MB flash chips, and if each flash chip could provide 3MB/s (which isn't that much of a stretch by flash standards), then 30 of them together could push 90MB/s (the quick sketch below shows the math), and with their stellar seek times (something like one ms), as shown here, they would be the optimum server platform. The only downside to such a technology would be that the server would have to have at least a couple of thousand pins to access all those chips at once, the PCB would be a complete and total nightmare, and the cost per MB would be astronomical. Another solution to the trace/pin problem would be to have a chip control a set of maybe 8 flash cells, and then have a master controller controlling all of the slave controllers that control the flash cells (did that make sense?), an upside-down tree architecture; that would solve the pin count. But the technology would still be incredibly expensive!
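
The striping math, as a quick sketch (the 3MB/s per chip is my assumption from above):

import math

chip_mb_s = 3.0  # assumed per-chip sequential speed
chips = 30
print("%d chips in parallel: %.0f MB/s" % (chips, chips * chip_mb_s))    # 90 MB/s
print("Chips needed to pass 100 MB/s: %d" % math.ceil(100 / chip_mb_s))  # 34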

The technology is available, though. The Cray supercomputers have solid-state storage arrays that can hit 80GB/s. So it's technologically feasible, and probably not terribly bulky either. Just incredibly expensive.

MRAM *probably* wouldn't be much better in expense, but it would be worlds easier to design for. It seems like any electrical storage technology is significantly more expensive than its mechanical counterpart. What do you expect, though? Mechanical drives are orders of magnitude simpler, construction/fabrication-wise, and don't require expensive lithography equipment.
 

FishTankX

Platinum Member
Oct 6, 2001
2,738
0
0
Originally posted by: Jeff7181
Originally posted by: FishTankX
Originally posted by: Jeff7181
This, while independent from spindle speed, together with spindle speed determines the size of a hard drive, per platter.
Did anyone else find this statement troubling? You're saying spindle speed has something to do with the size of a hard drive??????

Maybe that statement wasn't too clear.

What I meant is that the higher the spindle speed is, the lower the possible areal density is (due to the fact that a higher areal density beyond a certain max would make the hard drive less than 100% accurate and consequently make it useless), and consequently, the lower the size of the platter. Areal density itself is the main determiner of the size of the platter, in GBs. But areal density has to decrease for spindle speed to increase.

That's assuming technology never improves... with better accuracy, there would be no need to decrease areal density in order to increase spindle speed.

Not necessarily.

Areal density is increased by two things: increased accuracy and platter technology.
 

FishTankX

Platinum Member
Oct 6, 2001
2,738
0
0
Originally posted by: Pariah
Sorry, sport, there is simply nothing you can produce that shows there is some great demand for 4GB+ DIMMs. Sure, there may be isolated customers looking for such hardware, but again, if the market were large enough they would likely already be available. If anything, the market for larger DIMMs has stagnated recently as memory prices have dropped to the point where most people have more than they really need. Gone are the days when a memory upgrade pretty much guaranteed much-improved system performance. Few customers need 2GB total, let alone over 8GB. If you do, there are solutions out there that support 16+GB.

Even if we migrate to PCI Express or 66MHz/64-bit PCI, the bottom line is that the RAM in the drive is horribly bandwidth-limited. We can readily saturate a PCI bus with a simple RAID-0 array, so why spend the money for solid state?

The benefit of solid state is in microsecond access times, not raw throughput. 133MHz PCI-X has a theoretical 1GB/s of throughput, which is ridiculously beyond what anyone would need in all but the most exotic situations.

Areal density itself is the main determiner of the size of the platter, in GBs. But areal density has to decrease for spindle speed to increase.

What you are describing is platter capacity, not size, and no, spindle speed does not dictate areal data density. In theory it should; in practice, it doesn't. The reason higher-spindle-speed drives have lower-capacity platters is that the size (physical dimensions) of the platters is smaller.

I highly doubt that.

If that's true, how come the Raptor has to move up to 2 platters to hit 72GB when there are 80GB 5400RPM platters? Surely half an inch shouldn't account for a *halving* of areal density...

And the 15k drives use 18GB platters. That's another halving, which doesn't make any sense. It should be a 50% decrease at most, if that.
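
You can sanity-check the geometry with an annulus model (a rough sketch; the inner hub radius is a guess, and real drives don't record over the entire surface, so treat the numbers as ballpark):

import math

def band_area(outer_radius_in, inner_radius_in=0.75):
    # usable recording band of a platter, modeled as an annulus (sq. inches)
    return math.pi * (outer_radius_in**2 - inner_radius_in**2)

full = band_area(3.5 / 2)    # 3.5" platter (5400/7200RPM class)
raptor = band_area(3.0 / 2)  # ~3" platter (10k class)

print('3" vs 3.5" usable area: %.0f%%' % (100 * raptor / full))  # ~68%
# At equal areal density, ~68% of an 80GB platter is ~54GB, not 37GB,
# so the smaller diameter alone really doesn't explain the halving.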