
hard drive speed... many questions regarding possibilities of faster output

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
I have been reading a lot about hard drives lately, and I came upon a question that stumped me.

Why can't you use more than one "eye" to read the information off the platter? I have taken apart a few drives to see the guts, and there is always a single arm with the insanely small parts on the end that obviously read the platter. Why can't you just use more than one of those?

If you had 4 arms instead of 1, couldn't that really help the seek speed? I know there would be a lot of problems keeping them all in sync, but all of that engineering could be taken care of by all of those smart people. So why is this not possible? Or why hasn't it been done?

Maybe you would have to make the drive longer, but I don't see why that would be a problem as long as it still fit in a 3.5'' bay... I mean, you could make them taller and longer. Hell, look at the GeForce 5: it takes 2 expansion bays!

Also, why don't they make IDE drives that spin faster than 7200 RPM? What is holding them back? Obviously reading from media spinning faster than that is possible... SCSI is up to 15k, so what's the deal?

And last but not least... what is with the buffer? Why can't they just put a TON of buffer memory on the drives? I can't think of a reason why they limit most drives to 2 MB, the special ones to 8, and SCSI drives to 16 (well, a few go to 32 if I'm not mistaken). Why can't they put in 64, or even 128?

One problem I see with that: you may want something off the drive, so it sends it to the cache, and then you want something else instead, but the first bit of data has already been cached and will be sent first, which would get in the way and slow things down until that data packet is discarded... or maybe I'm totally wrong on that, but that's what I thought.

Anyone care to shed some light on this?
 

damonpip

Senior member
Mar 11, 2003
635
0
0
I think a while ago I saw something about some company trying to use more than one head to read each platter, but I have no idea where I saw it.

I read before that the reason IDE tops out at 7200 RPM is that SCSI tech is developed first and then slowly transitioned to IDE; that also explains why 15k is so damn expensive.

For the buffer, there may be a few reasons. First, competition is very fierce, and even adding $5 for extra buffer could hurt them. Also, much more importantly, you can't just add buffer: new firmware has to be written at the very least, and more likely new hardware has to be designed. Super expensive SCSI controllers can get extra cache, which is similar to buffer.
 

RyanM

Platinum Member
Feb 12, 2001
2,387
0
76
I rather think having two heads would be better than one. Literally.

They could read the platter as if it were striped. One head reads the innermost ring, the other head reads the ring just after it. As the first head comes to the end of the first ring, where the second head began reading, it skips up to the next ring, the second head skips from ring 2 to ring 4, and so on and so forth. As they read, the hard drive's firmware splices the two streams together into one contiguous stream, caches it, and ships it off when the CPU needs it.

This would increase data transfer, maybe not doubling it, but certainly by 40 to 50%. And it would add redundancy: if the drive detects a malfunction of one head, it can move that head off the platter and just stream the data from the remaining head, allowing the data to be transferred elsewhere before the drive has to be RMA'd.

I'm no engineer, but a similar system was done with CD-ROM drives (TrueX), reading multiple tracks at once. I see no reason something like it couldn't be developed for magnetic drives.
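The striping scheme above can be sketched as a toy merge. Everything here is illustrative (real firmware operates on sectors and servo timing, not Python lists), but it shows how two alternating track streams splice back into one contiguous stream:

```python
# Toy model of the two-head "striped" read scheme: head A reads
# even-numbered tracks, head B reads odd-numbered tracks, and the
# firmware interleaves the two streams back into logical order.

def striped_read(tracks):
    """tracks: list of track payloads in logical order."""
    head_a = tracks[0::2]   # head A covers tracks 0, 2, 4, ...
    head_b = tracks[1::2]   # head B covers tracks 1, 3, 5, ...
    merged = []
    for i in range(max(len(head_a), len(head_b))):
        if i < len(head_a):
            merged.append(head_a[i])
        if i < len(head_b):
            merged.append(head_b[i])
    return merged

data = [f"track{i}" for i in range(8)]
assert striped_read(data) == data  # spliced stream is in original order
```

Since both heads deliver data concurrently, the splice is what buys the sequential-throughput gain; the merge itself is trivial bookkeeping.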
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Originally posted by: MachFive
I rather think having two heads would be better than one. Literally.

They could read the platter as if it were striped. One head reads the innermost ring, the other head reads the ring just after it. As the first head comes to the end of the first ring, where the second head began reading, it skips up to the next ring, the second head skips from ring 2 to ring 4, and so on and so forth. As they read, the hard drive's firmware splices the two streams together into one contiguous stream, caches it, and ships it off when the CPU needs it.

This would increase data transfer, maybe not doubling it, but certainly by 40 to 50%. And it would add redundancy: if the drive detects a malfunction of one head, it can move that head off the platter and just stream the data from the remaining head, allowing the data to be transferred elsewhere before the drive has to be RMA'd.

I'm no engineer, but a similar system was done with CD-ROM drives (TrueX), reading multiple tracks at once. I see no reason something like it couldn't be developed for magnetic drives.

Exactly my thoughts on the 2-or-more-head system...

What about the buffer, though?

damonpip gave some good info, but I still don't see why they don't do it. Sure, it would cost more at the beginning, but it would make the drives SO much faster, and maybe it would be a further step toward fulfilling the potential of an IDE drive... or something like that.

I think a dual- or quad-head system would be sweet and probably a TON faster, cutting seek times down in proportion to the "head count" in the drive.

If SCSI tech is developed first, why haven't they moved to 10k yet? That has been out for a good while.
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
There was a SCSI drive 7 or 8 years back that had 2 heads per platter side, maybe even a 4-head-per-side version, but it was short-lived due to higher cost and more moving parts to fail.

The time you're talking about speeding up has nothing to do with "seek"; it's the access time. With 2 heads, the drive never has to spin more than half a rotation to get any piece of data into position. Basically, the access time of a 7200 RPM dual-armature drive is the same as that of a normal 14400 RPM drive. And sequential read/write speeds are doubled as well.
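The access-time claim is simple arithmetic: with n equally spaced heads, a sector waits on average 1/(2n) of a revolution before passing under some head. A quick check (the function name is mine, not any drive specification):

```python
def avg_rotational_latency_ms(rpm, heads_per_side=1):
    """Average wait for a sector to rotate under the nearest head.
    With n equally spaced heads, that's 1/(2n) of one revolution."""
    rev_ms = 60_000 / rpm                # one revolution, in milliseconds
    return rev_ms / (2 * heads_per_side)

print(avg_rotational_latency_ms(7200))      # ~4.17 ms, normal 7200 RPM drive
print(avg_rotational_latency_ms(7200, 2))   # ~2.08 ms, dual-head 7200 RPM
print(avg_rotational_latency_ms(14400))     # ~2.08 ms, same as 14400 RPM
```

So a 7200 RPM drive with heads 180 degrees apart does indeed match the rotational latency of a 14400 RPM single-head drive, exactly as glugglug says; seek (arm movement) time is unchanged.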
 

damonpip

Senior member
Mar 11, 2003
635
0
0
There are a few 10k IDE drives coming out soon, IIRC. I'd expect the main thing slowing them down is developing the technology enough to bring costs down to "IDE" standard, i.e. $200 for an EXPENSIVE drive; an expensive SCSI drive can easily run $500.
 

RyanM

Platinum Member
Feb 12, 2001
2,387
0
76
Glug: So what you're saying is, a dual-head drive at 7200 RPM would have 14400 RPM throughput in nearly every measure of performance? Hrm... to me, the added cost of a second head and the more complex logic seems like it would offer better price/performance than a drive that needs extra vibration dampening, higher-quality motors, and such.

Thoughts?
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Originally posted by: MachFive
Glug: So what you're saying is, a dual-head drive at 7200 RPM would have 14400 RPM throughput in nearly every measure of performance? Hrm... to me, the added cost of a second head and the more complex logic seems like it would offer better price/performance than a drive that needs extra vibration dampening, higher-quality motors, and such.

Thoughts?

Very well put... I am curious to know what people have to say about this as well.

I still don't see why the added cost would be bad. Once people experienced the benefits, I'm sure the added cost wouldn't hurt them.
 

damonpip

Senior member
Mar 11, 2003
635
0
0
Yes, I agree, there'd be a huge jump in performance. I can't imagine why they haven't made a SCSI drive like this yet; maybe there are some other issues?
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Originally posted by: damonpip
Yes, I agree, there'd be a huge jump in performance. I can't imagine why they haven't made a SCSI drive like this yet; maybe there are some other issues?

Yeah, I don't see why drives like this aren't mainstream at least for servers, if not for the average consumer.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
I am not sure, but I would think keeping the 2 heads exactly synced would be a problem. Metal (and other materials) expands as it heats, and with all those parts zooming around, adjustments have to be made continuously to keep the head on the correct spot. So you couldn't keep the two-plus heads linked by purely mechanical means: you would lose quite a bit of density on the HD to compensate for the different rates of expansion/twisting/tweaking that the metal goes through as the HD warms up.

So then you would need separate motors and controllers for each head, and then you run into the problem of keeping it all in sync anyway. I suppose when reading from the HD you'd just use whichever head got there first, but that wouldn't work for writing.

Now you've pretty much reached the same complications as sticking 2+ HDs in one case, and people already use RAID arrays that do almost exactly the same thing, with the plus side that if one of the components failed you could swap out that HD and replace it with a new one. If just one of 2/4/6 heads failed, the whole HD would be toast. It probably would work, and you could get faster throughput, but would it work commercially?

But how about this idea? I was thinking about HD burst speeds with read-ahead logic, HD caches, and main memory management on a motherboard. The Linux kernel (which I know a bit about) tries to use all of the RAM all of the time. Once info gets put up there, it doesn't want to release it unless it has to. It does this by assigning values to each page: how long ago it was accessed and how many times it has been accessed. That way it can strike a balance between data that will only be used once (like copying info from a CD-ROM) and data that will be used over and over again (like a texture in Quake). The kernel can't know what you will want the data for, but the programmers built in a kind of educated-guess system. The goal, of course, is to avoid page faults as much as possible (i.e. make sure the info is in RAM where you need it, instead of having to fetch it off the HD).
I got all this stuff from a lecture log and its visual aids, if you're interested...

Now, HDs have like 8 MB of cache built into them for read-ahead type stuff and to act as a buffer, right? So why not put an extra module of RAM between the HD and the motherboard, maybe an extra 3 gigs of cheap, high-latency RAM along the same lines as PC-66 or PC-100? You could create a device that intercepts information requests coming from the motherboard, passes each request along while making a record of it, and copies the data as it passes through. The next time your motherboard needs the same info, the device forwards its copy instead of reading from the HD, and since it is intercepting everything, it would simply update its copy whenever the data changes on the HD. You could also use it as a buffer for writing to the HD (like what Linux does with its ext2/3 filesystems: asynchronous reading and writing), and you could select how much of the buffer is dedicated to reading vs. writing. If the RAM buffer filled up, or the HD couldn't keep up, the whole process would just slow down to the normal speed of the HD. But since the HD gets the same information requested from it over and over, with a lot more reading than writing, most transactions would be dealt with in the buffer RAM, and the HD could be kept spinning with the requests the buffer RAM couldn't handle. After a while, the buffer RAM would hold a record of the most-accessed data, and it would simply auto-pass-through the HD requests that were unusual or one-time.
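The intercept-and-copy behavior described above can be sketched in a few lines. This is a toy model under loose assumptions: the `Disk` class, block granularity, and a plain LRU eviction policy all stand in for whatever a real hardware implementation would use (drag's proposal scores by access frequency as well as recency):

```python
# Minimal sketch of the "megabuffer": a RAM layer between host and disk
# that serves repeat reads from memory and defers writes to idle time.
from collections import OrderedDict

class Disk:
    """Stand-in for the physical drive, counting mechanical reads."""
    def __init__(self):
        self.blocks = {}
        self.reads = 0

    def read(self, block):
        self.reads += 1
        return self.blocks.get(block, b"")

    def write(self, block, data):
        self.blocks[block] = data

class MegaBuffer:
    def __init__(self, disk, capacity_blocks):
        self.disk = disk
        self.capacity = capacity_blocks
        self.cache = OrderedDict()   # block -> data, least recent first
        self.dirty = {}              # writes waiting for disk idle time

    def read(self, block):
        if block in self.cache:          # intercepted: no disk access
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.disk.read(block)     # miss: pass through, keep a copy
        self._insert(block, data)
        return data

    def write(self, block, data):
        self._insert(block, data)        # update the RAM copy right away
        self.dirty[block] = data         # defer the mechanical write

    def flush_when_idle(self):
        for block, data in self.dirty.items():
            self.disk.write(block, data)
        self.dirty.clear()

    def _insert(self, block, data):
        self.cache[block] = data
        self.cache.move_to_end(block)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # drop least recently used

disk = Disk()
buf = MegaBuffer(disk, capacity_blocks=1024)
buf.write(7, b"hot data")
assert buf.read(7) == b"hot data" and disk.reads == 0  # served from RAM
buf.flush_when_idle()                                  # disk catches up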

Just think: out of an 80 gig HD, how much of its information is being accessed all the time, and how much is mostly static? The core of the OS is only going to take up about 500 to 700 megs at most, and how much of that is ever really used in an hour of normal use? You would basically be running your entire computer from only about 2 gigs of information at most. Sure, you could just put a massive amount of RAM on your motherboard, but how expensive is that? It all must be the same speed, and very fast RAM costs a lot; the RAM buffer could use very slow RAM and still be something like 100,000 times faster (in latency) than any hard drive. Just give the HD and buffer their own battery backup: if the computer loses power suddenly, then for a little bit (less than 2-3 seconds is a rough guess) the HD could keep spinning, writing the last bits of information. If the small battery couldn't finish the transactions, it would shut the HD off and go into suspend mode, keeping the RAM buffer intact until you restore power. You wouldn't necessarily have to boot, either; once power comes back, the HD could finish what it was doing before the shutoff, then power down again.

The main problem I could see would be large databases, where the information is mostly static; the mega RAM buffer would not be all that useful there. But in large application servers and information servers like web servers, this thing could provide the same performance as a large RAID array, with a much smaller RAID.

Just an idea... :/
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Thanks for the links, drag; very good info and ideas.

Regardless of how hard it may be, I'm just curious whether it is even possible to make hard drives like I described in my first post work and actually be almost twice as fast,

because multiple-head hard drives would pwn j00.

I still don't understand why a huge buffer would hurt performance, or, if not hurt it, not help it very much.
 

Hazer

Member
Feb 16, 2003
104
0
0
A dual-head-per-platter-side system would not increase performance twofold. You're saying that having 2 heads per platter (one to read the outer half of the platter while another reads the inner half) would speed up reading/writing. But seek time would still be the same.

For IDE, some of the best seek times on the more expensive units are around 9 ms, and they average 45 MB/s, or 45 kB/ms. So you can transfer 405 kB in the same time it takes to find each file. A dual-head system would double performance for reading large files; it would do nothing for reading small files.
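Hazer's numbers can be turned into a rough effective-throughput model: one seek per file plus transfer time, treating 1 kB/ms as roughly 1 MB/s. This is back-of-the-envelope arithmetic, not a real drive model:

```python
def effective_mb_per_s(file_kb, seek_ms=9.0, rate_kb_per_ms=45.0):
    """Average throughput for one file, including one seek."""
    total_ms = seek_ms + file_kb / rate_kb_per_ms
    return file_kb / total_ms  # kB/ms, i.e. roughly MB/s

# 4 kB file: the seek dominates, so doubling transfer rate barely helps.
print(effective_mb_per_s(4))                             # ~0.44 MB/s
print(effective_mb_per_s(4, rate_kb_per_ms=90.0))        # ~0.44 MB/s
# 100 MB file: transfer dominates, dual heads nearly double throughput.
print(effective_mb_per_s(102_400))                       # ~44.8 MB/s
print(effective_mb_per_s(102_400, rate_kb_per_ms=90.0))  # ~89.3 MB/s
```

This is exactly the large-file/small-file split in the post: doubling the sequential rate helps only once transfer time outweighs the fixed 9 ms positioning cost.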

Problems with dual-head systems: the arm holding the head looks a lot like an old LP tonearm, sitting at an angle to the disk. Attaching another arm means it would extend from the original and then bend to run parallel to the original head; it would look like an 'h'. You would need to do this because if the arms were at different angles, you couldn't keep equal spacing when switching from one track to the next. You would lose disk capacity, since the outer arm would shift more than the inner arm from track to track. Also, since writing includes the start sector, one head would record this info while the other head would simply do nothing, contributing to further loss of disk capacity. Add the fact that the arm is driven by a voice-coil system: you're adding weight to the arm assembly for each arm (one voice coil moves all of the arms), and it would have to be held to stricter tolerances (costing more to design).

Add also that you would have to split the paths and rejoin the data (you're reading/writing 2 different heads), which would complicate the controller design.

If you used a 2-head system in which both heads read the same track, you could improve seek time (only); it would not increase transfer rate. Although, I'm not sure having an extra head 'looking ahead' on the same track would beat the 9 ms limit. I always thought RPM had the most impact on seek time; hence most 15k RPM drives are down to 3.5 ms seek times.
 

Hazer

Member
Feb 16, 2003
104
0
0
To drag: Interesting thoughts, but I think this thread was about improving the HDD's access to its platters. Since most HDDs already have a buffer, doing anything beyond that is somewhat beside the point.

Using ATA133 as the example: the interface between the HDD controller, the IDE cable, and the PCI bridge can move data at a maximum of 133 MB/s (which fits: 33 MHz PCI x 32 bits / 8 bits/byte). But the problem is that most EIDE ATA drives max out at a sustained transfer rate of 40-45 MB/s, due to seek time and the read/write transfer rate. A buffer (which is now standard) only helps a little against that HDD limitation. The way it works: software asks the CPU to get something from the HDD. The CPU sends the command, and the PCI bridge tells the HDD it needs something. The HDD looks for the file and starts loading it into the buffer. The PCI bridge then starts transferring the HDD buffer across the IDE cable to main system memory; it only has a 32-bit path. While the PCI bridge works on the first part of the data, the HDD keeps filling the cache as fast as it can. If the file wanted is smaller than the cache, the HDD will still keep filling the cache, because the CPU may (or may not) ask for the next file that starts directly after the first one. This is where 'look-ahead' caching comes into play: a trick that increases performance, but only in a minority of cases.
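The look-ahead behavior described above can be sketched like this. The class, block granularity, and read-ahead depth are invented for illustration; the point is only that one mechanical access can satisfy several sequential requests:

```python
# Sketch of look-ahead caching: when the host asks for block N, the
# drive also pulls the blocks that follow into its buffer, betting
# that the next request will be sequential.

class ReadAheadDrive:
    def __init__(self, media, readahead=8):
        self.media = media          # block -> data (the platters)
        self.readahead = readahead
        self.buffer = {}
        self.platter_reads = 0      # mechanical accesses performed

    def read(self, block):
        if block not in self.buffer:
            # Miss: fetch the block plus the next few in one pass.
            for b in range(block, block + self.readahead):
                if b in self.media:
                    self.buffer[b] = self.media[b]
            self.platter_reads += 1
        return self.buffer.get(block)

media = {n: f"data{n}" for n in range(64)}
drive = ReadAheadDrive(media)
for n in range(16):                 # sequential read of 16 blocks...
    drive.read(n)
print(drive.platter_reads)          # ...costs only 2 mechanical accesses
```

For a random access pattern, nearly every request would miss the buffer, which is why Hazer calls the gain marginal outside sequential workloads.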

The real limitations of an HDD always come back to seek time, RPM, and disk density. Caches and other techniques are simply a way to get a marginal ~10% increase.
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Originally posted by: Hazer
To drag: Interesting thoughts, but I think this thread was about improving the HDD's access to its platters. Since most HDDs already have a buffer, doing anything beyond that is somewhat beside the point.

Using ATA133 as the example: the interface between the HDD controller, the IDE cable, and the PCI bridge can move data at a maximum of 133 MB/s (which fits: 33 MHz PCI x 32 bits / 8 bits/byte). But the problem is that most EIDE ATA drives max out at a sustained transfer rate of 40-45 MB/s, due to seek time and the read/write transfer rate. A buffer (which is now standard) only helps a little against that HDD limitation. The way it works: software asks the CPU to get something from the HDD. The CPU sends the command, and the PCI bridge tells the HDD it needs something. The HDD looks for the file and starts loading it into the buffer. The PCI bridge then starts transferring the HDD buffer across the IDE cable to main system memory; it only has a 32-bit path. While the PCI bridge works on the first part of the data, the HDD keeps filling the cache as fast as it can. If the file wanted is smaller than the cache, the HDD will still keep filling the cache, because the CPU may (or may not) ask for the next file that starts directly after the first one. This is where 'look-ahead' caching comes into play: a trick that increases performance, but only in a minority of cases.

The real limitations of an HDD always come back to seek time, RPM, and disk density. Caches and other techniques are simply a way to get a marginal ~10% increase.

I like your statements and all, but I don't think you are correct. Granted, I know nothing really, but I don't think you understood me correctly. I didn't mean putting two arms on the same head so it looks like an 'h'; I meant 2 different heads on opposite sides of the platter. They would not have to be joined together. And if you had a HUGE cache, I don't think that would be beside the point at all. Having a huge cache would improve performance, end of story. I thought about this all day, and I don't see how it could possibly NOT help.
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
Hazer:

I think you have the wrong idea about how 2 heads per platter side works (worked). It's not one head for the inner tracks and one for the outer. The heads are spaced 180 degrees apart on the disk, so a track is read every half spin by using both heads simultaneously. Seek times are not improved, but access times are like those of a drive with double the RPM.
 

Hazer

Member
Feb 16, 2003
104
0
0
MrDudeMan: HDDs already have a head for each side of each platter; a 2-platter HDD has 4 heads. All 4 heads are attached to a spindle that is rotated by a single voice coil. The 'h' arm was my take on the 2-heads-per-platter-side idea that MachFive brought up.
Also, yes, the larger the cache, the better the performance. But the increase is marginal. I thought you were trying to bring up something that would really increase HDD performance. Increasing the cache is already available to HDD manufacturers, but they don't put on 64 MB caches for a reason: it costs too much money, and the performance increase doesn't warrant it. You have to remember, cache is a trick to increase performance; its effect on overall performance is minimal. An example I've used before: if you want to fill a bucket of water, but the tap only pushes out so much water per second, one way to fill the bucket faster is to fill a balloon first, then pop it over the bucket. The bucket fills faster, but you're neglecting how long it took to fill the balloon.

glugglug: So you're saying to have an arm on each side of the platter, not to improve seek time, but just for transfer rate. This means data would be read/written on exactly half of the platter each, cutting the sectors right down the middle. So, kind of like a 2-HDD RAID system: half of the data would be written on one side, and the other half on the other side. This would require two voice coils that would have to be totally in sync with one another. To do that, you're talking about a large increase in seek time (increase, as in it would take longer) for the two voice coils to make sure they were on the same track before ever getting to the read/write command. Not to mention, since one set of heads would be the determining set to start the reading/writing process, whenever it seeks the start point it would have to wait for its side of the disk to come around: another increase in seek time. And then, of course, there is time wasted whenever the arm has to switch tracks; it would have to switch tracks twice as often.
Manufacturing alone means adding a whole other set of heads and a voice coil, not to mention the headache of programming a sync routine into the controller.



Keep thinking, guys; you may just stumble onto a project I've been working on for the past 3 months. It would not change any of the platter/voice-coil/arm/head manufacturing, but it would increase transfer rate A LOT. Unfortunately, my idea doesn't improve seek time, so for small files things would be just as bogged down as on an original drive. I'm still working on that, but I don't think anything to date is going to help. The only thing I've seen that improved seek time enough to be mentioned has always been RPM.

PS: A nice link to HDD innards



Link
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
Originally posted by: Hazer
Not to mention, since you would have one set of heads be the determining set to start the reading/writing process, whenever it seeks the start point, it would have to wait for the side of the disk it's supposed to access to come around. Another increase in seek time.

Either head can read/write any part of the disk, as determined by which head the data happens to spin under next, so that you never have to spin more than halfway around (once the heads have seeked to the right track). Although I'm sure this adds a lot of complexity to the buffering logic, since you can't count on the 180-degree head separation being exact.

PS: A nice link to HDD innards
Link

The way that page (which really isn't THAT old) is dated is funny:
Fierce competition between the drive manufacturers has pushed the cost of one MB of data to a very small number of $1 to $2 per MB making a HDD of several GB in capacity relatively inexpensive and easily affordable by almost anyone.

 

Sahakiel

Golden Member
Oct 19, 2001
1,746
0
86
This is what, the third thread on this topic? I forget... but anyway...

"Bigger physical size" : Been there, done that. Anybody remember Quantum's Bigfoot series? Ever wonder why it was discontinued? Yeah, you get the picture.

"Internal RAID idea" : Difficult, I would think. At least, to fit it into a standard drive bay. So, if you stack two hdd's and splice a raid controller in between to interface with the IDE cable, that would work. However, that becomes rather pointless rather quickly.
If you do try to do something like that in a single hard drive, the sheer complexity, associated cost, and associated fail rates seem to increase by a factor of two or more. My statistics is limited, so I'm not quite sure. At any rate, I do know that fail rate would balloon when attempting something like that, and if there's anything learned from the 75GXP fiasco, it's that high failure rates will completely destroy even a deeply entrenched company's reputation.

"Bigger cache size" : Reason for so many HDD's coming out with 8 MB cache may not be due to "the customer wants it." It's probably has more to do with the fact that manufacturing old 2MB modules is cost prohibitive. Memory manufacturers are losing money left and right and having an old fab cranking out old chips which cost just as much (if not more) to make than newer chips is not a good business decision. Hence, pop out only one type of chip (the newest) and give whining HDD manufacturers the finger.
Also, as you increase cache size, performance increase vs cost increase declines rapidly. If increasing cache is such an elegant solution, there would be no reason for RAM in your desktop. L1 (there would be no L2) cache on chips would increase to the GB range to accomodate all the data necessary to run your computer and HDD's would exist only as data storage when the power's off.
If you're beginning to think of arguing for using SDRAM between the HDD and the PCI bus, do remember that some companies do manufacture RAM drives, and note how expensive they are and the disadvantages of using such a system for everyday use. Also note that the type of cache scheme you're describing is already in use with just about every operating system on the planet. Why do you think we even have RAM in our computers in the first place?
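The diminishing-returns point is easy to see in a toy simulation: LRU hit rate on a synthetic hot-spot workload grows far more slowly than the cache does. All numbers here are synthetic, nothing is drive-specific:

```python
# LRU hit rate vs cache size under a Zipf-like (hot blocks get most
# hits) access pattern. Illustrates diminishing returns per block of
# cache added; the workload and sizes are made up for the example.
import random
from itertools import accumulate
from collections import OrderedDict

def hit_rate(cache_size, accesses, n_blocks=1000, seed=1):
    rng = random.Random(seed)
    cum = list(accumulate(1 / (k + 1) for k in range(n_blocks)))
    cache, hits = OrderedDict(), 0
    for _ in range(accesses):
        block = rng.choices(range(n_blocks), cum_weights=cum)[0]
        if block in cache:
            hits += 1
            cache.move_to_end(block)        # refresh recency
        else:
            cache[block] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict least recently used
    return hits / accesses

for size in (8, 32, 128, 512):
    print(size, round(hit_rate(size, 20_000), 2))
```

Each 4x jump in cache size buys roughly the same absolute hit-rate gain, so the gain per block of cache (and per dollar) keeps shrinking, which matches Sahakiel's argument.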

"Higher RPMs for IDE" : I think you're overlooking a key component to SCSI drives. Yes, they have faster RPMs, and yes, it's partly due to SCSI being pioneers for IDE technology. However, if you'll take note, SCSI drives do not have anywhere near the areal density of IDE drives. Why? Well, it's quite difficult to maintain data integrity when your hard drive is spinnning faster than your car's engine. The highest data density for SCSI that I know of is 36 GB platters, and only on the latest drives. My current drive uses a single 18GB platter at 10kRPM, and my previous used 9GB also at 10k. IBM seems to have used 3 GB platters for quite some time. Most of the IBM SCSI drives I see for sale (well, all so far) have used 3 GB platters. Don't let Seagate's massive Barracuda 180's fool you. Those use 12 15? GB platters at 7200 RPM.
Maxtor seems to have trouble putting out 80GB platters for the DM+9 series, which runs at 7200 RPM. I got lucky and snagged one with 8MB cache from CompUSA, but that's after looking at serials of Fry's, Best buy, and CompUSA for almost five weeks.

In summary, why are IDE drives so limited? Cost, cost, and, well, cost. IDE was originally designed to shave costs (integrated packaging is cheaper than separate, a.k.a. SCSI), and until the cost of a given SCSI technology falls, it won't show up in IDE form. Either way, companies will always have at least one model of IDE drive that is the cheapest that can be manufactured.
 

TerryMathews

Lifer
Oct 9, 1999
11,464
2
0
My guess is that the next major speed boost in IDE HD technology will come from moving away from binary data storage on the drive. It's the only real way left to rapidly increase the amount of data a drive can read in a given amount of time without massive improvements in spindle speed.

For example, a drive based on quaternary storage could report each stored symbol back as a pair of bits: the platter stores 0, 1, 2, or 3, and the drive reports 00, 01, 10, or 11. It would require a head capable of creating and recognizing 4 discrete magnetic field levels, which is relatively simple compared with increasing spindle speed 400% or more.
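The quaternary idea is just a base-4 encoding; here is the bookkeeping side of it as a sketch (the function names are mine, and nothing here touches real magnetic-head physics):

```python
# Four field levels per cell means two bits per cell, so each byte
# occupies four cells instead of eight. Pure encoding arithmetic.

def to_symbols(data: bytes):
    """Split each byte into four base-4 symbols (two bits each)."""
    out = []
    for byte in data:
        out.extend((byte >> shift) & 0b11 for shift in (6, 4, 2, 0))
    return out

def from_symbols(symbols):
    """Reassemble bytes from base-4 symbols."""
    out = bytearray()
    for i in range(0, len(symbols), 4):
        a, b, c, d = symbols[i:i + 4]
        out.append((a << 6) | (b << 4) | (c << 2) | d)
    return bytes(out)

raw = b"disk"
cells = to_symbols(raw)          # 4 cells per byte instead of 8
assert from_symbols(cells) == raw
assert max(cells) <= 3           # every cell is one of 4 field levels
```

At the same linear cell density and spindle speed, halving the cells per byte would double the data rate, which is the appeal; the hard part is the analog side of reliably distinguishing four field levels.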
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
The water-and-bucket analogy doesn't hold "water" for my idea. The concept is not just to read ahead or to act as a "buffer" per se, but to act as a sort of level-2 cache sitting between the hard drive on one side and the main memory, processors, and their support on the other.

It would not just hold the water to be flushed out; the water is self-replicating in the bucket. Meaning: entire large chunks of the most-accessed parts of the hard drive would be stored in the megabuffer, so that most of the time read requests never actually make it to the hard drive, and it could act as a cache like the one Linux keeps for the hard drive. You could fine-tune how much of the megabuffer stores backed-up memory and how much is devoted to an ultra-high-speed hard drive "mirror".
This way the throughput of the hard drive would be artificially increased: while the megabuffer takes care of the most-requested data, the hard drive deals only with the "unique" requests it wouldn't make sense to keep a copy of. Then, when the hard drive is idle, the megabuffer sends updates to the regions it intercepts. The regions chosen to not receive data directly, serving just as backup, would be selected by a process similar to how paging space is chosen in main memory. Streaming data that will be accessed only once or twice gets ignored by the megabuffer and passed straight through to the hard drive, but the requests that happen hundreds or thousands of times a day are served from the megabuffer, and the corresponding hard drive traffic is held until idle time. The intercepted requests (well, all of them would go through the megabuffer, but I mean the ones that are stopped) would be ranked hierarchically, so that as more important interceptions are discovered, the less important ones are discarded. That way, as the megabuffer "learns" the most effective intercepts, the performance of the megabuffer/hard drive combination would actually GROW over time. The first time you ran a benchmark, the performance would not be much better than the HD alone, but each run would get faster and faster until it leveled off.

The buffers used currently don't act like this, do they? So it would be a waste of money to spend on enormous amounts of extra RAM for the buffer the way they are currently run.

Yes, you could do the same thing with a gigantic amount of main memory, say 2-3 gigs, but that stuff is expensive, and how much can the processor actually touch on an immediate clock-to-clock basis? Wouldn't it make more sense to keep a gig or so of super-high-speed, very expensive RAM to keep the processor busy, while you have 6 or 7 gigs of cheap/slow memory acting as a combination cache/megabuffer?

You could put the megabuffer on a card and maybe run a RAID array behind it. The OS and mainboard don't even have to know it's there for it to work, though I suppose there could be kernel optimizations that would necessitate drivers.

It would be the I/O equivalent of a video card: instead of offloading processing power, it would be offloading the I/O load of the hard drive/motherboard, filling the 1.5 Gb/s promise of SATA until we create large, high-speed, solid-state permanent storage devices. Because the whole platter thing is just not going to cut it past certain speeds.

Since the HD is something like 100,000 times slower than main RAM, what if, 75-80% of the time, you could get by with something only 100 times slower instead?

Instead of a water bucket under a dripping spigot, draining until the spigot fills it up again, it would generate its own water and let the spigot drip by itself, if that increased performance; if not, it would catch the drips and replicate them for future use.

Of course, if each access requested a different section of the hard drive every time, the megabuffer would be useless, except as a cache for main memory.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Maybe megabuffer is not the correct term... Um, how about

High-Traffic-Only-RAM-Based-Filesystem-with-Physical-Storage-HardDrive-Backup-Array

a HTORBFPSHDA? With a logical file arrangement based on the topical sector locations of the supported hard drive arrays, held independent of system RAM, with an optional size-morphing level-4(?) system cache.

Or maybe RFoRB... RAM Files, nOt Really a Buffer?
 

RyanM

Platinum Member
Feb 12, 2001
2,387
0
76
Considering the price of SDRAM, I haven't figured out why no one has made an IDE or, now, SATA-based RAM drive with DIMM module slots. "It comes with 512 MB from the factory, has 6 slots, and supports up to 3 gigs of RAM. Takes full advantage of SATA for constant 150 MB/s read speeds."

Drool much? If they integrated support for one of these at the chipset level and gave it its own dedicated bus, even at 250 or 400 MB/s, it'd be a huge increase over the speed of hard drives. It would also be cheap, since the product would only consist of a 512 meg stick, the logic circuitry, and some DIMM slots.