Why can't they make a hard drive use RAID internally?

Becks2k

Senior member
Oct 2, 2000
391
0
0
So why can't you take two 40GB platters and have the drive read/write 2 bits at a time... 1 on the top platter, 1 on the bottom... or 4 bits at a time, right? Top and bottom of each platter.

With just 2 platters reading/writing all their surfaces at once, you could get 4x the read/write speed. It doesn't seem like it should be that hard.

Seems like it would be a lot more efficient to do RAID 0 like that than on an external card with 2 hard drives... the RAID card would only have to do redundancy then.

They have multi-laser CD readers... BAH, I WANT IT. I don't care if it's too hard to do. It can't be impossible.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
I think he's not asking for something that would cost more. Right now, I imagine one side of one platter is used at a time. Why not just write on every platter? This presents no new physical problems. Basically, instead of writing 4 bytes on one side of one platter, write one byte on each side of 2 platters at the SAME location. This allows you to keep the heads as one unit. It should add minimal controller requirements while giving a 4x throughput speedup for a 2-platter drive. Or do they do this already?
 

Becks2k

Senior member
Oct 2, 2000
391
0
0
They don't, because a 4-platter drive isn't faster than a 2-platter one.

They don't do it on each side of a platter because both sides of the platters aren't always used.

E.g. the 'Cuda IV comes in 20, 40, 60 and 80GB... the 80 has 2 platters with 20GB on each side,

and the 60 has 2 platters, but 1 side of 1 platter doesn't have a head.


Another question... if access times are so important... why do they have such wide platters?
If you took a normal 40GB platter and didn't let it use the inside 20GB... that'd decrease the range the heads have to cover by more than 1/2.
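A rough feel for that short-stroking idea - a toy calculation, assuming the recorded band runs from about 20 mm to 46 mm radius and that capacity scales with recorded area (both assumptions are illustrative, not real drive geometry):

```python
import math

R_INNER = 0.020   # assumed innermost data radius, in metres
R_OUTER = 0.046   # assumed outermost data radius, in metres

# Radius that splits the recorded area (and so, roughly, the capacity) in half
r_mid = math.sqrt((R_INNER**2 + R_OUTER**2) / 2)

full_stroke = R_OUTER - R_INNER       # head travel if the whole platter is used
outer_half_stroke = R_OUTER - r_mid   # head travel if the inner half is unused

print(round(r_mid * 1000, 1))                         # ~35.5 mm
print(round(1 - outer_half_stroke / full_stroke, 2))  # ~0.6 -> ~60% less travel
```

So under these assumptions, dropping the inner half of the capacity really would cut the head travel by more than half.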

Where does access time come from? Moving the head, and then once it gets there, waiting for the platter to make at most 1 revolution?

On a 7200rpm drive the 2nd part is at most about 8.3ms (one full revolution), or about 4.2ms on average. If you didn't use the inside 20GB, the seek portion of your full-range access times would go down by more than 50%... I dunno how the average access time would change.
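For reference, the rotational part works out like this (plain arithmetic from the 7200 rpm figure):

```python
RPM = 7200
rev_ms = 60_000 / RPM   # milliseconds per revolution
print(rev_ms)           # 8.33 ms worst-case rotational latency
print(rev_ms / 2)       # ~4.17 ms on average (half a revolution)
```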

/me is just wondering why they don't ever focus on speed as much as size... the only speed improvements on IDE drives seem to come from putting more data in the same space.
I dunno, but a 160GB drive is like 16 times as much space as the first 9.1GB drive... but its access times are basically the same, and its transfer rate is only about 3x as much.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
That's because the read/write channel signal processing is one of the most expensive bits on a harddrive.

There have been Seagate high-performance SCSI drives (the original Barracuda series) that actually
used two independent read/write units - including two sets of heads! - for twice the performance.
That idea was neat, yet vanished pretty soon after because the drives were also twice as expensive
as others ...

regards, Peter
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81


<< That's because the read/write channel signal processing is one of the most expensive bits on a harddrive.

There have been Seagate high-performance SCSI drives (the original Barracuda series) that actually
used two independent read/write units - including two sets of heads! - for twice the performance.
That idea was neat, yet vanished pretty soon after because the drives were also twice as expensive
as others ...

regards, Peter
>>



Sure, seeking would be really hard with independent heads, but if you just use every platter at once at the same location, you don't have to worry about additional head positioning. And I can't imagine the PCB for a hard drive costs more than $10 or at MOST $20 - so even if you have to double its cost (don't see why!) to write with every head at once, it wouldn't cost that much more.

This would also roughly quadruple the data available at each head position (assuming 2 platters), resulting in fewer required seeks.
 

Agent004

Senior member
Mar 22, 2001
492
0
0


<< And I can't imagine the PCB for a hard drive costs more than $10 or at MOST $20 - so even if you have to double its cost (don't see why!) to write with every head at once, it wouldn't cost that much more. >>



Well, consider that hard drives are getting bigger and that often the best way to sell is to have the lowest price. Just look at WD's last financial year: they made $1 million after everything, which is not a lot considering they ship millions and millions of hard drives.

So cost is the only concern for manufacturers right now.
 

Locutus4657

Senior member
Oct 9, 2001
209
0
0
RAID significantly reduces access time as well as providing a larger pipe. The only reasons it isn't done inside a drive have to do with cost/complexity/reliability.



<< Cost mainly.

For most people, access time is more important than transfer rate. >>

 

syadnom

Member
May 20, 2001
152
3
81
OK, this is what I think about the "RAID" inside of the disk enclosure.

It should be very doable: just a processor to split bytes into bit pairs in a two-platter disk, and a small controller to control which head writes to which platter surface. So 1 byte = 8 bits, and 8 bits / 4 sides = 2 bits. Each head writes 2 bits of the byte at the same time.

This would make a "striped platter array". The processor would easily be able to handle two-way conversion of bytes -> bit pairs and control routing to the proper head, which should improve performance significantly for both reads and writes. Seek times would not change in reality but would seem much faster. Burst AND sustained transfer rates would be very good.

I say the seek times wouldn't change because the heads don't move any faster across the surface of the disk, nor does reading from multiple platters at once improve the moving speed of the heads; it would just seem to seek faster because the quicker read/write times would allow the disk to begin the next seek much sooner.
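Here's a minimal sketch of the byte -> bit-pair split being described, assuming 4 surfaces and 2 bits per head; the function names are made up for illustration:

```python
SURFACES = 4  # assumed: 2 platters with a head on each side

def split_byte(b: int):
    """Split one byte into four 2-bit pairs, one pair per head."""
    return [(b >> (2 * s)) & 0b11 for s in range(SURFACES)]

def merge_pairs(pairs):
    """Reassemble the original byte from the 2-bit pairs each head reads back."""
    b = 0
    for s, pair in enumerate(pairs):
        b |= (pair & 0b11) << (2 * s)
    return b

# Round-trip check over every possible byte value
assert all(merge_pairs(split_byte(b)) == b for b in range(256))
```

The routing logic really is that simple; the hard part, as the later replies point out, is getting all four heads to land on the same bit cell at the same time.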

 

Shalmanese

Platinum Member
Sep 29, 2000
2,157
0
0
To get RAID internally in a HD, you basically need a RAID controller, twice the number of arms and twice the surface area to write the info. This means that you will increase cost and decrease reliability. Compared to a software RAID with 2 HDs, the cost would not be all that much less, since all you are saving is the cost of an enclosure. The most expensive part of a HD (I think) is the arm itself, because it needs to be incredibly finely tuned to keep nanometers off the platter.

With 2 HDs, you have 3 separate parts that can go wrong and are replaceable (2 HDs and a controller). With the RAID in a box, you only have 1, and if you lose it, you lose it all.

In short, there is not a large enough market to justify the small cost saved, given the drop in reliability, to make it worth marketing a new drive.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81


<< To get RAID internally in a HD, you basically need a RAID controller, twice the number of arms and twice the surface area to write the info. This means that you will increase cost and decrease reliability. Compared to a software RAID with 2 HDs, the cost would not be all that much less, since all you are saving is the cost of an enclosure. The most expensive part of a HD (I think) is the arm itself, because it needs to be incredibly finely tuned to keep nanometers off the platter.

With 2 HDs, you have 3 separate parts that can go wrong and are replaceable (2 HDs and a controller). With the RAID in a box, you only have 1, and if you lose it, you lose it all.

In short, there is not a large enough market to justify the small cost saved, given the drop in reliability, to make it worth marketing a new drive. >>



I don't think you understand what we are describing. Think about it like this: you have a platter and an arm, and the arm has one head on each side of the platter. Normally, only one head is used at a time, so the other sits idle, its potential speed left unused. Now, why not change this a little? Have BOTH heads on that one arm read and write at the same time. When you want to write two bytes, instead of having one head write both, have the top head write the first byte and the bottom head write the second. If you always alternate bytes like this, the controller should need minimal changes - same for the heads. This would double throughput, not change seek times, and not decrease effective drive space. It would also not really affect drive reliability much; at worst, a bad sector would affect 2x as much space in this setup.
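As a conceptual model of that alternate-byte scheme (a sketch only: 2 heads, byte-granularity interleave, invented names):

```python
def stripe(data: bytes):
    """Even-indexed bytes go to the top head, odd-indexed bytes to the bottom."""
    return data[0::2], data[1::2]

def destripe(top: bytes, bottom: bytes) -> bytes:
    """On a read, interleave the two streams back into the original order."""
    out = bytearray()
    for i in range(len(top) + len(bottom)):
        out.append((top if i % 2 == 0 else bottom)[i // 2])
    return bytes(out)

data = b"both heads transferring at once"
top, bottom = stripe(data)
assert destripe(top, bottom) == data
```

Each head still transfers at its normal rate; the claimed win is simply that both streams move at once.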
 

Becks2k

Senior member
Oct 2, 2000
391
0
0
"So cost is the only concern for manufacturer right now. "

The Special Edition WD drive with 8MB of cache is like $100 more than one with 2MB... and the performance is a little bit better. You don't think people would pay $100 more for a drive that would easily fill an ATA/133 controller? Or more likely they could put it in SCSI drives first... if you're spending $600, why not $800? You'd sell more hard drives because everyone would want it, you could charge more, and it seems like it would barely cost more. Profit + profit = bigger profit.


BTW, about reliability:
RAID 0 card + 2 drives
vs
2 drives each with internal RAID 0

In the first case, if one of the drives dies, you lose all the info. In the second, if one drive dies you only lose half the info. Plus the internal RAID would give better performance (I'm only guessing, because I don't see how you wouldn't get 2x the read/write speed).



There's gotta be someone who works with hard drives who can say why this isn't a good idea. It seems too easy to do.
 

mjquilly

Golden Member
Jun 12, 2000
1,692
0
76
Well - making an HDD do internal RAID functionality would be a really bad idea, wouldn't it? Other than for RAID 0, I mean. If you had a drive doing RAID 1, 3, 5... internally, and a part of the drive failed (remember, it's like having multiple drives in one, so a greater chance of failure) - the whole thing is shot; you can't just replace one disk.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81


<< Well - making an HDD do internal RAID functionality would be a really bad idea, wouldn't it? Other than for RAID 0, I mean. If you had a drive doing RAID 1, 3, 5... internally, and a part of the drive failed (remember, it's like having multiple drives in one, so a greater chance of failure) - the whole thing is shot; you can't just replace one disk. >>


I'm only thinking RAID 0 here. You could do striping or mirroring fine... I think doing anything else would be kinda stupid, since I haven't ever heard of just one head failing. But mirroring is also not worth the time - it would halve the space while raising costs. Striping would double the speed for a minimal cost increase.
 

Gunbuster

Diamond Member
Oct 9, 1999
6,852
23
81
I think the main problem is thermal.
If you write with both the top and bottom heads at the same time, they need to be in exactly the same place in relation to the platter.

You have no way of doing thermal calibration between the 2 heads when they are on the same arm.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81


<< I think the main problem is thermal.
If you write with both the top and bottom heads at the same time, they need to be in exactly the same place in relation to the platter.

You have no way of doing thermal calibration between the 2 heads when they are on the same arm.
>>


Why would it be different? The two heads are always doing EXACTLY the same things.
 

Sahakiel

Golden Member
Oct 19, 2001
1,746
0
86
Hm... Lemme think. If implementing RAID 0 in a single HDD were so simple, don't you think all drives these days would already do it?

Also, notice that different drive capacities within each drive family at each HDD company use different numbers of platters/sides. 20GB = 1 side of 1 platter, 40GB = 2 sides of 1 platter, and so forth. If you implement RAID 0, you lose that differentiation, which means fewer choices for consumers, which means you can't really convince a guy to spend an extra 80 bucks on double the capacity when you can barely get him to spend 40 bucks on 50% more capacity.

Then there's the problem of cost. Implementing dual-head capability requires roughly double the hardware to control it, give or take. After that, once you've gotten two heads to double the throughput, you need to double the bandwidth from the heads to the interface (IDE, SCSI, Fibre Channel), which requires even more expensive hardware and design. Think about RAM: twice the performance (7 ns vs 3.5 ns) results in much higher costs. At that point, you'd probably be more interested in standard RAID configurations. Why pay twice the amount for twice the speed when you can pay twice the amount for twice the speed AND twice the capacity?

My $0.01
 

Becks2k

Senior member
Oct 2, 2000
391
0
0
"m... Lemme think. If implementing raid 0 in a single hdd is so simple, don't you think all drives these days would already do it?"

If you look at the name of the topic - "Why can't they..." - I didn't say it was easy, I said it seems really easy and I wanted to know why; that's why I asked.


"Then, there's the problem with cost. To implement dual head capability requires double the hardware to control it. Well, give or take. After that, once you've gotten two heads to double throughput, you need to double the bandwidth from the heads to the interface (ide, scsi, fibre) which requires even more expensive hardware and design"

There's already 2 heads there. And the bandwidth problem doesn't make much sense... we're talking about ~80MB/sec, that's nothing.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81


<< Also, notice that different drive capacities within each drive family at each HDD company use different numbers of platters/sides. 20GB = 1 side of 1 platter, 40GB = 2 sides of 1 platter, and so forth. If you implement RAID 0, you lose that differentiation, which means fewer choices for consumers, which means you can't really convince a guy to spend an extra 80 bucks on double the capacity when you can barely get him to spend 40 bucks on 50% more capacity. >>



So stripe across however many platters there are. That differentiates drives further, since fewer platters = lower throughput. Or always stripe in pairs only, if you want similar performance across the various drives.
 

Sahakiel

Golden Member
Oct 19, 2001
1,746
0
86
"Then, there's the problem with cost. To implement dual head capability requires double the hardware to control it. Well, give or take. After that, once you've gotten two heads to double throughput, you need to double the bandwidth from the heads to the interface (ide, scsi, fibre) which requires even more expensive hardware and design"

Theres already 2 heads there. And the more bandwidth problem doesn't make much sence.... we're talking about ~80MB/sec thats nothing.[/i] >>



Right now the two heads work on one data stream, one head at a time. That's quite a different problem from a single data stream split across two heads. It also doubles the problem of seeking data (gotta check two tables to find the right data) and writing data (split the data, write both at once, check whether it was written correctly, rinse, wash, repeat).
Imagine trying to split a data stream and then write it to the platter, or taking two data streams and putting them back together correctly. Try doing that with a simple program and you'll see the performance gains are really small or nonexistent. (In case you're wondering, SMP works on two different data streams.) Today's technology focuses on working with very large (compared to a bit) chunks of data. Example: it takes far less time to load a text file into memory using 512KB chunks than to read it in byte by byte. I should know - that's how a classmate was able to beat out everyone else (except the prof) when we were given the programming assignment to read in a text file of the Bible and count the instances of a certain word as fast as possible.
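A rough illustration of that chunked-read point - just a sketch of the general idea, not the actual assignment; it counts plain substring occurrences and assumes the search word doesn't overlap itself:

```python
def count_chunked(path, word, chunk_size=512 * 1024):
    """Count occurrences of `word`, reading the file in big 512KB chunks."""
    count, carry = 0, ""
    with open(path, "r") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            text = carry + chunk
            count += text.count(word)
            # Keep a short tail so a match split across two chunks is still
            # caught on the next pass (fine for words that don't overlap themselves).
            carry = text[-(len(word) - 1):] if len(word) > 1 else ""
    return count


def count_bytewise(path, word):
    """Same count, but one read() call per character - vastly more calls."""
    count, window = 0, ""
    with open(path, "r") as f:
        while True:
            ch = f.read(1)
            if not ch:
                break
            window = (window + ch)[-len(word):]
            count += window == word
    return count
```

The logic is the same either way; with 512KB chunks, the first version simply makes roughly half a million times fewer read() calls on a big file, which is where the speedup comes from.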
Being able to manipulate a data stream (which enters the heads in serial fashion, I would think) at the bit level, at speeds similar to today's RAID arrays, is a very difficult task. A PCI RAID controller splits data up into large chunks (64KB, 128KB, whatnot) and sends them to each drive in parallel. What you're asking may become more feasible with Serial ATA, but it would still require extremely fast or massively parallel hardware. Just look at the PS2: 230 or so MHz of processor speed but something like 500MB/s or more of memory bandwidth. And this is using yesterday's technology.
To get, say, an 80MB/s transfer rate with a single bit-serial controller, you would need one that could run at least 640MHz (80 x 8), with memory bandwidth equal to the processor speed, and everything fitting inside 1/4 or so of the drive itself, the rest of the space going to the platters. So imagine how hot a 640+MHz processor runs. Add to that 640+MHz of memory bandwidth (I don't think it's RAM, which rules out DDR and really jacks up the price at this speed), and consider that the 640 is only reading in data, not handling the head, location, and error-checking, and you can see why a bit-level implementation like you're asking for may be so difficult. You could try doubling the processors with half the speed and half the memory bandwidth, but then you get into cost considerations, as you basically have to take what you see on your current HDD board and double it. Silicon real estate is hella expensive and difficult to shrink to fit.
It would probably be a lot easier using larger data chunks, at which point it would screw over the two-head idea pretty quickly. You'd be limited to reading, say, 256KB off the top and 256KB off the bottom in 512KB data chunks. Your 128KB autoexec file is now taking up four times its required space. Your 64KB FAT32 clusters now require you to read several clusters on any given read and to write all of those chunks any time you write anything to the drive. So you would need to read in the existing clusters, figure out which ones you need to write and which ones you don't, compile the data, then spit it onto a platter.
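To put a number on that read-modify-write penalty, a toy calculation using the chunk and cluster sizes assumed above (illustrative figures, not real drive parameters):

```python
STRIPE_UNIT = 512 * 1024   # assumed minimum transfer unit across both heads
CLUSTER = 64 * 1024        # assumed FAT32 cluster size

# Updating one 64KB cluster means reading the whole 512KB unit,
# patching 64KB of it, then writing the whole 512KB back out.
bytes_moved = STRIPE_UNIT + STRIPE_UNIT   # read + write
print(bytes_moved // CLUSTER)             # 16 -> 16x the data that actually changed
```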
Quite frankly, I wouldn't devote too much funding to developing this type of drive except for specialized cases (aka, non-mainstream consumer who really doesn't give a damn about bandwidth in the first place) and even then I'm not even sure the costs would be justified.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
A few years back Conner developed a drive that could read/write from multiple heads at once. The heads did not move independently; it just had the ability to read and write with 2 heads at once. The company was subsequently bought out by Seagate, and I believe they were the ones that actually released the drive. Problem was, it was very complex and the performance gains weren't that great, so the technology was cancelled.
 

Turkey

Senior member
Jan 10, 2000
839
0
0
WRT the reliability issue, if there's a bad sector on a platter now, the drive just marks it bad and doesn't use it. But if it were raid 0 internally, you'd have to mark all corresponding sectors bad, so you'd lose 2x/4x/8x the storage from a single bad sector. I can't imagine that every platter a manufacturer produces is perfect, so drive capacities would suffer somewhat.

The proposal is really very simple... the entire interface to the drive stays the same. Even using a serial interface, you would just need a high-speed serdes (gig Ethernet, anyone?). The internal processor would be much slower since the data coming from the serdes is 8 bits wide. An 80 MB/s processor with an 8-bit interface to the rest of the world => minimum 80 MHz custom processor. Coming off the platters, instead of a single serdes -> data processor (or data processor -> single serdes), there would be multiple serdes -> merge (RAID) processor -> data processor, or maybe the RAID and data processors are built into the same chip. The rest of the drive and processor would be marginally different (throw away a mux, add a few transistors here and there, etc). For RAID 0 at the platter level, the RAID processor just isn't that complicated - a write is one 8-bit input and N 8-bit outputs, just split them up properly, and a read is N 8-bit inputs and one 8-bit output, just put them in the right order. This is especially true when the validation/verification steps that previously had to be done separately by both the RAID processor and the drive processor can be combined. Plus, platter-level RAID doesn't have to worry about different latencies between multiple drives. So I don't think there are any cost problems that couldn't be overcome by the increased performance. And the tech wouldn't initially be put in the low-cost drives; it'd be put in the premium drives, just like 7200 rpm and larger caches were.

My guess is that there's some EM effect that prevents simultaneous writing on opposite sides of a platter. I don't see any heat, power, or cost issue that couldn't be easily addressed.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
WRT the bigger effect of a bad sector... you could shrink the sector size to 1/4 of its current size. Then you wouldn't have to wait for as much data before you start writing (only one sector's worth, vs. 4), AND the amount of drive space lost would not be any bigger.

Also... with a modern, 30GB+ drive, who really cares if you lose 64KB vs. 256KB?
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81


<<
<< I think the main problem is thermal. If you write with both the top and bottom heads at the same time, they need to be in exactly the same place in relation to the platter. >>

Why would it be different? The two heads are always doing EXACTLY the same things.
>>



But the point is that the two heads on the same arm won't be experiencing the same conditions. If the drive is warming up, or even at operating temperature, one of the heads may be warmer than the other - maybe it's nearer the motor, or the drive's mountings are warm because it's near the CPU - and this is where the problem lies.

The temperature difference between the two heads and their supporting arms means that they will have experienced different amounts of thermal expansion. With modern drives the data is so tightly packed that tiny amounts of differential thermal expansion will make such a linked system unworkable.
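For a sense of scale, a rough calculation; the 50 mm arm length, 2 K temperature difference and ~1 micron track pitch are all illustrative assumptions, not figures from any real drive:

```python
ALPHA_AL = 23e-6     # thermal expansion coefficient of aluminium, per kelvin
ARM_LENGTH = 0.05    # assumed arm/suspension length: 50 mm
DELTA_T = 2.0        # assumed temperature difference between the two heads, K
TRACK_PITCH = 1e-6   # assumed track-to-track spacing: ~1 micron

growth = ALPHA_AL * ARM_LENGTH * DELTA_T   # differential expansion, in metres
print(growth * 1e6)          # ~2.3 microns
print(growth / TRACK_PITCH)  # ~2.3 tracks of head-to-head misregistration
```

A couple of degrees of difference is already a couple of tracks of error, which is why a lock-step two-head scheme can't rely on both heads sitting over the same track.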

Then you have the problem of manufacture: the platters are low-level formatted (the cylinders and sectors are recorded) on a special servo-writer. Despite this, there is some variability between platters. This doesn't matter today, because drives seek by counting the cylinders as the heads move over them. If you have a multi-head recording system, then the platters have to be perfectly aligned, both at the time of recording and at the time of assembly into a stack. The stack then has to be perfectly perpendicular - current drives could potentially get away with 1 minute of arc of slant, whereas such a multi-head drive would need tolerances of better than 1 second of arc.
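And on the arc-second figure, a quick geometric check; the 40 mm head radius and ~1 micron track pitch are assumptions for illustration:

```python
import math

RADIUS = 0.04        # assumed head position: 40 mm from the spindle axis
TRACK_PITCH = 1e-6   # assumed track spacing: ~1 micron

arcmin = math.radians(1 / 60)     # 1 minute of arc, in radians
arcsec = math.radians(1 / 3600)   # 1 second of arc, in radians

# Lateral offset between surfaces for a given tilt is roughly r * theta
print(RADIUS * arcmin / TRACK_PITCH)   # ~11.6 tracks of offset per arc-minute
print(RADIUS * arcsec / TRACK_PITCH)   # ~0.19 tracks per arc-second
```

So an arc-minute of slant puts the two surfaces roughly a dozen tracks apart, while arc-second tolerances keep them within a fraction of a track - a tolerance a one-head-at-a-time drive never has to hold, because the variability between platters simply doesn't matter to it.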

Essentially, you cannot readily make a drive that reads from multiple platters using a single arm.

You could duplicate the heads, arms, seeking and signal processing hardware - but these are the most expensive components of the drive. You might as well have used a conventional RAID 0 array.