Of platters, heads and so on.

WildW

Senior member
Oct 3, 2008
984
20
81
evilpicard.com
I've noted in the past people tend to like drives with single platters for improved access times (I think?), and perhaps relatively good areal density - presumably compared to a same capacity hard disk with more platters. Over the weekend, while waiting for some file transfers between drives to complete, watching speeds wander up and down, I got to thinking about how data must be laid out on hard disks.

Say you have a hard disk with two platters - how is the data laid out between the platters, and how is it decided? Does Windows tell the drive exactly which sectors to put the data in, or does the hard disk manage that? And does Windows know anything about how many platters the disk has?

An idea came to mind on how to make hard drives perform much faster - an idea which may be how they work already, but since I don't have a clue I thought I'd put it out there. Inside a multi-platter drive you have all the heads on one actuator arm, hence they're all in the same position across each platter. The question is, why not have the drive firmware stripe the data across platters, using all the read/write heads simultaneously, in a similar fashion to RAID-0 across multiple drives. You would need the drive electronics to be able to cope with read/write with all the heads at once, but it would lift some of the physical limitations.

Maybe this is done already, but if it were, throughput would increase linearly with the number of platters in a hard drive, and everyone would want as many platters in their drive as possible - which doesn't seem to be the case?
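To put the idea in concrete terms, here's a rough sketch in Python. This is purely hypothetical (the function names are mine, and real firmware may not work anything like this): each logical block lands on a different head round-robin, and if all heads could stream at once, sequential throughput would scale with head count.

```python
# Purely hypothetical sketch of the idea above: map each logical block
# round-robin across N heads, the way RAID-0 stripes across drives.
# (Names here are made up; real drive firmware does not work this way.)

def stripe_map(lba: int, num_heads: int):
    """Return (head, per-surface block index) for a striped layout."""
    return lba % num_heads, lba // num_heads

def striped_throughput(per_head_mb_s: float, num_heads: int) -> float:
    """If all heads could stream at once, sequential throughput would
    scale with the head count (ignoring controller/channel limits)."""
    return per_head_mb_s * num_heads

print(stripe_map(10, 4))            # block 10 -> head 2, index 2
print(striped_throughput(80.0, 4))  # 4 surfaces at 80 MB/s -> 320.0
```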
 

Russwinters

Senior member
Jul 31, 2009
409
0
0
Due to the way HDDs work, it is impossible for more than one head to be active at any given time. Furthermore, a head can only read OR write at any given moment; it cannot do both.

The data is typically recorded in a serpentine fashion, though the exact layout depends on the manufacturer's design and firmware.
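As a rough picture of what "serpentine" means here (an illustrative sketch only, not any vendor's actual firmware): logical tracks fill a band of tracks on one surface, then the drive switches heads and runs back the other way.

```python
# Illustrative sketch only (not any vendor's actual firmware): logical
# tracks fill a band of physical tracks on one surface, then the drive
# switches to the next surface and runs back in the opposite direction.

def serpentine_track(logical_track: int, band: int, surfaces: int):
    """Map a logical track number to (surface, physical track)."""
    stripe, within = divmod(logical_track, band)
    surface = stripe % surfaces
    band_start = (stripe // surfaces) * band
    if surface % 2 == 0:                # even surfaces run outward-in
        return surface, band_start + within
    return surface, band_start + band - 1 - within  # odd surfaces run back

# Two surfaces, bands of two tracks: 0,1 on surface 0; 1,0 on surface 1; ...
for lt in range(8):
    print(lt, serpentine_track(lt, band=2, surfaces=2))
```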

There is a TON going on in a hard drive that most people are not aware of, and don't really need to know about. It is only us recovery techs (and the engineers who design the drives of course) who really need to know about these things.


How many heads and platters a drive has doesn't really influence its speed.

Overall, drives with fewer heads and platters SHOULD be more reliable, since fewer parts means fewer chances for one of them to fail.

Drives with fewer platters and heads but the same capacity as the last generation indicate an increase in areal density, which of course has a positive impact on performance in most cases.

Regards,
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
The crux of the message is that it could be done (what the OP proposes) but only at the expense of undoing many of the optimizations that have already gone into pushing areal densities (and the performance that comes with it) to the point that they are at today.

You might end up with a faster product but certainly not at the $/GB pricepoints seen today, and not at the reliability of today's drives (as discussed in Russ' post).

Keep in mind that in the business world engineers are not tasked with creating performance or solving problems to the best of their ability. They are tasked with creating performance at the least expense (in both development cost and time-to-market).

Tell the engineers at Seagate they have 2x the development budget, or 2x the development timeline, or that their resultant product can cost 2x the original production target, and you would no doubt be pleasantly surprised at how creative the products would become as the performance vector was optimized all the more (at the noted expense of time and money).
 

Russwinters

Senior member
Jul 31, 2009
409
0
0
Well, to use all of the read heads at any given time, the HDD would have to be completely redesigned from the ground up.


Everything is stored via magnetism; it is the flux reversals in the magnetic fields that are interpreted by the read element and eventually converted into 1s and 0s. Keep in mind that it gets much more complicated than this. If I were to explain everything it would take me hours and hours to type, and even then it wouldn't completely explain it.

Basically, the reason that only one head can be active is that the read element of a head is what tells the MCU its current location. You can't have two heads reporting their location at the same time, because everything is read out as an analog signal, not digital, and too much noise would be introduced from each head.

All of the data about the flux reversals has to travel through the preamplifier chip and then to the read channel, which decodes the RLL and PRML encoding used to store data. (Basically, the data that gets written to the hard drive is "scrambled" and is not the same as the binary/hex you see in your software; it gets reinterpreted on the way back out.)
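A toy sketch of that read path (my own simplification, nothing like a real PRML channel): the head sees polarity, transitions become bits, and an RLL(d,k) code keeps the zero-runs bounded so the clock can stay locked.

```python
# Toy sketch of the read path (my simplification, not a real PRML
# channel): the read element sees magnet polarity per bit cell; a
# polarity change in a cell decodes as 1, no change as 0 (NRZI-style),
# and an RLL(d,k) code bounds the zero-runs so the clock stays locked.

def decode_transitions(polarities):
    """Turn sampled polarities into bits: change -> 1, no change -> 0."""
    return [int(a != b) for a, b in zip(polarities, polarities[1:])]

def rll_ok(bits, d=1, k=7):
    """Check the RLL(d,k) constraint: every run of zeros *between two
    ones* must be at least d and at most k long."""
    s = "".join(map(str, bits))
    interior = s.strip("0").split("1")[1:-1]   # zero-runs between 1s
    return all(d <= len(run) <= k for run in interior)

samples = [0, 1, 1, 0, 0, 0, 1]      # polarity seen in each bit cell
bits = decode_transitions(samples)   # -> [1, 0, 1, 0, 0, 1]
print(bits, rll_ok(bits))            # these run lengths satisfy RLL(1,7)
```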

That is a brief explanation of the tip of the iceberg.

This would def need to go in the "highly technical" section. ha-ha.


I do give DR training (which goes in-depth about the anatomy and inner workings of the HDD) for my company, but it is pricey, and only worth it if you would like to enter a career in data recovery, or start up your own DR business.


Regards,
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
The crux of the message is that it could be done (what the OP proposes) but only at the expense of undoing many of the optimizations that have already gone into pushing areal densities (and the performance that comes with it) to the point that they are at today.

You might end up with a faster product but certainly not at the $/GB pricepoints seen today, and not at the reliability of today's drives (as discussed in Russ' post).

Keep in mind that in the business world engineers are not tasked with creating performance or solving problems to the best of their ability. They are tasked with creating performance at the least expense (in both development cost and time-to-market).

Tell the engineers at Seagate they have 2x the development budget, or 2x the development timeline, or that their resultant product can cost 2x the original production target, and you would no doubt be pleasantly surprised at how creative the products would become as the performance vector was optimized all the more (at the noted expense of time and money).

Yep, on all points...
And while the idea of internal RAID-0 might sound ideal and simple, it actually is not. There were tons of cheaper improvements still to be made, and drives like the VelociRaptor and the WD Black series have them.

The problem with "take as long as you want, with as much budget as you want, and no regard for the cost of the final product" is that it will fail commercially. You have to keep moving to stay in business.
 

extra

Golden Member
Dec 18, 1999
1,947
7
81
Well, to use all of the read heads at any given time, the HDD would have to be completely redesigned from the ground up.


<snip>


Regards,

Really interesting and informative post, thanks! <3 I never knew that two heads couldn't read/write at the same time.
 

Zap

Elite Member
Oct 13, 1999
22,377
7
81
I've noted in the past people tend to like drives with single platters for improved access times (I think?), and perhaps relatively good areal density - presumably compared to a same capacity hard disk with more platters.

I'll take a pass on the other stuff as IDK, but I'll tackle this platter stuff.

Basically, higher platter density = higher transfer rates. Thus, if you're choosing between two drives of the same capacity and rotational speed, the rule of thumb (if you can find out) is to choose the drive with the fewest platters, thus gaining the higher transfer rates.

Regarding single-platter drives, the reason for choosing one over a drive with more platters is that they are quieter and run cooler. Of course, that advantage disappears if the capacity you need forces you to use multiple drives.
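Rough numbers for that rule of thumb (illustrative figures; this toy model holds track count fixed, so all the density gain lands on bits-per-track, while real drives split the gain between track pitch and linear density, making the real speedup smaller):

```python
# Back-of-envelope numbers for the rule of thumb above (illustrative
# figures; this toy holds track count fixed, so ALL of the density gain
# lands on bits-per-track, while real drives split the gain between
# track pitch and linear density, making the real speedup smaller).

def seq_throughput_mb_s(capacity_gb, platters, tracks_per_surface, rpm):
    """Sequential rate = bytes per track x revolutions per second."""
    surfaces = platters * 2
    bytes_per_track = capacity_gb * 1e9 / (surfaces * tracks_per_surface)
    return bytes_per_track * (rpm / 60) / 1e6

# Same 500 GB at 7200 RPM, two platters vs one:
print(seq_throughput_mb_s(500, 2, 100_000, 7200))  # -> 150.0 MB/s
print(seq_throughput_mb_s(500, 1, 100_000, 7200))  # -> 300.0 MB/s
```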
 

jimhsu

Senior member
Mar 22, 2009
705
0
76
Well, to use all of the read heads at any given time, the HDD would have to be completely redesigned from the ground up.


Everything is stored via magnetism; it is the flux reversals in the magnetic fields that are interpreted by the read element and eventually converted into 1s and 0s. Keep in mind that it gets much more complicated than this. If I were to explain everything it would take me hours and hours to type, and even then it wouldn't completely explain it.

Basically, the reason that only one head can be active is that the read element of a head is what tells the MCU its current location. You can't have two heads reporting their location at the same time, because everything is read out as an analog signal, not digital, and too much noise would be introduced from each head.

All of the data about the flux reversals has to travel through the preamplifier chip and then to the read channel, which decodes the RLL and PRML encoding used to store data. (Basically, the data that gets written to the hard drive is "scrambled" and is not the same as the binary/hex you see in your software; it gets reinterpreted on the way back out.)

That is a brief explanation of the tip of the iceberg.

This would def need to go in the "highly technical" section. ha-ha.


I do give DR training (which goes in-depth about the anatomy and inner workings of the HDD) for my company, but it is pricey, and only worth it if you would like to enter a career in data recovery, or start up your own DR business.


Regards,

There WERE hard drive designs using multiple actuators (http://www.pcguide.com/ref/hdd/op/actMultiple-c.html), e.g. the Chinook, but it was shown that such an approach was a) expensive and complicated, and b) generated too much heat. Given the complexity and cost, there is no reason to opt for such a design when RAID 0 is so much easier (and probably more reliable). So yes, it is possible, but completely impractical.

Google's experience shows that large arrays of cheap consumer hard drives (http://labs.google.com/papers/disk_failures.pdf) are superior to any engineered proprietary solution from a price/performance standpoint. Flash memory does stand to challenge this assumption, though.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I love the part where they list the effect of the various SMART attributes...

"Spin Retries. Counts the number of retries when the drive is attempting to spin up. We did not register a single count within our entire population."

That's right: there has never been a single case of a spin retry on any drive owned by Google. Wow.
 

Russwinters

Senior member
Jul 31, 2009
409
0
0
There WERE hard drive designs using multiple actuators (http://www.pcguide.com/ref/hdd/op/actMultiple-c.html), e.g. the Chinook, but it was shown that such an approach was a) expensive and complicated, and b) generated too much heat. Given the complexity and cost, there is no reason to opt for such a design when RAID 0 is so much easier (and probably more reliable). So yes, it is possible, but completely impractical.

Google's experience shows that large arrays of cheap consumer hard drives (http://labs.google.com/papers/disk_failures.pdf) are superior to any engineered proprietary solution from a price/performance standpoint. Flash memory does stand to challenge this assumption, though.


I should correct myself: sure, it isn't "impossible". I don't believe anything is 100% impossible, because there is so much in the universe that we still do not understand.

The Chinook drive was created at a time when areal density was not very high.

Fast forward to today, and a multiple-actuator design would be about as close to impossible as you can get; and of course, if it did work, it would probably be VERY expensive.

The heat they are talking about relates to the thermal expansion of the platters in operation (they do swell slightly).

The servo mechanisms in use cannot account for too much change, and this would be very hard to calibrate for more than one actuator.


Regards,
 

SonicIce

Diamond Member
Apr 12, 2004
4,771
0
76
I wish it were easy to tell how many platters are in a drive. They seem to hide the specs. I was just interested in the increased reliability, cooler running, and quietness. What's the largest single-platter drive?
 

Russwinters

Senior member
Jul 31, 2009
409
0
0
Right now, 500GB is the largest single-platter drive.


There are some tricks to finding out how many heads/platters there are on some drives.


Like old Maxtors (before the Seagate acquisition): the second character of the serial number was always a digit, and that digit was the number of heads in the drive.
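Taken at face value, the trick is trivial to apply (the serial number below is made up for illustration; I haven't verified the convention beyond what's stated here):

```python
# The old-Maxtor serial trick as stated above, taken at face value.
# The serial number below is made up for illustration.

def maxtor_head_count(serial: str):
    """Second character of an old Maxtor serial = head count,
    per the convention described; None if it isn't a digit."""
    if len(serial) >= 2 and serial[1].isdigit():
        return int(serial[1])
    return None

print(maxtor_head_count("Y4ABC123"))  # -> 4 heads (2 platters)
```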


I haven't discovered any tricks like this for other manufacturers, but there are tools that let you view the ROM/firmware contents and easily see how many heads/platters are in a drive. Of course, these tools are specialist equipment and extremely expensive.

The tricky part about head/platter configurations is that manufacturers may make major overhauls to the internal design of a hard drive but keep the same model number.

The latest example of this is the WD5000AAKS series, which originally came out as a 2-platter, 4-head drive and is now a 1-platter, 2-head drive with a blue sticker (since they have moved to the whole blue/green/black sticker scheme).
 

WildW

Senior member
Oct 3, 2008
984
20
81
evilpicard.com
Thanks for some interesting posts guys, I feel as though I may have learned something. I knew the inner workings of hard disks were beyond my wildest dreams, but not how far beyond.
 

Russwinters

Senior member
Jul 31, 2009
409
0
0
Learning the internals doesn't take all that long if you are working with HDDs every day.

When I started in data recovery I didn't have a full understanding of the inner workings (there are even some things today that I still do not fully understand).

You just need to spend some time learning, but honestly it isn't worth your time unless you would use the knowledge to benefit yourself; it's one of those subjects that takes a lot of your time, so I don't recommend it unless you seek a career using the knowledge.


Regards,
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
An idea came to mind on how to make hard drives perform much faster - an idea which may be how they work already, but since I don't have a clue I thought I'd put it out there. Inside a multi-platter drive you have all the heads on one actuator arm, hence they're all in the same position across each platter. The question is, why not have the drive firmware stripe the data across platters, using all the read/write heads simultaneously, in a similar fashion to RAID-0 across multiple drives. You would need the drive electronics to be able to cope with read/write with all the heads at once, but it would lift some of the physical limitations.
This idea has been thought of by many people many times over the decades. It cannot be done because the tracks on the drives are not perfectly aligned with each other, so positioning head #1 on its track does not guarantee that head #2 is on its track.

Personally, I think it could be done with some effort. There would need to be a second-level actuator on the head itself to perform fine alignments. Perhaps a very small piezoelectric actuator could do the job.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
The biggest problem with using them simultaneously is that you need a separate actuator to move each head...

Just having a second set of heads on the other side of the platter wreaked all kinds of havoc AND was extremely expensive and difficult to do. Making each head on the same side move independently is an even bigger task: you have to make each armature move without interfering with the others, deal with issues of balance and aerodynamics, and build a significantly more complicated controller.

Getting two actuators on a disk (at opposite sides) was very costly and didn't sell; it ended up a failure.

Getting two actuators on the same side (i.e. moving two heads independently) would be even more costly and problematic.
And it would be an even tougher sell now. Google has gone and shown that bigger arrays of cheaper consumer drives are better, and SSDs have taken over the high-performance market. If anything, things are going in the other direction, with WD now downgrading their drives from 7200 RPM to 5400 RPM (their Green Power drives). Speed just doesn't matter for a spindle drive anymore; cost and capacity very much do.
 

retnuh

Member
Mar 3, 2004
33
6
71
Say you have a hard disk with two platters - how is the data laid out between the platters, and how is it decided? Does Windows tell the drive exactly which sectors to put the data in, or does the hard disk manage that? And does Windows know anything about how many platters the disk has?

It used to be cylinders * heads * sectors * 512 bytes (usually) = drive size, and you'd have to configure this in your BIOS (C/H/S). I didn't code any disk-level stuff then, but I believe the OS, or at least the PATA driver, had to know about this; I'd have to look up the FAT16 specs, but if I had to guess it did its own LBA-type thing rather than store C/H/S values in the FAT table. LBA mode, or Logical Block Addressing, fixed this by just reporting a linear 0 to (size of drive / 512 bytes - 1) range of addresses to read/write to. This happened when larger-capacity drives started becoming more commonplace. The current changeover to 4K sectors is happening for the same reason: multi-TB drives instead of the multi-GB drives of that time. Not sure if you remember the drive-size limits on UltraATA controllers; basically they ran out of bits for the upper range, i.e. used a 28-bit number instead of a 48-bit one.
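The two addressing schemes look like this in code (illustrative geometry values; the classic CHS-to-LBA translation formula, with sectors numbered from 1):

```python
# Sketch of the two addressing schemes above (illustrative geometry;
# the classic translation formula, with sectors numbered from 1).

def chs_to_lba(c, h, s, heads, sectors_per_track):
    """LBA = (C * heads + H) * sectors_per_track + (S - 1)."""
    return (c * heads + h) * sectors_per_track + (s - 1)

def lba_to_chs(lba, heads, sectors_per_track):
    """Inverse mapping: linear block address back to (C, H, S)."""
    c, rem = divmod(lba, heads * sectors_per_track)
    h, s0 = divmod(rem, sectors_per_track)
    return c, h, s0 + 1

print(chs_to_lba(0, 0, 1, heads=16, sectors_per_track=63))  # -> 0
print(lba_to_chs(52212353, heads=16, sectors_per_track=63))

# The ceilings that forced each transition:
print(2**28 * 512 / 1e9)   # 28-bit LBA: ~137 GB limit
print(2**48 * 512 / 1e15)  # 48-bit LBA: ~144 PB
```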

Either way, since LBA became the norm, the OS doesn't/can't know how the drive is internally laid out; it just knows the block addresses where a partition starts and ends, and says "get me the data at position 52212353".
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
The new WD drives use two servos for multiple platters and, IIRC, dual controllers, so you can increase IOPS. Probably not on the low-end crap, though.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
They should throw a SandForce controller on there with data deduplication and compression to boost throughput even more (Raptors and high-margin enterprise drives).
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
The new WD drives use two servos for multiple platters and, IIRC, dual controllers, so you can increase IOPS. Probably not on the low-end crap, though.

The new WD drives use one big servo at the base of the armature to move all the heads, plus a little servo at the tip of the armature for fine control.
This is completely different.