Why is SATA faster than PATA?

AznMaverick

Platinum Member
Apr 4, 2001
2,776
0
0
I always thought that transmitting data in parallel would be faster than serial, which is why I'd rather print to a parallel device than a serial one... or am I getting things mixed up? Please explain, someone...

Thanks
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
First of all: It isn't. Faster interface rates do nothing for an IDE drive's actual performance, which is WAY lower than the 100, 133 or 150 MB/s that current interfaces allow.

Then, of course parallelizing several data lines lets you transmit more data. But so does making the individual lines transmit faster. Doing the latter wins you real estate - in chip pin count, mainboard trace routing, connector sizes on both ends, and cable width. This is what the move to SATA is all about. Not speed. Same for the move from LPT to USB printers.
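
Peter's trade-off can be put into numbers. A minimal sketch comparing the aggregate throughput of a wide, slow bus against a narrow, fast link (these are the nominal interface rates, not what a drive actually delivers):

```python
def throughput_MBps(data_lines, transfers_per_sec):
    """Raw interface throughput: each transfer moves `data_lines` bits."""
    return data_lines * transfers_per_sec / 8 / 1e6

# 16-bit parallel ATA bus at ~66.6 million transfers/s (Ultra ATA/133)
pata = throughput_MBps(16, 66.6e6)
# 1-bit SATA line at 1.5 Gbit/s; 8b/10b encoding leaves 80% for data
sata = throughput_MBps(1, 1.5e9) * 0.8
print(f"PATA ~{pata:.0f} MB/s, SATA ~{sata:.0f} MB/s")
```

So a single fast line edges out sixteen slower ones, and does it with a fraction of the pins and cable width, which is the real point of the move.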
 

GoHAnSoN

Senior member
Mar 21, 2001
732
0
0
For parallel, you really need to make sure the signals on all the data lines travel across and reach the other side at the same time, so there is a limit to how fast you can clock it.
For serial, you just push data down the one line as fast as the other end can receive it.
 

nyarrgh

Member
Jan 6, 2001
112
0
71
Because SATA is newer?

I think serial is easier to push up in speed, since you won't have to deal with syncing multiple signals. However, parallel does have the advantage of pushing more than one signal at a time. That's why PCI Express is faster than PCI, but PCI Express x16 is faster than PCI Express x1.


Sorry if I'm not making any sense...
 

Devistater

Diamond Member
Sep 9, 2001
3,180
0
0
Also, notice that cable length is important. Because parallel signals need to stay in sync, as mentioned above, the official IDE cable length limit is 18".
Yes, I know a lot of companies sell rounded cables of 24" or so; that's longer than the standard allows and carries some risk :)

With SATA you can have longer cables, since you no longer have to keep a parallel bunch of data lines in sync.
 

rgwalt

Diamond Member
Apr 22, 2000
7,393
0
0
Originally posted by: Peter
First of all: It isn't. Faster interface rates do nothing for an IDE drive's actual performance, which is WAY lower than the 100, 133 or 150 MB/s that current interfaces allow.

Then, of course parallelizing several data lines lets you transmit more data. But so does making the individual lines transmit faster. Doing the latter wins you real estate - in chip pin count, mainboard trace routing, connector sizes on both ends, and cable width. This is what the move to SATA is all about. Not speed. Same for the move from LPT to USB printers.

So, let me see if I'm interpreting this correctly... Originally parallel architecture was the standard as it allowed us to transmit more data simultaneously at lower speeds. Transfer rates were low enough to necessitate a parallel architecture to increase throughput. As we gained the ability to transfer data faster, the necessity to transmit more data in parallel diminished. This allowed us to move to a serial architecture. While it isn't faster, there are many other benefits, and due to the transfer rates being so high, there is not a significant speed reduction.

Please someone correct me if I'm wrong.

R
 

natto fire

Diamond Member
Jan 4, 2000
7,117
10
76
Originally posted by: rgwalt
Originally posted by: Peter
First of all: It isn't. Faster interface rates do nothing for an IDE drive's actual performance, which is WAY lower than the 100, 133 or 150 MB/s that current interfaces allow.

Then, of course parallelizing several data lines lets you transmit more data. But so does making the individual lines transmit faster. Doing the latter wins you real estate - in chip pin count, mainboard trace routing, connector sizes on both ends, and cable width. This is what the move to SATA is all about. Not speed. Same for the move from LPT to USB printers.

So, let me see if I'm interpreting this correctly... Originally parallel architecture was the standard as it allowed us to transmit more data simultaneously at lower speeds. Transfer rates were low enough to necessitate a parallel architecture to increase throughput. As we gained the ability to transfer data faster, the necessity to transmit more data in parallel diminished. This allowed us to move to a serial architecture. While it isn't faster, there are many other benefits, and due to the transfer rates being so high, there is not a significant speed reduction.

Please someone correct me if I'm wrong.

R

While SATA will eventually be faster, the main issue at the moment is the drives themselves. SATA II is supposed to allow even more throughput, but as mentioned in a post above, HDDs usually aren't saturating even the IDE bus.

I like the smaller cable because it makes it a lot easier to keep your case clean and still have a few drives :).
 

zogg

Senior member
Dec 13, 1999
960
0
0
Serial is faster because instead of having 32 or 64 parallel data lines each sending bit by bit, you have two serial lines (one in, one out), but the data is organized into packets, similar to networking. You're sending packets, or datagrams, of data one packet at a time, hence the faster data rate.

The same goes for the new PCI Express slots that are coming out; they work the same way, with packets and datagrams. Think about it: now that we have motherboards with dual gigabit networking and 7.1 onboard audio, plus people running maybe four or more drives in a RAID configuration, not to mention video streaming and you name it, all at the same time it would choke or bottleneck on a 33 MHz PCI bus. That is why the mobo manufacturers are doing away with the PCI bus and going with PCI Express. Right now mobos have both, so you don't have to throw away your PCI stuff that's still good, but eventually PCI will go the way of the ISA bus.

Cheers
 

Spencer278

Diamond Member
Oct 11, 2002
3,637
0
0
A parallel interface can always be made faster than a serial interface. The main reason to replace parallel with serial is cost. Sure, serial is faster when it comes out, but that's only because they break backwards compatibility. If you were to create a completely new parallel standard for hard drives you could easily get speeds much higher than SATA, but there simply is no reason to. You could do the same with SATA as PCI Express does and add more channels to make a SATA x16 or something.
 

imgod2u

Senior member
Sep 16, 2000
993
0
0
A long time ago, parallel was preferred because the limitations on speed were in the copper wires themselves. You simply couldn't push a very fast signal through, so multiple signals were put side by side. However, as you know, technology changes. Wiring/routing has advanced significantly, to the point where the individual wire delay can be minimal (very fast). The main problems now are skew, crosstalk, noise, etc. These weren't big problems when we were running 1 MHz signals, but they're a huge problem in the GHz range over a long distance (across the motherboard). So, as conditions change, methods must change, and the solution was obvious: use fewer wires. Instead of having 32 wires running at 100-133 MHz, we can have 2 wires running at 2.5-6 GHz. Pushing serial link speeds higher isn't difficult, whereas the difficulty of pushing parallel link speeds higher grows exponentially. So, for right now, serial seems to be outpacing parallel.
Another advantage is granularity. With a parallel link, you need to organize your data and send it in chunks from one point to another. Parallel links are expensive (many wires), so you can't have many of them; you need controllers to share those wires, which leads to all sorts of scheduling and resource conflicts, etc. With serial, the links are so cheap that there's no need to share them: each device gets its own link. For areas where large bandwidth is needed, simply put many serial links together. They don't interfere with each other, as they're independent; synchronization is done at the receiving end as opposed to having to be timed in the wires themselves. It does require a more sophisticated receiving end than plain parallel, but the transfer rates you can get are much greater.
So in short, serial is more flexible, cheaper, and ultimately easier to improve upon in the future.
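
That "put many serial links together" idea is exactly what PCI Express does with lanes. A quick sketch of how the bandwidth scales, using PCIe 1.x figures (2.5 Gbit/s per lane with 8b/10b encoding):

```python
def lane_MBps(line_rate_gbps=2.5, efficiency=0.8):
    """One serial lane: 8b/10b encoding carries 8 data bits per 10 line bits."""
    return line_rate_gbps * 1e9 * efficiency / 8 / 1e6

def link_MBps(lanes, line_rate_gbps=2.5):
    """Lanes are independent and self-clocked, so bandwidth simply adds."""
    return lanes * lane_MBps(line_rate_gbps)

for n in (1, 4, 16):
    print(f"x{n:<2} -> {link_MBps(n):5.0f} MB/s")
```

Because each lane carries its own clocking, a x16 link doesn't have the skew problem a 16-line parallel bus would; the data is striped across lanes and reassembled at the receiver.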
 

Leper Messiah

Banned
Dec 13, 2004
7,973
8
0
Transfer speeds mean nothing. It's the platter speed (7200 rpm, 10k rpm, 15k rpm, etc.) and the seek time that make your HDD faster. That's why a Raptor is so much faster than that 3 GB Caviar that's been hanging around for the last 7 years. A normal 7200 rpm HDD has a seek time of around 8-9 ms, and a Raptor a seek time of around 4 ms. ATA/100, 133, etc. are just theoretical maximum burst transfer rates. You'll never see those speeds, just like your gigabit Ethernet won't transfer at 125 MB/s. This is also why RAID0 is basically useless.
 

zhena

Senior member
Feb 26, 2000
587
0
0
L3p3rM355i4h said:
Bunch of bs....

What you probably meant to say is that the higher rpm (i.e. lower seek time) makes your HDD more responsive, and I'll even say much more so at that. Transfer speeds can mean a hell of a lot, though, depending on what you are using your system for. In fact, 15k rpm drives sometimes have lower transfer rates than 10k rpm drives, because the 15k drives have small platters (15 GB/platter or so) while 10k drives nowadays can have huge platters (60 GB/platter), and the bigger the platter, the more data can be read per spin.

A new 5400 rpm drive will be tens of times faster than the first generation of 7200 rpm drives, because a new drive has 60 GB per platter while first-gen 7200 rpm drives had 5 GB/platter or so. But the 7200 rpm drive will still probably have faster seek times.
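
A back-of-the-envelope model of that platter-density argument, under the assumptions that sustained transfer rate (STR) scales with RPM times linear bit density, and that areal density gains split evenly between tracks per inch and bits per inch (both are assumptions, not measured data):

```python
import math

def relative_str(rpm, gb_per_platter):
    """Relative STR: RPM times linear bit density.
    Linear density ~ sqrt(areal density) under the even-split assumption."""
    return rpm * math.sqrt(gb_per_platter)

old_7200 = relative_str(7200, 5)    # first-gen 7200 rpm, ~5 GB/platter
new_5400 = relative_str(5400, 60)   # newer 5400 rpm, ~60 GB/platter
print(f"STR ratio: {new_5400 / old_7200:.1f}x")
```

Under those assumptions the newer drive wins on STR by roughly 2-3x rather than tens of times, though the direction of the argument still holds.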

RAID0 is not useless at all. You've seen one too many benchmarks on Anandtech showing that RAID0 gives no benefits to desktop systems. Ever ask yourself why pretty much every drive in those benchmarks is within a couple of points of the slowest one? It's because those benchmarks are CPU-dependent more than anything else.

Try a benchmark where you copy 50 GB from one RAID0 array to another, vs. from one single drive to another.

 

Leper Messiah

Banned
Dec 13, 2004
7,973
8
0
You've never specified what you're basing this on. I'm talking desktop usage, not massive server ops with big-ass RAID arrays. And what exactly does responsive mean? IMHO much the same thing as faster.

You exaggerate the speed of a modern 5.4k drive vs. a 1st-gen 7.2k drive. If what you say is true, why don't they make a huge-ass platter spinning at 4.2k with 100 GB+ data density? If 15k rpm drives weren't faster, there wouldn't be a market for them.

And your supposed benchmark, is it a single file, or a large multi-file directory? From the benches I've seen, RAID0 gives a 25% or so increase in pure disk accesses, but that only translates into a 3% or so increase in disk access speed.

Actually, I doubt that in real-world tests anyone would see a performance increase by switching from an 8 MB cache 7.2k PATA drive to a SATA drive of the same specs.

Saying SATA is faster than PATA is like saying AGP 8x is faster than AGP 4x, or that a 6800U PCI-E is faster than the AGP version. It's just having future technology to grow into.
 

sao123

Lifer
May 27, 2002
12,653
205
106
A long time ago, parallel was preferred because the limitations on speed were in the copper wires themselves. You simply couldn't push a very fast signal through, so multiple signals were put side by side. However, as you know, technology changes. Wiring/routing has advanced significantly, to the point where the individual wire delay can be minimal (very fast). The main problems now are skew, crosstalk, noise, etc. These weren't big problems when we were running 1 MHz signals, but they're a huge problem in the GHz range over a long distance (across the motherboard). So, as conditions change, methods must change, and the solution was obvious: use fewer wires. Instead of having 32 wires running at 100-133 MHz, we can have 2 wires running at 2.5-6 GHz. Pushing serial link speeds higher isn't difficult, whereas the difficulty of pushing parallel link speeds higher grows exponentially. So, for right now, serial seems to be outpacing parallel.
This is probably the best explanation so far.


Consider the fact that hard drives are following the same path as I/O ports.
First the serial port: limited by a slow transmission rate, 1 line.

Then the parallel port: combine multiple serial connections into 1 port.
Same slow transmission rate, but multiple bits transmitted at a time.

Then back to serial: USB. Successfully figured out a way to transmit single entities much faster.

Next (future): some sort of parallel bus using USB technology as a cornerstone.

Computer technology follows this cycle every generation or so.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,202
126
Originally posted by: L3p3rM355i4h
you've never specified what you're basing this on. I'm talking desktop usage, not massive server ops with big-ass RAID arrays. and what exactly does repsonsive mean? IMHO much the same thing as faster.
Specifically, there are two different main factors here: STR (transfer rate from the platter), and access time, which is a combination of the track-to-track seek time (head-actuator delay) and rotational latency (which is directly a function of the RPM the drive is spinning at).

Originally posted by: L3p3rM355i4h
you exagerate the ammount of speed a 1st 7.2k drive vs. a modern 5.4k drive. If what you say is true, why don't they make a huge ass platter spinning @ 4.2k w/100gb+ data density?
Ever heard of a Quantum Bigfoot drive? (I don't think they ever released a 100GB model though. They stopped making them a while ago, AFAIK. But they are a fairly recent example of making slower-spinning, but larger-platter drives.)

Originally posted by: L3p3rM355i4h
If 15k rpm drives weren't faster, there wouldn't be a market for them.
They are faster, in terms of lower rotational latency and thus lower access time. (As well, they are usually paired with better head-actuator mechanics too, further contributing to lower access times.)

They are not necessarily faster in terms of STR - but most applications are more access-time-limited than STR-limited, especially in server-type applications with I/O requests that most closely resemble random reads.
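
The rotational-latency half of access time falls straight out of the RPM: on average, the sector you want is half a revolution away. A quick sketch:

```python
def rotational_latency_ms(rpm):
    """Average rotational latency: half a revolution, in milliseconds."""
    return 0.5 * 60_000 / rpm

for rpm in (5400, 7200, 10_000, 15_000):
    print(f"{rpm:>6} rpm -> {rotational_latency_ms(rpm):.2f} ms")
```

That's about 4.17 ms at 7200 rpm versus 2.00 ms at 15k, which, combined with the better head actuator, is where the access-time win comes from.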

Originally posted by: L3p3rM355i4h
Actually, I doubt that in real-world tests that anyone would see a performance increase by switching from a 8MB cache 7.2k PATA, to a SATA of the same specs.
That's because, for the most part, the mechanics and RPMs of the drives available in both PATA and SATA models are identical, and in some cases (with first-gen SATA models), they were actually translating between SATA and PATA interfaces using a bridge chip, so SATA drives couldn't be any faster, and were often slower, than PATA drives of the same generation.

That's finally set to change, now that most system chipsets include integrated SATA interfaces and SATA drives are implementing features like NCQ that PATA drives don't have. I'm still going to hold off until SATA-2 because of cabling/signal-noise and power issues, though. Plus, PATA IDE drive prices are still a bit lower, due to misperceptions that they are "old tech". (There is a supply surplus in the market right now, too.)
 
Sep 3, 2004
28
0
0
Actually, I believe the reason that parallel is the old standard in connectivity is money. Once upon a time in the distant past (say 5 to 10 years ago), making silicon go fast was really, really expensive. Let's demonstrate: ATA/33 (yeah, old) with 16-bit data signaling (plus a bunch of other wires for other stuff) pushing 33 MB/second requires a transfer rate of over 16 MHz per line. Yeah, not so very fast, you say, but let's try that with one-bit, totally serial signaling. A serial ATA/33 would now require roughly 264 MHz signaling. When the IDE standard was introduced, processors didn't run that fast, let alone cheap mobo components. You had to go parallel, because silicon capable of fast signaling was expensive. Now SATA 150 is cheap, because making silicon do even 1 GHz signaling is not hard at all. Now it's the cabling that is the most expensive part, so it's better to go with an interface that requires few wires and fast silicon, like SATA.
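
That arithmetic is easy to check. A small sketch of the per-line signaling rate required for a given throughput and bus width (payload rates only; SATA's actual line rate is 1.5 GHz because 8b/10b encoding adds 25% overhead):

```python
def required_MHz(mb_per_sec, data_lines):
    """Per-line signaling rate needed to carry mb_per_sec over data_lines."""
    return mb_per_sec * 8 / data_lines  # MB/s -> Mbit/s, split across lines

print(required_MHz(33, 16))   # 16-bit PATA bus: 16.5 MHz per line
print(required_MHz(33, 1))    # same 33 MB/s over one line: 264 MHz
print(required_MHz(150, 1))   # SATA 150: 1200 MHz of payload signaling
```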
 

imgod2u

Senior member
Sep 16, 2000
993
0
0
Originally posted by: sao123
A long time ago, parallel was preferred because the limitations on speed were in the copper wires themselves. You simply couldn't push a very fast signal through, so multiple signals were put side by side. However, as you know, technology changes. Wiring/routing has advanced significantly, to the point where the individual wire delay can be minimal (very fast). The main problems now are skew, crosstalk, noise, etc. These weren't big problems when we were running 1 MHz signals, but they're a huge problem in the GHz range over a long distance (across the motherboard). So, as conditions change, methods must change, and the solution was obvious: use fewer wires. Instead of having 32 wires running at 100-133 MHz, we can have 2 wires running at 2.5-6 GHz. Pushing serial link speeds higher isn't difficult, whereas the difficulty of pushing parallel link speeds higher grows exponentially. So, for right now, serial seems to be outpacing parallel.
This is probably the best explanation so far.


Consider the fact that hard drives are following the same path as I/O ports.
First the serial port: limited by a slow transmission rate, 1 line.

Then the parallel port: combine multiple serial connections into 1 port.
Same slow transmission rate, but multiple bits transmitted at a time.

Then back to serial: USB. Successfully figured out a way to transmit single entities much faster.

Next (future): some sort of parallel bus using USB technology as a cornerstone.

Computer technology follows this cycle every generation or so.

I actually think the future is a "loosely" parallel solution, i.e. putting multiple serial links together but still maintaining each as serial. What I find interesting is how I/O and wire signaling seem to follow analogously with MPU design methods.
A while back, the push was for wider and wider processors: just keep increasing the number of execution units (486 to Pentium 2). Then it was a massive hike in operational frequency. And now we've gone to multiple threads: narrow, high-speed thread processors doing things in parallel (a la Northwood, Niagara or POWER5). "Loosely" parallel, if you will. So I don't think it's cyclic so much as the middle ground between two extremes. We've gone to both ends; now we're settling down in the middle.
 

The J

Senior member
Aug 30, 2004
755
0
76
There are two reasons I can think of as to why SATA is faster than PATA (the interface, not the drives):

1. Crosstalk: When we moved from ATA/33 to ATA/66 and above, we had to use 80-conductor IDE cables instead of the old 40-conductor ones. Because faster signals were being sent through the wires, you would get crosstalk, which means that the signal on one wire would interfere with the signal on the wires next to it. This corrupts your data. Putting a grounding wire between each of the 40 original conductors gives a buffer between the signal wires, so crosstalk is reduced. I'm guessing there would be too much crosstalk on the wires if we went to an ATA/150 or something like that. SATA uses only one data path (serial = one bit at a time), so there are no neighboring signals for it to interfere with, meaning no crosstalk, meaning faster data transfer.

2. Skewing: Not all wire is created equal. Because of this, not all the data gets to the hard drive at the same time. If you send 16 bits of data to a hard drive in parallel, some of the bits will get there before the rest. This is called "skewing." The hard drive has to wait for the signal on every wire to get there before it can begin to write. With SATA, as soon as a bit gets there, the drive can get writing.
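
The skew limit can be modeled crudely: a parallel bus can't be clocked faster than the point where worst-case inter-line skew eats the whole bit period. A toy sketch (the nanosecond figures are illustrative, not from any cable spec):

```python
def max_parallel_clock_MHz(skew_ns, margin_ns=1.0):
    """Bit period must cover worst-case skew plus some sampling margin."""
    return 1000.0 / (skew_ns + margin_ns)

# 2 ns of skew across a ribbon cable caps the bus near 333 MHz;
# a single self-clocked serial line has no inter-line skew at all.
print(f"{max_parallel_clock_MHz(2.0):.0f} MHz")
```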

-Edited. Thanks to Peter for the correction.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
All great, except for the last sentence. Long gone are the days when the data flowing through the cable went straight to the recording heads ;)
 

The J

Senior member
Aug 30, 2004
755
0
76
Thanks, Peter. I corrected my post a moment ago. You said "Long gone are the days...", which seems to imply that it was once possible to send data right to the heads. Would this be possible to do again with SATA? This could cut out the middle-man that is the interface chip.

Though the chip would still need to be there to receive commands and take advantage of Command Queueing which will be showing up (or already has, I'm not sure), wouldn't it?
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,202
126
Originally posted by: Peter
All great, except for the last sentence. Long gone are the days when the data flowing through the cable went straight to the recording heads ;)

You mean the 20MB Seagate ST-225 in my other rig connected to an ST-506 8-bit ISA MFM controller card, with both digital control and analog signal cables, is obsolete? Say it ain't so! :( :( :(

;)

Ahh, for the days of true low-level HD formatting, manually entering defect lists and all. You were really bleeding-edge in those days if you had a fast enough machine and dared to format your HDs with a 1:1 interleave factor!

I think it was "g C000:0008" in DEBUG, to enter the controller's BIOS, wasn't it? (LOL. MS still has a KB article about it.)

Edit: That gives me some interesting ideas... I wonder what range of analog signals could be stored on an older MFM HD if you bypassed using the analog cables for data storage, and instead used them for recording something like audio, or maybe low-bandwidth video data? Hmm.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
ST225? I've still got one of those, and I can actually still use it (because I also happen to own Adaptec's A4000 SCSI-to-MFM controller board). I also own a functional 5 MByte 5.25" full-height (that's two slots) MFM HDD ...

The segment and offset to jump into from Debug, for the HDD setup UI on a PC-XT, depend on your particular setup, i.e. what segment you jumpered the controller board to put its ROM in. The C000 segment it certainly wasn't, since the VGA BIOS occupies that space.

PC-AT moved the setup items for AT disk controllers, discrete or IDE, into the main system BIOS. If you forgot your drive parameters, you'd still be busted though.

Remember ESDI controllers? Those always had to be debug'd into, even on the AT ...


The J, no, over IDE or SATA you're actually talking to the controller board that once was an ISA card. No direct access to the drive's guts.
 

Calin

Diamond Member
Apr 9, 2001
3,112
0
0
Originally posted by: Darthan
Actually, I believe the reason that parallel is the old standard in connectivity is money. Once upon a time in the distant past (say 5 to 10 years ago), making silicon go fast was really, really expensive. Let's demonstrate: ATA/33 (yeah, old) with 16-bit data signaling (plus a bunch of other wires for other stuff) pushing 33 MB/second requires a transfer rate of over 16 MHz per line. Yeah, not so very fast, you say, but let's try that with one-bit, totally serial signaling. A serial ATA/33 would now require roughly 264 MHz signaling. When the IDE standard was introduced, processors didn't run that fast, let alone cheap mobo components. You had to go parallel, because silicon capable of fast signaling was expensive. Now SATA 150 is cheap, because making silicon do even 1 GHz signaling is not hard at all. Now it's the cabling that is the most expensive part, so it's better to go with an interface that requires few wires and fast silicon, like SATA.

I just wanted to say that "cheap motherboard components" with a very simple function can be made to run much faster than a complex general-purpose processor (not that this ever happened, as far as I can remember).
Anyway, there was a connectivity standard that specified something like 50 pairs of copper wires (one current in and one current out per pair). It was the only way to get 500 MB/s transfer speeds back when Ethernet (10 Mb/s) was just a project.
Another thing I don't see mentioned here is the use of differential signaling: instead of one voltage swing that falls easy victim to crosstalk, radio interference, and so on, you use two wires in a closed loop, with the current flowing out on one and returning on the other. The signal is much more resistant to any kind of interference.

Is there differential signaling in SATA? I think so, but I am not very sure.
Calin
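
SATA does use differential pairs, for exactly the reason Calin describes: noise couples onto both wires of the pair roughly equally, and the receiver looks only at the difference, so the common-mode noise cancels. A toy sketch (the voltage and noise figures are illustrative, not from the SATA spec):

```python
import random

def transmit(bit, noise):
    """Drive a pair at +/-0.25 V; the same noise couples onto both wires."""
    v = 0.25 if bit else -0.25
    return (v + noise, -v + noise)

def receive(pair):
    """The receiver subtracts the wires, cancelling common-mode noise."""
    plus, minus = pair
    return 1 if plus - minus > 0 else 0

random.seed(1)
bits = [random.randint(0, 1) for _ in range(1000)]
received = [receive(transmit(b, random.uniform(-2.0, 2.0))) for b in bits]
print(received == bits)  # noise far larger than the signal, yet it decodes
```

Real SATA also runs 8b/10b encoding on top to keep the line DC-balanced, but differential signaling is what lets such a small voltage swing survive a noisy cable.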
 
Sep 3, 2004
28
0
0
Calin, you're right that simple silicon like motherboard components can be made to run faster more easily. Witness the Cell processor, rumored to run at up to 4.6 GHz, or the "dual" arithmetic units of a P4, which are really just one arithmetic unit running at twice the processor speed. Where you're wrong is the cheap part. Mobo components need to cost no more than a couple of dollars apiece, and that means they need to use high-efficiency, cheap, slightly-older-generation fabbing technology. Microchips can be improved in three general categories: complexity, cost, and speed. Pick any one (well, alright, pick some balance of the three).

The reason ATA/66, 100, and 133 were possible is that silicon has gotten cheaper and faster, and so it became possible to build chips that could handle the faster speeds at mobo-component prices.