Avoiding the PCI bottleneck...

EddNog

Senior member
Oct 25, 1999
227
0
0
Question! Does the southbridge ATA controller bypass the PCI bus, thereby avoiding the PCI bottleneck of <133MB/s? If so, the dual-channel southbridge ATA controller on motherboards should be able to whip out the full 266MB/s of both channels combined at peak throughput! Anybody who knows for sure, please help me out, as I'm quite curious. If it's true, I may buy a cheap PCI ATA controller and plug my optical drives into it, then use the southbridge controller for a dual-channel WinXP softRAID of four hard drives. I'll use the mobo's included SATA RAID controller for the boot drive.

-Ed
 

Lord Evermore

Diamond Member
Oct 10, 1999
9,558
0
76
Actually it does bypass the PCI bus in most modern chipsets. Some even allow you to specify whether to run it over the PCI bus or directly to the southbridge with a BIOS option.

However, you aren't likely to get anything like 266MBps of transfer from two drives. The best burst transfer rate of a single drive is maybe 80 to 90MBps, and that lasts only a fraction of a second before the onboard cache empties; then you fall back to the drives' sustained transfer rate, at best about 45MBps each. 133MBps is also only a theoretical peak transfer rate for both the PCI bus and an ATA133 controller. It comes out to something like 90 to 100MBps in real use for the PCI bus.
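The arithmetic behind that 133MBps peak is easy to check. A quick sketch in Python (the 70% efficiency factor is just an assumption chosen to match the 90-100MBps real-world figure above, not a measured number):

```python
# Theoretical peak of a standard 32-bit / 33 MHz PCI bus.
pci_clock_hz = 33_333_333        # 33.33 MHz PCI clock
pci_width_bytes = 4              # 32-bit wide bus = 4 bytes per transfer
pci_peak_mbps = pci_clock_hz * pci_width_bytes / 1e6  # ~133 MB/s

# Real-world throughput runs well below peak; ~70% efficiency is an
# assumed figure that lines up with the 90-100MBps quoted above.
real_world_mbps = pci_peak_mbps * 0.70

print(f"peak: {pci_peak_mbps:.0f} MB/s, realistic: {real_world_mbps:.0f} MB/s")
```

ATA/133's 133MBps ceiling is the same kind of theoretical number, so stacking two channels behind one shared PCI bus would be bottlenecked even before drive mechanics enter the picture.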

Four drives will not offer significantly better performance than two drives in IDE RAID either. Only one drive on each cable can be actively transferring data (in or out) at a time; the OS can send commands to the other drive, but still has to wait for that drive to have control of the bus before sending or receiving any data. So if you have four drives and are striping the data, the OS will first send the data for the two masters, then the two slaves, then the masters again, and so on. Throughput in that case is the same as if there were only two drives total, since only two drives are sending or receiving at any one time. A mirror array will perform even worse with software RAID, since the OS has to send the data twice, once for each mirror set, rather than sending it once to a hardware RAID controller which then duplicates it to the drives.
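That "one active drive per cable" constraint can be modeled in a few lines. A toy sketch in Python (the 45MBps per-drive sustained rate is the figure from the post above; the model ignores command overhead and seeks):

```python
DRIVE_RATE_MBPS = 45.0  # sustained transfer rate per drive, from above

def striped_throughput(channels: int, drives_per_channel: int) -> float:
    """Aggregate sustained throughput of an IDE stripe set.

    Only one device per cable can own the bus at any instant, so the
    number of simultaneously transferring drives is capped at one per
    channel, regardless of how many drives hang off each cable.
    """
    active_per_channel = min(drives_per_channel, 1)
    return channels * active_per_channel * DRIVE_RATE_MBPS

two_drives  = striped_throughput(channels=2, drives_per_channel=1)
four_drives = striped_throughput(channels=2, drives_per_channel=2)
print(two_drives, four_drives)  # same number: the slaves just wait their turn
```

Adding the two slave drives raises capacity, not sustained throughput, which is why a four-drive stripe across two IDE channels tops out where a two-drive stripe does.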
 

mechBgon

Super Moderator / Elite Member
Oct 31, 1999
30,699
1
0
If your storage capacity needs aren't too high, consider a Cheetah 15k.3. With sustained transfer rates of 60-75MB/sec, it can sprint... but with sub-4ms seeks and 0.1ms track-to-track, it can also corner. Unless you want to transfer enormous contiguous files back and forth, I think the quick seeks are going to be of greater benefit than a slight edge in straight transfer rate. My storage needs are met by the smallest of the Cheetahs, an 18GB unit, but I realize not everyone's in that boat.

On an nForce 220D board, incidentally, my Ultra160 card gets peak throughput to the motherboard of over 120MB/sec (Adaptec SCSIBench, Cheetah X15-36LP, same-sector read at 128KB block size). If you want a board with high PCI efficiency, nForce looks to be it, with that 400MB/sec full-duplex HyperTransport link leaving plenty of room for other traffic alongside the PCI bus. Interestingly enough, Intel's i850 and some flavors of i845 have a little glitch that effectively caps PCI bandwidth at 90MB/sec; see item 5 here if that's of interest.
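Put those measurements next to the 133MB/sec theoretical peak and the efficiency gap is stark. A quick comparison in Python (the ~133MB/sec peak is the standard 32-bit/33MHz PCI figure; the measured numbers are the ones quoted in this post):

```python
PCI_PEAK_MBPS = 133.3      # theoretical 32-bit / 33 MHz PCI peak

nforce_measured = 120.0    # MB/s, Ultra160 card on the nForce 220D above
i850_capped     = 90.0     # MB/s, the i850/i845 glitch ceiling above

nforce_efficiency = nforce_measured / PCI_PEAK_MBPS  # roughly 90% of peak
i850_efficiency   = i850_capped / PCI_PEAK_MBPS      # roughly two-thirds

print(f"nForce: {nforce_efficiency:.0%}, capped i850/i845: {i850_efficiency:.0%}")
```

So the affected Intel chipsets leave roughly a third of the bus's theoretical bandwidth on the table, while nForce gets within about 10% of it.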
 

Viper96720

Diamond Member
Jul 15, 2002
4,390
0
0
Originally posted by: mechBgon
If your storage capacity needs aren't too high, consider a Cheetah 15k.3. With sustained transfer rates of 60-75MB/sec, it can sprint... but with sub-4ms seeks and 0.1ms track-to-track, it can also corner. Unless you want to transfer enormous contiguous files back and forth, I think the quick seeks are going to be of greater benefit than a slight edge in straight transfer rate. My storage needs are met by the smallest of the Cheetahs, an 18GB unit, but I realize not everyone's in that boat.

On an nForce 220D board, incidentally, my Ultra160 card gets peak throughput to the motherboard of over 120MB/sec (Adaptec SCSIBench, Cheetah X15-36LP, same-sector read at 128KB block size). If you want a board with high PCI efficiency, nForce looks to be it, with that 400MB/sec full-duplex HyperTransport link leaving plenty of room for other traffic alongside the PCI bus. Interestingly enough, Intel's i850 and some flavors of i845 have a little glitch that effectively caps PCI bandwidth at 90MB/sec; see item 5 here if that's of interest.

Could that be a reason why AMD beats Intel in some video card benchmarks?

 

mechBgon

Super Moderator / Elite Member
Oct 31, 1999
30,699
1
0
I wouldn't think it would be the primary reason, since the AGP card is on its own separate bus to the northbridge, meaning that neither Intel's bug nor nForce's extreme north-to-south bandwidth should have much impact on video performance.