
Serial ATA

burek

Member
I was just browsing Akiba and I came across some pictures of Maxtor HDs using Serial ATA 1 2

It's an IDE RAID card from 3ware with Serial ATA support, which has just started showing up in Akiba, and it can connect Maxtor IDE HDDs. Standard IDE HDDs are attached through a Serial ATA conversion card. The interface cable is not the wide flat ribbon cable used until now; it's a thin, compact cable that looks more like a LAN cable. The card has as many as 12 ports, and since the cables and connectors are small, it should be fairly easy to connect a large number of HDDs. At the same corner, a 3ware card with 12 ports of the conventional IDE interface was also on display.
 
Great information, but I wonder if a SATA adapter will increase performance if it was used on a regular ATA drive.
 
Originally posted by: Utterman
Great information, but I wonder if a SATA adapter will increase performance if it was used on a regular ATA drive.

Well, it depends on whether it's the hard drive or the motherboard that's holding back performance. I'd put my money on the motherboard.
 
Not sure what you mean by the motherboard holding back performance. If it's an ATA/33 motherboard controller and an ATA/133 hard drive, then yes, it's holding it back; but if you're using an ATA/133 controller with an ATA/133 hard drive, there should be no performance difference at all as far as a single drive is concerned. The adapter can't make the drive's electronics run any faster than the ATA/133, 100, or 66 they were designed for, so the 150MBps of SerialATA will go unused on any drive that is not natively SerialATA. It will, however, let you run a faster drive at full speed on a motherboard whose own controller can't keep up, just as any other high-speed IDE adapter card would.

Of course, with the consumer PCI bus it's all a moot point anyway, since PCI only has a 133MBps transfer rate, and that bus is in many cases also carrying a network card, a sound card, maybe a TV card, and a SCSI card for CD-ROMs, all of which take up a bit of that scarce bandwidth. SerialATA will be of limited value performance-wise, even with native drives, until PCI-X becomes available and common, or until 66MHz/64-bit PCI slots become common on consumer boards (unless, of course, the SerialATA controller uses a large cache to allow high bursts to the drive despite lower bandwidth to the chipset).
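Here's a rough back-of-the-envelope sketch of those two points, using only the standard theoretical peak figures (clock times width for the parallel buses); the variable names and the min() framing are just mine for illustration, nothing here is measured:

```python
# Rough sketch: theoretical peak bus bandwidth vs. one SerialATA channel.
# All figures are the usual clock * width numbers, not measured throughput.

def peak_mbps(clock_mhz: float, width_bits: int) -> float:
    """Theoretical peak transfer rate of a parallel bus in MB/s."""
    return clock_mhz * width_bits / 8

PCI_33_32 = peak_mbps(33.3, 32)      # ~133 MB/s, shared by every card on the bus
PCI_66_64 = peak_mbps(66.6, 64)      # ~533 MB/s
PCI_X     = peak_mbps(133.3, 64)     # ~1066 MB/s
SATA_LINK = 150                      # MB/s per SerialATA channel
ATA133    = 133                      # MB/s, the drive's native parallel interface

# A non-native drive behind a SATA adapter can never beat its own interface:
effective = min(ATA133, SATA_LINK, PCI_33_32)
print(f"ATA/133 drive on a SATA adapter on plain PCI: ~{effective:.0f} MB/s ceiling")

for name, bw in (("PCI 33 MHz/32-bit", PCI_33_32),
                 ("PCI 66 MHz/64-bit", PCI_66_64),
                 ("PCI-X 133 MHz/64-bit", PCI_X)):
    verdict = "can't even feed" if bw < SATA_LINK else "has headroom beyond"
    print(f"{name}: ~{bw:.0f} MB/s -> {verdict} one 150 MB/s SATA channel")
```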

This only limits burst speeds, of course, since no hard drive can stream data anywhere near fast enough to make even ATA/66 choke; but without the increased burst rates, SerialATA provides little to gain as far as performance. The few "pseudo-SCSI" features won't make much difference with just a single drive, and probably not much even with a couple of drives.

For a long time the primary advantage will simply be the smaller cables, which looks like enough of an advantage for most people, but apparently not enough of one to have brought the products to market, even though the spec has been around for years now.
 
Lookin' sweet. Cabling reduced, higher bandwidth. Now we just need faster motherboards and drives.

For motherboards, using PCI-X is one solution. Having a separate bus for EIDE devices could be another. Is it a requirement that only one bus (PCI) exist on a motherboard? Could there not be a PCI/PCI-X bus working the peripherals while another concurrent bus is dedicated to EIDE devices only? Maybe the AGP connector could even exist separately, rather than as an extension to the PCI bus, as it is now.

On the drive side of the equation I think that we're hitting a wall as far as mechanical devices go. We can increase the densities of the platters, the rotational speed of the drive, or the number of heads, but in the end I think we're beating a dead horse here. It seems to me that static forms of memory would be more scalable in the future. Whether that is a form of flash, store and read of quantum states, or something even less well known...
 
AGP isn't an extension of the PCI bus, they do exist separately. If you look at a chipset layout, AGP is controlled directly by the northbridge, as the AGP controller gets direct access to the memory system without passing anywhere near PCI.

There are (or will be) boards with PCI and PCI-X slots on them. I think it was Intel boards I saw mentioned in this case but as PCI-X becomes usable and devices become available, there'll be many boards with both types of slots, just like there are still boards with ISA and PCI.

There's really no need to separate the drives from the bus, as long as the PCI bus is made fast enough. Hard drive controllers, even with lots of fast drives, still don't need the same capabilities that caused AGP to come about (which was really more about making systems cheaper than about making them faster). Separating them would just increase the complexity of the chipsets and motherboard designs needlessly. If you go down that road, we may as well design a separate bus and slot-type for every possible device, so they each get dedicated access to the chipset.

Looking at the layouts for SiS chipsets, it looks like the IDE ports don't even ride on the PCI bus anymore; the onboard controller may not be reliant on PCI speeds at all, but instead goes directly into the southbridge and across the chipset interconnect to the northbridge. So onboard controllers that are integrated into the chipset may not be affected by the low PCI bus bandwidth. Up to now, there's not been much difference between add-on controllers (including HighPoint/Promise chips mounted on the motherboard which use the PCI bus) and integrated controllers, but this could be the point where a difference does start to show, if the burst rates of drives, especially in RAID, can reach the 150MBps rate.
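To put a rough number on that last point (just a sketch; the per-drive burst figure is an assumed round number, not a measurement), here's where the PCI ceiling starts to pinch an add-on card while a chipset-integrated controller keeps scaling:

```python
# Sketch: aggregate burst rate of a striped array vs. the path it sits on.
# Per-drive burst figure is an assumption for illustration only.

PCI_CEILING = 133        # MB/s, shared 33 MHz / 32-bit PCI bus
DRIVE_BURST = 60         # MB/s, assumed cache-burst rate of one drive

for drives in range(1, 5):
    raw = drives * DRIVE_BURST
    on_pci_card = min(raw, PCI_CEILING)   # add-on controller rides the PCI bus
    on_chipset = raw                      # integrated controller bypasses PCI (per the SiS layout)
    print(f"{drives} drive(s): raw burst ~{raw} MB/s | "
          f"PCI card caps at {on_pci_card} MB/s | chipset path ~{on_chipset} MB/s")
```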
 
AGP isn't an extension of the PCI bus, they do exist separately.

Right you are. I guess for some reason I was under the impression that it was basically a 2X PCI, but upon further research it's clear that the AGP bus, while a superset of the PCI bus, is architected quite differently. I think in the future motherboard makers may not have to adopt a "SiS-like" approach of keeping the EIDE devices off the PCI bus. Looking at the PCI-X spec sheets, it looks as though future PCI bus architectures will vastly outstrip our ability to transfer data from fixed storage. PCI-X supports 10Gb Ethernet. If HDD technology doesn't keep pace, we could be pulling data off the network as fast as or faster than we pull it from our HDDs.
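Rough math on that network-vs-disk gap (just a sketch; the sustained-disk number is a ballpark guess for a drive of this era, not a spec):

```python
# Sketch: 10Gb Ethernet line rate vs. what a single drive can stream.
# The sustained-disk figure is an assumed ballpark, not from any datasheet.

TEN_GBE_MBPS = 10_000 / 8    # 10 Gb/s line rate ~= 1250 MB/s
DRIVE_SUSTAINED = 45         # MB/s, rough guess for a 7200 rpm ATA drive

drives_to_match = TEN_GBE_MBPS / DRIVE_SUSTAINED
print(f"10GbE line rate: ~{TEN_GBE_MBPS:.0f} MB/s")
print(f"One drive sustains ~{DRIVE_SUSTAINED} MB/s -> "
      f"you'd need ~{drives_to_match:.0f} drives streaming flat-out to keep up")
```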
 
I have already seen a pic of a SATA controller; it was on a 66MHz/64-bit PCI card.

Considering that current HDD limitations come from a combination of the HDD speed itself and the PCI bus, SATA should offer some performance enhancement over existing ATA implementations (adapters exist, and are pretty small).
 
SerialATA can't provide any enhancement for two reasons: one drive per channel, and hard drives that CANNOT keep up with a 150MBps throughput, or even 100MBps throughput except sometimes in a burst.

Since the 150MBps applies to each channel, just like ATA/133 applies to each channel, there's only a slight increase in bandwidth, and no hard drive will come near that for a while. And since you can't have more than one device active on a single channel for either type, the extra bandwidth can't be used to "aggregate" multiple drives' bandwidth needs.

SerialATA would have been SO much better a spec if they'd allowed multiple devices (at least 2) on a single cable, and for both devices to be active at the same time. Even if it had meant like 2 extra pins on the cable, it'd still be smaller than a ribbon cable and the increased utilization of the available performance would have been more cost-effective. Making controllers that just keep getting more and more bandwidth available per-port is useless if the drives simply can't use that bandwidth.
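A quick sketch of that utilization argument (the per-drive sustained figure is an assumed ballpark for a drive of the time, and the two-drives-per-cable case is the hypothetical layout described above, not anything in the spec):

```python
# Sketch: how much of one 150 MB/s SerialATA channel gets used with one
# drive vs. a hypothetical two-drives-per-cable arrangement.

SATA_CHANNEL = 150      # MB/s per channel
DRIVE_SUSTAINED = 45    # MB/s, assumed sustained rate of one drive

for drives in (1, 2):
    demand = drives * DRIVE_SUSTAINED
    used = min(demand, SATA_CHANNEL)
    print(f"{drives} drive(s) on the channel: ~{demand} MB/s demanded, "
          f"{used / SATA_CHANNEL:.0%} of the channel's bandwidth used")
```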
 