
HyperTransport(TM): What does it all mean?

walla

Senior member
I have read about HyperTransport. I was curious as to what it was since the retail box of my A64 prominently displays it as if it were a selling point.

From what I understand, it is a Consortium/Convention (such as PCI, USB) that outlines a method for implementing "high bandwidth, low latency, *superlative*" data transfer from "chip to chip". From reading AMD docs, it is also possible to "scale" processors using HyperTransport (perhaps for facilitating clustering or multiple-processor configurations in servers).

I guess I am curious: is this a big deal or a marketing gimmick (or a bit of both)? Is this the future or a passing trend? What kind of impact (if any) is it having in the computer engineering field?

Any thoughts welcome.
 
It's quite a big deal. It's an open-standard chipset interconnect, really really fast, capable of interconnecting multiple CPUs just as well as I/O to CPUs. Even mixing and matching chipset components from several vendors works very well. The latest revision even lets you run the HT interconnect over a slot connector - which is now being used in extremely low-latency clustering (this has been in the press just last week).
 
From what I have read so far...

HyperTransport relies on a few key methods in order to produce reliable, low latency signals.

1) LVDS - low-voltage differential signaling, which I understand "immunizes" the signal against noise.
2) Unified clock signal - needs no more than perhaps one or two clock signals to transport, which I understand reduces overhead (read: decreased latency) compared to, say, the PCI alternative.
3) Smaller packet size - again, less communication overhead

However, I wonder... doesn't increasing (effectively doubling) the number of traces in close proximity increase coupling capacitance? Differential signaling would magnify that. And lowering the voltage would seem only to hurt the rate at which the signals could propagate due to capacitance. How does HyperTransport/LVDS deal with these issues... or are they small enough to be ignored currently? I suppose HyperTransport does claim to reduce the number of traces when compared to PCI... so perhaps that is where the gain is. It also claims to need only about 200 mV of differentiation for receiver input, so this too might be a way to offset the increase in capacitance.
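That 200 mV figure can be made concrete with a toy sketch (hypothetical helper names and numbers; the real spec defines the exact thresholds): the receiver only needs the *difference* between the pair to clear a small threshold, so even a swing attenuated by line capacitance and loss can still resolve cleanly.

```python
# Toy model of an LVDS-style differential receiver decision.
# RX_THRESHOLD_MV and resolve_bit are illustrative, not from the HT spec.

RX_THRESHOLD_MV = 200  # minimum differential swing the receiver must see

def resolve_bit(v_plus_mv, v_minus_mv):
    """Decide a bit purely from the differential voltage, in millivolts."""
    diff = v_plus_mv - v_minus_mv
    if abs(diff) < RX_THRESHOLD_MV:
        return None  # indeterminate: swing too small to resolve
    return 1 if diff > 0 else 0

# A 600 mV transmitted swing attenuated to half by the line still decodes:
print(resolve_bit(150, -150))  # 1  (300 mV differential clears 200 mV)
print(resolve_bit(-150, 150))  # 0
```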

Does anyone have thoughts from this perspective? Or literature that compares and contrasts HyperTransport from a non-biased point of view (i.e. not from HyperTransport consortium or PCI sources)?

Thanks.
 
Actually, the lower the peak-to-peak voltage, the faster your switching can be. Slew rates in differential signaling aren't as important as they are in other systems. Slower slew rates mean that you won't get as much EMI emitted from the propagating signal, and they reduce the effects of signal bounce from irregularities in your data line, which helps reduce crosstalk. Smaller voltage amplitudes also mean that you're less likely to have crosstalk in this system, as crosstalk is made worse by larger swings in voltage. The differential aspect eliminates common-mode noise, which is one of the bigger sources of noise in a system.

Putting more traces in closer proximity will increase the coupling capacitance, but for the most part implementing these aspects of HT is enough to offset it. It also depends on the layout of the board. There are design recommendations published for mainboard manufacturers, so as long as they adhere to the design rules it shouldn't be much of a problem. You might start seeing integrity problems when you go to 1 GHz HT and up, though.
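The common-mode rejection point can be shown with a trivial sketch (illustrative values in millivolts, not spec numbers): noise that couples equally onto both traces of the pair cancels out of the difference the receiver looks at.

```python
# Why differential signaling rejects common-mode noise: the receiver
# subtracts the two lines, so anything added to BOTH lines cancels.

def receiver_sees(v_plus_mv, v_minus_mv):
    """A differential receiver only observes the difference, in mV."""
    return v_plus_mv - v_minus_mv

v_p, v_m = 300, -300        # a clean 600 mV differential swing
noise = 200                 # the same spike coupled onto both traces

clean = receiver_sees(v_p, v_m)
noisy = receiver_sees(v_p + noise, v_m + noise)

print(clean, noisy)  # 600 600 -- the difference is untouched by the noise
```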
 
Is there any type of special bus encoding implemented to reduce the number of adjacent signal transitions and static power consumption? Or is this not necessary?

Also, could anyone explain the drawbacks of this approach (LVDS in general, or HyperTransport specifically) compared to a single-ended approach? The only one I see right now is the increased number of pins/traces required. I suppose that might be restrictive where real estate is at a premium.
 
HyperTransport uses packets encoded to correlate to bit times throughout the signaling. The number of bit times you'll have per packet will vary depending on the width of the bus. There are some chipsets out there that support tristating the HT bus when the system enters a power-saving mode, which will save you static power consumption. Also remember that HT can operate at 2-, 4-, 8-, or 16-bit widths, so HT is flexible enough to work around most real-estate problems. An 8-bit bus at 200 MHz would be enough for most southbridge I/O activity for legacy PCI/LPC/IDE/ISA devices. So in theory we could use a 2-bit 800 MHz HT bus for the SB I/O and realize the same bandwidth we had with 8-bit 200 MHz.
As for the main drawback of LVDS, like you said before, it takes 2x the number of traces. But that is offset by the fact that you can have far faster switching and exceed the bandwidth that a non-LVDS system would offer, assuming the same bit width on an HT link.
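The width/clock trade-off above reduces to simple arithmetic, sketched below (hypothetical helper name; HT clocks data on both edges, i.e. DDR, but since that doubling applies to both configurations equally, the equivalence holds either way):

```python
def link_bandwidth_mbps(width_bits, clock_mhz, ddr=True):
    """Raw one-direction link bandwidth in Mbit/s.

    HT transfers data on both clock edges, so transfers/s is twice
    the clock rate when ddr=True.
    """
    transfers_per_sec_millions = clock_mhz * (2 if ddr else 1)
    return width_bits * transfers_per_sec_millions

wide_slow   = link_bandwidth_mbps(8, 200)  # 8-bit link at 200 MHz
narrow_fast = link_bandwidth_mbps(2, 800)  # 2-bit link at 800 MHz

print(wide_slow, narrow_fast)  # 3200 3200 -- identical raw bandwidth
```

Trading a 4x narrower link for a 4x faster clock leaves the bits-per-second product unchanged, which is the flexibility the post describes.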
 