
Does HDT (HyperData Transport) use a real FSB across the CPU-Chipset?

MadRat

Lifer
The whole idea of HDT was to break the reliance on the timing of the memory for the timing of the system bus, and vice versa. If the HDT bus no longer ties the FSB to the system bus then does that mean you can run the FSB to the CPU independent of any effect on the HDT bus?

I mean, prior to HDT you had to watch your FSB settings because of their effect on the PCI, AGP, and ISA slots. If you raised the FSB to say 75MHz then the PCI and AGP ran at 37.5MHz and 75MHz respectively. With HDT I'd think it's possible to keep the PCI and AGP at 33MHz and 66MHz respectively.
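Just to make the divider arithmetic concrete, here's a quick sketch assuming the ratios implied by the numbers above (PCI = FSB/2, AGP = FSB/1 on this hypothetical board; actual boards used various divider tables):

```python
# Hypothetical sketch of fixed-divider clocking on a pre-HDT board,
# where peripheral bus clocks are derived directly from the FSB.
# Assumed ratios: PCI = FSB/2, AGP = FSB (matching the figures quoted above).
def derived_clocks(fsb_mhz):
    """Return the peripheral clocks that fall out of a given FSB setting."""
    return {"PCI": fsb_mhz / 2, "AGP": fsb_mhz}

print(derived_clocks(66))  # {'PCI': 33.0, 'AGP': 66} -- in spec
print(derived_clocks(75))  # {'PCI': 37.5, 'AGP': 75} -- overclocked along with the FSB
```

That's the whole problem in two lines: raise the FSB and everything hanging off those dividers goes out of spec with it.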

Is this the case?

I also wonder if we'll be able to overclock the HDT bus. :Q

*hopes Anand reviews real-life specs of HDT soon* 😎
 
(I assume you are talking about Hyper Transport, formerly known as LDT)

From what I have read, it will operate independently of the CPU's FSB speed. Seeing as it is a point-to-point setup and not a shared bus, the data transfer between the two components that are "talking" will run at speeds of up to 800MB/s.

Thing is, as far as I can tell, Hyper Transport is only for the internals of the motherboard (i.e., northbridge to southbridge, etc.); I don't believe it is for peripherals the way the PCI or ISA bus is.

I have yet to read whether the PCI and ISA buses run off the H/T bus in the new motherboards, or if they are still linked to the FSB.

I do remember hearing something about InfiniBand, which is set to replace PCI/ISA, and it works off the H/T bus.

As for overclocking, its theoretical limit right now is 12.8GB/s. Do you really need any more bandwidth? 😀
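For what it's worth, here's a rough back-of-envelope for where those two figures come from, assuming double-data-rate signalling (bandwidth per direction = link width in bytes × transfers per second). The exact link widths and clocks are my assumption, not something stated in this thread:

```python
# Back-of-envelope HyperTransport link bandwidth, per direction.
# Assumes DDR signalling: two transfers per clock cycle.
def ht_bandwidth_mb_s(width_bits, clock_mhz, ddr=True):
    """MB/s in one direction for a link of the given width and clock."""
    transfers_per_us = clock_mhz * (2 if ddr else 1)  # MT/s
    return (width_bits / 8) * transfers_per_us        # bytes per transfer * MT/s

print(ht_bandwidth_mb_s(8, 400))   # 800.0  -- the 800MB/s figure
print(ht_bandwidth_mb_s(32, 800))  # 6400.0 -- per direction; 12.8GB/s counting both ways
```

So the 12.8GB/s ceiling would be the widest, fastest link with both directions added together, if those assumed parameters are right.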
 
Infiniband isn't going to replace PCI...that's PCI-X's job. Infiniband / Rapid I/O is Intel's distributed I/O to connect servers and mass storage...it's going up against gigabit ethernet and fiber-channel.

There may be a bunch of standards for future bus protocols (Hypertransport, Rapid I/O, Infiniband, V-Link, 3GIO), but the decision of which one to use is up to the chipset manufacturer, as for our purposes they are meant to link the northbridge to the southbridge. Any chipset using a future bus protocol will have a PCI or PCI-X bridge (PCI-X is backwards compatible with PCI), so we don't have to worry about compatibility.
 
Thanks for clearing that up Sohcan!

It seems a lot of the articles I have read were pretty fuzzy on the issues.
 


<< Seeing as it is a point-to-point setup and not a shared bus >>


Point-to-point does not necessarily mean a non-shared bus; that's implementation-dependent, though for most implementations it amounts to virtualized exclusive-bandwidth channel access.
 
A true crossbar design has neither a northbridge nor a southbridge. 🙂 One of the simplest block diagrams for it is a star: all the devices, such as the CPU, AGP, etc., are points of the star, cross-connected to each other with memory as the central point, each with simultaneous exclusive access to the same memory.
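A toy model of that star layout, just to illustrate the topology (device names are made up for the example):

```python
# Toy model of the star/crossbar topology described above: each device gets
# its own dedicated link to the central memory crossbar, so transfers don't
# contend for a single shared bus.
devices = ["CPU", "AGP", "Southbridge", "NIC"]
links = {dev: ("Memory", dev) for dev in devices}  # one independent link per device

# In a shared-bus design there would instead be a single bus that every
# device has to arbitrate for, one transaction at a time.
print(len(links))  # 4 -- four links that can all be busy simultaneously
```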
 
What did that have to do with HDT? Are you suggesting HDT is a star array? I've seen nothing leading to this conclusion.
 