
Dual GPU Fermi on the way

http://vr-zone.com/forums/519373/gf100-will-be-fastest-gpu-dual-gpu-fermi-is-coming-.html

I wonder how Nvidia is going to cool this video card? How long could it be?

Will this be a low clocked part like the HD5970?

Does anyone think it is possible we could see two 8 pin PCI-E power connectors on Dual GPU Fermi?

NV is going to cool the card with no issues.
ATI managed to cool their X2 card, so NV will be more than able to do the same, since there's a 300w limit according to the PCIe spec. That also means the only reason to have 2x8pin is for overclocking.

The main issue NV will have is making sure it's <300w power-wise, which almost certainly means reducing clocks vs the top-end Fermi-based single card.
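The 300w figure is just connector arithmetic. A minimal sketch of the PCIe board-power budget (the x16 slot itself is rated for 75 W, a 6-pin auxiliary connector adds 75 W, and an 8-pin adds 150 W):

```python
# PCIe graphics-card power budget, per the PCI-SIG ratings:
# x16 slot = 75 W, 6-pin aux connector = 75 W, 8-pin aux connector = 150 W.
SLOT_W = 75
SIX_PIN_W = 75
EIGHT_PIN_W = 150

def board_power_limit(six_pins=0, eight_pins=0):
    """Maximum in-spec board power for a given connector loadout."""
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

# 6-pin + 8-pin (the HD 5970's loadout) -> 300 W
print(board_power_limit(six_pins=1, eight_pins=1))  # 300
# 2x 8-pin -> 375 W, hence the association with overclocking headroom
print(board_power_limit(eight_pins=2))              # 375
```

That's why a 2x8pin board is read as an overclocking gesture: it raises the ceiling from 300 W to 375 W.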
 

HD5970 is downclocked pretty heavily to make the 300 watt limit.

I can only imagine the clocks on Fermi will be even lower than the single GPU variant.

Speaking of Overclocking, do you think board partners will release 2x8pin variants?
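The downclocking makes sense once you look at how dynamic power scales: roughly frequency times voltage squared. A rough sketch of the effect (the 725 MHz and 850 MHz clocks are the real HD 5970 / HD 5870 core clocks; the voltage figures are illustrative round numbers, not official specs):

```python
# Dynamic power scales roughly as P ~ C * f * V^2, which is why dual-GPU
# boards drop both clock AND voltage to fit inside a fixed power budget.
def relative_power(freq_mhz, volts, base_freq_mhz, base_volts):
    """Power of a downclocked/downvolted chip relative to the full part."""
    return (freq_mhz / base_freq_mhz) * (volts / base_volts) ** 2

# HD 5970 per-GPU vs a full HD 5870: 725 MHz vs 850 MHz,
# assuming ~1.05 V vs ~1.16 V (assumed values for illustration)
scale = relative_power(725, 1.05, 850, 1.16)
print(f"each downclocked GPU draws ~{scale:.0%} of a full one")  # ~70%
```

A modest clock cut plus a small voltage cut compounds quickly, which is how two GPUs can squeeze under a budget that one full-speed chip nearly fills.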
 
Lol.

Dual RV870 pushes 300W already; the ceiling is actually 375W.

How the hell will dual Fermi, with its 384-bit bus and 3 billion transistors per chip, stay within 375W?

There's not much room within spec for it to expand to a 2-GPU solution.

Unless Jen-Hsun takes an icepick to its frontal lobes...
 

Didn't the Anandtech article say 300 watts is the maximum for the ATX spec (according to PCI SIG)?

Can Nvidia even release a 2x8 pin Dual Fermi? (which corresponds to 375 watts)
 

Now that you mention it yes... I can't find official documentation that supports 375W, only 300W, so I appear to be mistaken. Physically, 375W appears to be possible though.

Maybe there will be a new spec in the future? 5970 has the power pinouts, plus the empty VDDC slave slots for more current.

Still... 2 Fermis on one card? Facts remain: 3 Bn Transistors per chip, 384 bit GDDR5 bus. It would be even more lobotomized than I thought if it's limited to 300W as a dual GPU config.

Not a fantastic showplace for the tech. And that ought to qualify as an understatement.
 
The ATX spec may be for 300W, but if you add in enough additional connectors, you can break the spec. The card will work fine, you just can't call it a video card that is under the umbrella of the ATX spec. 😉
 
Which surely creates some other nice problems, otherwise I can't imagine why ATI wouldn't have done the same for their 5970 cards.
 
Perhaps a dual gpu flavor after a die shrink?

Ehh...they (TSMC) are only now getting 40nm online and you think nVidia has time to wait two years for TSMC to get 32nm before making a dual GPU Fermi? The most likely scenario is a lower clocked Fermi. Something like what ATI did with the Radeon 5970.
 
According to AMD, a dual-PCB 'sandwich' card breaks the ATX spec, which Nvidia doesn't mind building. So who knows if Nvidia will pay any attention to the 300 watt limit, as they are already breaking the ATX spec with sandwich cards.

*edit - Three trouts in a row. Unit fish!
 
Wild speculation here...

Perhaps NV is planning on binning Fermi cores that won't function as full fledged GTX 380s for a dual gpu card that meets the ATX spec. The large volume of these defective chips may be why they will be launching a dual gpu card 'sooner than expected'.
 
You mean, kind of like AMD's X3 CPUs? (well in this case it'd be.. x3x2? or should it be x2x3?) That'd be interesting. 🙂
 

That seems unlikely. ATi had to go the other way with the 5970, and bin chips that would function at a lower voltage than 5870 chips, just to stay below 300W. They are not going to be able to make a dual GPU card worth anything with defective chips.

'Sooner than expected' is probably entirely because of the 5970.
 

Why would anyone involved in drafting the ATX spec care how many PCBs you stack onto the card that slips into the slot on the motherboard?

If it is electrically compatible with the standard and conforms to its physical dimension restrictions, I can't really fathom why anyone 5 or 10 years ago would have cared to count the number of physically distinct PCBs and regulate that as part of the spec.

That would be like regulating the PCB color, or the maximum allowed number of components like caps and VRMs regardless of their electrical specs or usage.

Why would you, or I, or the people responsible for drafting the spec care whether a card has one, two, or five PCBs "under the hood", provided the integration of those PCBs did not create a product that itself violated electrical (power) or physical (dimensions, weight, etc.) constraints? Makes no sense to me.
 

Preeetty sure that the GTX 260 is a "defective" GTX 280, hence why 24-48 stream processors are DISABLED, and not completely stripped from the hardware. So binning chips for a dual GPU card is not at all unlikely.
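For reference, the GT200 salvage math works out cleanly if you count in clusters of 24 stream processors; a sketch using the well-known GTX 280 / GTX 260 shader counts:

```python
# GT200 salvage binning in numbers: the die has 10 TPC clusters of
# 24 stream processors each. GTX 280 ships with all 10 enabled; the
# original GTX 260 has 2 clusters fused off, the later Core 216 only 1.
SP_PER_CLUSTER = 24
TOTAL_CLUSTERS = 10

def shipped_sps(disabled_clusters):
    """Stream processors exposed after fusing off defective clusters."""
    return (TOTAL_CLUSTERS - disabled_clusters) * SP_PER_CLUSTER

print(shipped_sps(0))  # 240 -> GTX 280
print(shipped_sps(2))  # 192 -> GTX 260
print(shipped_sps(1))  # 216 -> GTX 260 Core 216
```

So salvage parts are a normal part of the lineup; the open question is whether salvaged Fermi dies would clock high enough to make a dual-GPU card worthwhile.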
 
Does it mean that Fermi has no chance against 5970 and they need dual Fermi as well?

I would hope for AMD's sake that a single Fermi chip couldn't beat an HD5970, because otherwise AMD would be seriously outclassed.
Dual Fermi gives Nvidia bragging rights for the best-performing single card; they won't be able to get that with a single chip.
 
Now wouldn't it be interesting if it was two full-blown Fermi cores at full clocks, but with a Voodoo 5 6000-style breakout-box power supply: running one core off system power and the other off the external brick. I know this is completely unlikely, but it's just fun to mention 🙂
 