Dual GPU Fermi on the way


T2k

Golden Member
Feb 24, 2004
1,665
5
81

So the source is a supposed Turkish-language site on a German domain, where they reported that some Italian NV PR guy claimed it will be the fastest card and that they will have a dual GPU solution?

Wow, breaking news...

I wonder how Nvidia is going to cool this Video card?
Umm, lemmesee... with air? :D

How long could it be?
Probably 12+ inches, just like the 5970.

Will this be a low clocked part like the HD5970?
Obviously. And I bet they won't even come close to 5970's clocks...

Does anyone think it is possible we could see two 8 pin PCI-E power connectors on Dual GPU Fermi?
I doubt it; it's supposed to be a 40nm part, remember. Even my 5800 Quadros have 6+8 pin IIRC, and they don't even require both of them to be connected if one can supply enough juice.
 

nosfe

Senior member
Aug 8, 2007
424
0
0
Reading between all the spin, I find it interesting that he said that Fermi will be the fastest GPU on the market, but he didn't say anything about the performance of the dual GPU version.

It will be interesting to see if Nvidia will also release a 300W dual GPU card, as it would settle all the performance/power consumption discussions (well, not really, but still).
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Why would anyone involved in drafting the ATX spec standards care how many PCBs you shove onto the card that slips into the ATX slot on the motherboard?

If it is electrically compatible with the standard and conforms to the physical dimension restrictions of the standard, I can't really fathom why anyone 5 or 10 years ago would have cared to count the number of physically distinct PCBs and regulate that as part of the spec.

That would be like regulating the PCB color, or the maximum allowed number of components like caps and VRMs regardless of their electrical specs or usage.

Why would you, or I, or the people responsible for drafting the spec care whether the cards have one, two, or five PCBs "under the hood", provided the integration of the PCBs did not create a product that itself violated electrical spec (power) or physical dimension (weight, etc.) concerns? Makes no sense to me.

I don't care how much it weighs, if there are two PCBs, how long it is (so long as it fits in my case), etc. Maybe AMD mentions that for marketing reasons, nothing else. I was just trying to point out that Nvidia doesn't mind stepping outside of the 'ATX spec', so they very well may make a part that uses more than 300 watts.

If Nvidia does jump past 300 watts with their dual GPU part, AMD may wish that they had done the same, since there's a good chance Nvidia's dual part would be faster. Who knows, maybe we'd see a 5990 at higher voltage and clocks... well, probably not. But you never know.
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
If Nvidia does jump past 300 watts with their dual GPU part, AMD will release a similar OC'd part within weeks, clocked way ahead of the more complex dual-Fermi, so this '5975' would end up being way faster again.

Fixed for ya'... :awe:
 

schenley101

Member
Aug 10, 2009
115
0
0
Who wants to pay probably at least $1000 for a video card... If only I had the money. Also, I don't think Nvidia cares about the ATX spec for this card because it would be very low volume and only for ubergeeks who think 1kW power supplies are small.
 

sxr7171

Diamond Member
Jun 21, 2002
5,079
40
91
The dual card could be two 360s and still take the crown. I'm saying it will, but one would hope for an April/May release. ATI will counter with a refresh.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Even my 5800 Quadros have 6+8 pin IIRC, and they don't even require both of them to be connected if one can supply enough juice.

Over at Guru3D they overclocked the HD5970 to 935 MHz on both cores. Power consumption ended up being 380 watts.

How does a video card exceed 300 watts from a single 8-pin and 6-pin?

Guru3D said something about a PSU with a large 12V rail being necessary for this.
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
Over at Guru3D they overclocked the HD5970 to 935 MHz on both cores. Power consumption ended up being 380 watts.

How does a video card exceed 300 watts from a single 8-pin and 6-pin?

Guru3D said something about a PSU with a large 12V rail being necessary for this.

What is so unbelievable about having a PSU with at least 35A on a 12V rail? Almost any modern unit above 600W can do it, I bet...
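For a rough sense of the numbers, here's a quick back-of-the-envelope in Python (the 35A rail figure is from the post above, the 380W draw is Guru3D's overclock measurement quoted earlier; just an illustrative sketch):

```python
# Rough check of how much a single 12V rail rated for 35A can deliver,
# versus the 380W overclocked HD5970 draw reported by Guru3D.
rail_volts = 12.0
rail_amps = 35.0
rail_watts = rail_volts * rail_amps   # 420W available from that rail

card_draw_watts = 380.0               # Guru3D's overclocked HD5970 figure
print(f"Rail capacity: {rail_watts:.0f}W, card draw: {card_draw_watts:.0f}W, "
      f"headroom: {rail_watts - card_draw_watts:.0f}W")
```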
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
The dual card could be two 360s and still take the crown. I'm saying it will, but one would hope for an April/May release. ATI will counter with a refresh.

Doubtful. If the GTX 295 was two GTX260s, it would have lost pretty soundly to the 4870x2.

In any case, a dual GPU Fermi is most certainly going to arrive much later than the single GPU version. Nvidia is having all of this trouble just making a single GPU card, and you guys expect a dual one to come out soon? They couldn't even make a dual GT200 card until they had a die shrink, and even then they still had to cut it down. Fermi is a LOT bigger and more complex than the GT200. Methinks the dual GPU Fermi is pretty likely to be the same scenario as the 295.
 
Last edited:

cbn

Lifer
Mar 27, 2009
12,968
221
106
Doubtful. If the GTX 295 was two GTX260s, it would have lost pretty soundly to the 4870x2.

The situation is different now.

Nvidia increased memory bandwidth with Fermi (ATI did not increase bandwidth by much over their previous top card). On top of that, Fermi is more than double a GTX 285 core (while ATI only doubled the stream processors from RV790).
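To put rough numbers on the ATI half of that claim, here is a back-of-the-envelope sketch using the published memory specs of the HD 4890 and HD 5870 (the helper function is purely illustrative):

```python
# Memory bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (Gbps)
def bandwidth_gb_s(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

hd4890 = bandwidth_gb_s(256, 3.9)   # ~124.8 GB/s
hd5870 = bandwidth_gb_s(256, 4.8)   # ~153.6 GB/s
print(f"HD 4890 -> HD 5870: {hd4890:.1f} -> {hd5870:.1f} GB/s "
      f"(+{(hd5870 / hd4890 - 1) * 100:.0f}%), while stream processors doubled")
```

So bandwidth went up roughly 23% while the shader count doubled, which is the asymmetry cbn is pointing at.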
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
The situation is different now.

Nvidia increased memory bandwidth with Fermi (ATI did not). On top of that, Fermi is more than double a GTX 285 core (while ATI only doubled the stream processors from RV790).

Edit: I misread

In any case, there is no way to know for sure (or even guess) how fast Fermi is. Contrary to what some people would have you believe, there's zero evidence supporting any kind of idea of how fast it will be. We can only estimate based on history from generation to generation.
 
Last edited:

T2k

Golden Member
Feb 24, 2004
1,665
5
81
The situation is different now.

Nvidia increased memory bandwidth with Fermi (ATI did not increase bandwidth by much over their previous top card).

ATI wasn't and still isn't bandwidth starved - bandwidth alone does not mean crap if there's not enough demand for it. :)

On top of that, Fermi is more than double a GTX 285 core (while ATI only doubled the stream processors from RV790).

It's been publicly stated by Nvidia that a lot of Fermi's extra transistors went into offering DP (double precision) for GPGPU applications - something games won't use. Aside from being useless for games, this introduced a bigger problem: they have to clock down the monster chip to avoid a truckload of problems (wattage vs. supply rails, heat vs. cooling, etc.).
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
ATI wasn't and still isn't bandwidth starved - bandwidth alone does not mean crap if there's not enough demand for it. :)

Yeah, but in BFG10K's HD5770 tests, performance increased almost as much from raising memory bandwidth as it did from raising core speed.

The HD5770 has the same core-to-memory-bandwidth ratio as the HD5870.
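For reference, that ratio worked out from published specs (both cards run an 850 MHz core; the snippet below is just a sanity check, not anything from BFG10K's tests):

```python
# Stream processors per GB/s of memory bandwidth, from published specs.
cards = {
    "HD 5770": {"sps": 800,  "bw_gb_s": 76.8},    # 128-bit GDDR5 @ 4.8 Gbps
    "HD 5870": {"sps": 1600, "bw_gb_s": 153.6},   # 256-bit GDDR5 @ 4.8 Gbps
}
for name, card in cards.items():
    print(f"{name}: {card['sps'] / card['bw_gb_s']:.2f} SPs per GB/s")
# Both work out to ~10.4 SPs per GB/s, i.e. the same core-to-bandwidth balance.
```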
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
It's been publicly stated by Nvidia that a lot of Fermi's extra transistors went into offering DP (double precision) for GPGPU applications - something games won't use. Aside from being useless for games, this introduced a bigger problem: they have to clock down the monster chip to avoid a truckload of problems (wattage vs. supply rails, heat vs. cooling, etc.).

That is a good point then. I didn't know about that.
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
Yeah, but in BFG10K's HD5770 tests, performance increased almost as much from raising memory bandwidth as it did from raising core speed.

The HD5770 has the same core-to-memory-bandwidth ratio as the HD5870.

Sure - I thought the topic was high-end cards, like 285/295 and Fermi. :)
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Sure - I thought the topic was high-end cards, like 285/295 and Fermi. :)

ATI needs to increase the bandwidth on the HD5870. Even the HD4890 made small gains with more memory speed.

Fortunately, the solution for ATI is 7 Gbps GDDR5. But until they feel they need that technology to stay competitive, it won't happen.
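Roughly what 7 Gbps memory would buy on the HD 5870's existing 256-bit bus (a sketch that assumes the bus width stays the same; the 7 Gbps figure is the one mentioned above):

```python
# Bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (Gbps)
def bandwidth_gb_s(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

stock  = bandwidth_gb_s(256, 4.8)   # HD 5870 as shipped: ~153.6 GB/s
faster = bandwidth_gb_s(256, 7.0)   # hypothetical 7 Gbps GDDR5: ~224 GB/s
print(f"{stock:.1f} GB/s -> {faster:.1f} GB/s "
      f"(+{(faster / stock - 1) * 100:.0f}%)")
```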
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
The situation is different now.

Nvidia increased memory bandwidth with Fermi (ATI did not increase bandwidth by much over their previous top card). On top of that, Fermi is more than double a GTX 285 core (while ATI only doubled the stream processors from RV790).

Plus they moved to MIMD, which could yield huge performance gains.

Doubtful. If the GTX 295 was two GTX260s, it would have lost pretty soundly to the 4870x2.

The 260 beat the 4870 in most benchmarks. As far as I know 4870CF was faster than the X2, so your hypothesis is incorrect.
 
Jan 24, 2009
125
0
0
The 260 beat the 4870 in most benchmarks. As far as I know 4870CF was faster than the X2, so your hypothesis is incorrect.

I'm going to disagree with that.

http://www.bit-tech.net/hardware/graphics/2008/10/06/amd-ati-radeon-hd-4870-1gb/1

http://www.guru3d.com/article/amd-ati-radeon-hd-4870-1024mb-review/1

http://www.anandtech.com/video/showdoc.aspx?i=3415

While it is quite close, if you look through every benchmark there you will find the 4870 wins more than it loses. And no, I did not cherry pick these, they were the first 3 on Google, so I think that's a fairly impartial selection on my part.
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Over at Guru3D they overclocked the HD5970 to 935 MHz on both cores. Power consumption ended up being 380 watts.

How does a video card exceed 300 watts from a single 8-pin and 6-pin?

Because they're running it out of spec. At stock speeds, the 5970 should not exceed 300W (although it's close at 294W). By running the video card out of spec, they're also running the power supply lines going to the 6- and 8-pin PCI-E connectors out of spec. Both the card and the lines are apparently capable of handling the extra clock speed/power draw, but obviously it isn't recommended.
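Creig's point in rough numbers, using the usual PCI-SIG per-source limits (75W from the slot, 75W from a 6-pin, 150W from an 8-pin) against Guru3D's measured overclock draw; a sketch for illustration only:

```python
# Spec power budget for a card with one 6-pin and one 8-pin PCI-E connector.
budget_watts = {
    "PCIe x16 slot": 75,
    "6-pin PCI-E":   75,
    "8-pin PCI-E":   150,
}
spec_limit = sum(budget_watts.values())   # 300W total
measured_draw = 380                       # Guru3D's overclocked HD5970 figure
print(f"Spec budget: {spec_limit}W, measured draw: {measured_draw}W, "
      f"over spec by {measured_draw - spec_limit}W")
```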
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
ATI needs to increase the bandwidth on the HD5870. Even the HD4890 made small gains with more memory speed.

Fortunately, the solution for ATI is 7 Gbps GDDR5. But until they feel they need that technology to stay competitive, it won't happen.
No, not at all. The 5870 is very core limited, even at 2560x1600. They only need a faster core (be it through raw clockspeed and/or optimizations) to make a faster card.
I'm going to disagree with that.

http://www.bit-tech.net/hardware/graphics/2008/10/06/amd-ati-radeon-hd-4870-1gb/1

http://www.guru3d.com/article/amd-ati-radeon-hd-4870-1024mb-review/1

http://www.anandtech.com/video/showdoc.aspx?i=3415

While it is quite close, if you look through every benchmark there you will find the 4870 wins more than it loses. And no, I did not cherry pick these, they were the first 3 on Google, so I think that's a fairly impartial selection on my part.
There was a reason NVIDIA quickly released the GTX260 216. ;)
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
The ATX spec may be for 300W, but if you add in enough additional connectors, you can break the spec. The card will work fine, you just can't call it a video card that is under the umbrella of the ATX spec. ;)

Actually, the spec that controls the maximum video card power draw is from PCI-SIG. They are the controlling body of the PCI specification.

http://www.pcisig.com/news_room/faqs/pcie2.0_faq/

Q11: I’ve heard mention that PCI-SIG is working on a new graphics spec – what is it? How is it different from the existing PCIe x16 Graphics 150watt-ATX 1.0 spec?

A11: PCI-SIG is developing a new specification to deliver increased power to the graphics card in the system. This new specification is an effort to extend the existing 150watt power supply for high-end graphics devices to 225/300watts. The PCI-SIG is developing some boundary conditions (e.g. chassis thermal, acoustics, air flow, mechanical, etc.) as requirements to address the delivery of additional power to high-end graphics cards through a modified connector. A new 2x4 pin connector supplies additional power in the 225/300w specification. These changes will deliver the additional power needed by high-end GPUs. The PCI-SIG expects the new specification to be complete in 2007.

In order to meet the current PCI-E v2.0 guidelines, a single video card may draw no more than 300W. So unless a manufacturer wishes to confuse the hell out of their prospective clientele by not being able to put "PCI-E" on the box, they are going to have to stay within the 300W envelope as mandated by PCI-SIG. Which is what ATi has done and which Nvidia will doubtless adhere to as well.

And from what I have been reading, PCI-E 3.0 doesn't appear to have any provisions for increasing the upper limit beyond 300W. So ATi/Nvidia are going to have to keep playing a balancing game between power draw and performance for their dual-GPU cards.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
The situation is different now.

Nvidia increased memory bandwidth with Fermi (ATI did not increase bandwidth by much over their previous top card). On top of that, Fermi is more than double a GTX 285 core (while ATI only doubled the stream processors from RV790).

While this is true, you also have to look at clock speed, which we don't know. Fermi is going to be a big, complex chip. Will Nvidia be able to get 750MHz out of it? Will they get 625MHz? Will they get 540MHz? We just don't know.

And if Fermi becomes available in March, has a 10% performance advantage on average, and costs $475, will people buy it? That would be possibly six full months after the very good and complete 5870s (talking about all the features and improvements the 58x0 cards offer) have been available, cards that already offer all the power the vast majority of people need on the monitors that probably 98% of us have.

The only reason I'm looking forward to Fermi is for it to drop prices... either part will be fine on my 26" monitor, but I have a feeling AMD's will be cheaper. Guess we'll have to wait and see; Nvidia seems to be taking their time.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Fixed for ya'... :awe:

Really? How will they do that? The 5970 is already at the 300W power limit, and power usage skyrockets with even a modest clock increase.

ATI wasn't and still isn't bandwidth starved - bandwidth alone does not mean crap if there's not enough demand for it. :)

Wrong again. This is why the 5970 is regularly beaten by a CF 5870 setup with more memory bandwidth. More memory bandwidth is needed.
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
Really? How will they do that? The 5970 is already at the 300W power limit, and power usage skyrockets with even a modest clock increase.
:rolleyes: Did you even bother to read what I wrote?
It was a joking reply to a theoretical scenario in which NV had already released a dual-Fermi exceeding 300W.

Wrong again. This is why the 5970 is regularly beaten by a CF 5870 setup with more memory bandwidth. More memory bandwidth is needed.

I have no idea what you're talking about. :)