Has the time come for dual-core GPUs?


NoQuarter

Golden Member
Jan 1, 2001
1,006
0
76
Packaging two GPUs together on a single die to make a 'dual-core' GPU makes no sense compared with simply increasing the shader cores, ROPs, etc. If you package two on a die, they would still operate independently of each other, so you gain the complexities and inefficiencies of SLI/CF: each 'core' would have to render its own frame (AFR) or guess at the workload sharing and do SFR.

Performance- and driver-wise it makes more sense to take advantage of the inherent parallelism in GPUs and continue scaling them up instead. Scaling up duplicates the parts that do the work without duplicating the front end.
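The AFR-vs-scaling-up trade-off described above can be sketched as a toy scheduler (a simplified illustrative model, not real driver code):

```python
# Toy model of alternate-frame rendering (AFR) vs. one scaled-up GPU.
# Illustrative only; real drivers and schedulers are far more complex.

def afr_schedule(num_frames, num_gpus=2):
    """Assign each frame to a GPU round-robin, as AFR does.
    Each GPU renders whole frames independently, so each one needs
    a full copy of the scene (mirrored VRAM) and its own front end."""
    return {frame: frame % num_gpus for frame in range(num_frames)}

def scaled_up_schedule(num_frames):
    """A single wider GPU: one front end feeds all the shader cores,
    so every frame goes to 'GPU 0' and no work is duplicated."""
    return {frame: 0 for frame in range(num_frames)}

if __name__ == "__main__":
    print(afr_schedule(6))        # frames alternate between GPU 0 and GPU 1
    print(scaled_up_schedule(6))  # all frames on the one (bigger) GPU
```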
 

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106
It really doesn't make sense to make a dual-core GPU, but I really want AMD to use a different rendering approach, something that doesn't need to mirror VRAM, just like the Lucid Hydra multi-GPU setup.
 

FalseChristian

Diamond Member
Jan 7, 2002
3,322
0
71
The reason you have multiple CPU cores on a single die is that Intel and AMD have hit a frequency wall at about 3.4GHz max. If CPUs could keep going up in frequency, then you wouldn't need multi-core CPUs. GPUs don't have this limitation: by increasing ROPs, texture units, and CUDA cores, NVIDIA, for example, can increase processing power using a single GPU.
 

betasub

Platinum Member
Mar 22, 2006
2,677
0
0
Haven't some articles described the 69xx as architecturally like a dual-core 68xx?

If they did, they'd be way off the mark, considering they are the new VLIW-4 and old VLIW-5 respectively. (Maybe 5870 = 2x 5770 ?)
 

(sic)Klown12

Senior member
Nov 27, 2010
572
0
76
If they did, they'd be way off the mark, considering they are the new VLIW-4 and old VLIW-5 respectively. (Maybe 5870 = 2x 5770 ?)

It comes from the fact that Cayman has dual "Graphics Engines," so even if they shared the same VLIW design it still wouldn't count.
[Attached block diagrams: Cayman and Barts]
 

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
Well, I guess they could make a dual core GPU, but it wouldn't make any sense.

They would be making two cores with 16 ROPs/128 depth units each, 32 TMUs each, and 192 shaders each, and that's no more efficient than a single core with 32 ROPs/256 depth units, 64 TMUs, and 384 shaders.
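The arithmetic above can be tallied quickly (unit counts taken from the post; the point is that the totals are identical either way):

```python
# Totals for the hypothetical dual-core vs. a single wide core,
# using the unit counts from the post above.
one_core = {"ROPs": 16, "depth_units": 128, "TMUs": 32, "shaders": 192}
dual_core_total = {k: 2 * v for k, v in one_core.items()}

single_wide = {"ROPs": 32, "depth_units": 256, "TMUs": 64, "shaders": 384}

# The raw unit counts come out the same, so the dual-core buys no
# extra throughput while adding inter-core coordination overhead.
assert dual_core_total == single_wide
print(dual_core_total)
```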
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
A single massive die.

I guess it doesn't really make sense. If it did, they would be doing it by now.

They already are doing things with a single massive die:

i7-2600k quad core 216 mm2 die size
i7-990x hex core 239 mm2 die size
GTX-560 360 mm2 die size
Radeon 6970 389 mm2 die size
GTX-580 520 mm2 die size

Top end GPUs are already single massive dies.

All the top end GPUs also consume significantly more power than a top end CPU. They're getting to the point where the thermals just can't support much more in the space given to a video card without exotic cooling (water, etc...)

Compare thermals to older top end cards and you can see they've really pushed the massive single die thing pretty far lately:
http://www.geeks3d.com/20090618/graphics-cards-thermal-design-power-tdp-database/

The highest single GPU was a 285 at 205 watts in that gen. Even the 470 of the next gen exceeded that, and the 480 and 580 are significantly higher still. AMD is pushing thermals much higher with the 6970 than with their 4xxx and 5xxx cards. Compare to the NVIDIA 9xxx series, where the highest single-GPU card was 140-ish watts, and you're not even in the same ballpark today; even a "low end" card like a GTX 460 uses more than that.

Remember the X1900XTX that got a lot of flak in its day for being such a power-hungry, hot card? Yeah, 135 watts. If you're using anything bigger than a 5770 these days, you have yourself a 'power hungry' card.

They've already pushed in this direction. It's very clear they have when a card most people won't even consider because it's too low-end, a GTS 450, consumes more power than an old value king, the 9800GT. Looking at the TDPs really puts into perspective just how far they've already pushed in the direction you're suggesting.
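The trend in the figures quoted above can be put side by side (watts as quoted in the post; treat them as approximate TDPs):

```python
# Approximate TDPs quoted in the post above (watts).
x1900xtx = 135      # once considered a "power hungry" card
geforce9_top = 140  # roughly the highest single-GPU card of the 9xxx series
gtx285 = 205        # highest single-GPU TDP of its generation

# How far the "single massive die" approach has been pushed since:
print(f"GTX 285 vs X1900XTX: {gtx285 / x1900xtx:.2f}x the TDP")
print(f"GTX 285 vs top 9xxx: {gtx285 / geforce9_top:.2f}x the TDP")
```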
 

SHAQ

Senior member
Aug 5, 2002
738
0
76
It's a moot point, since every such GPU would need water cooling or a move to tri-slot air coolers. They would also need to be engineered, and GPUs have a shorter life cycle than CPUs, which would make the R&D more expensive. It might happen as a stop-gap measure when the silicon transistor can't be shrunk any more and there is a time lag before graphene, nanotubes, or whatever else replaces silicon. That will be a troubling matter fairly soon (3-5 years).
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Haven't some articles described the 69xx as architecturally like a dual-core 68xx?

I suspect you are thinking of Cypress actually (HD58xx).

http://pc.watch.impress.co.jp/docs/column/kaigai/20090923_317273.html

[Attached diagram from the article: Cypress]


Fritz Kruger (Architect, AMD), who was responsible for the bus design, explained it as follows:
"From a system-architecture perspective, Cypress is similar to RV770. The only major difference is that it is dual-core, though it's dual-core in a different way than a CPU. A crossbar (the bus inside the GPU) can't scale up very far, so, for the same reason CPUs were split into separate cores, we divided it into two cores to simplify the bus structure."
 

tigersty1e

Golden Member
Dec 13, 2004
1,963
0
76
It's so ironic. Single GPUs are masters at having many cores (shaders, CUDA cores) working in conjunction with each other, yet you slap two GPUs together and the scaling is not as ideal as it should be.
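That imperfect scaling can be put in numbers with a simple efficiency model (the percentages below are illustrative, not measured data):

```python
def effective_speedup(num_gpus, scaling_efficiency):
    """Speedup when each added GPU contributes only a fraction of its
    theoretical throughput (simple illustrative model)."""
    return 1 + (num_gpus - 1) * scaling_efficiency

# Shaders inside one GPU scale almost perfectly; two GPUs in
# SLI/CrossFire typically do not (numbers are illustrative).
print(effective_speedup(2, 1.0))   # ideal scaling: 2.0x
print(effective_speedup(2, 0.8))   # imperfect AFR scaling: 1.8x
print(effective_speedup(2, 0.0))   # game with no multi-GPU support: 1.0x
```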
 

lamedude

Golden Member
Jan 14, 2011
1,222
45
91
Voodoo2 says hi. It was basically a Voodoo1 with a second TMU die, and, like single-threaded programs on a CPU, if the game didn't use dual texturing, the second TMU did nothing.