Has the time come for dual-core GPUs?

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
I'm just thinking, seeing as how a pair of GTX 460s is as fast as a GTX 580 in most cases, would it not make sense for nVidia to stitch a pair of GF104 GPUs together on the same die?

I'm wondering if the GPU makers are finally hitting the same thermal wall that the Pentium 4 hit several years ago.
 

aphelion02

Senior member
Dec 26, 2010
699
0
76
I don't think it's the same; GPUs are already massively 'multi-cored', so to speak. I think the limitation is really the die size? Hence you have multi-GPU cards.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
I don't think it's the same; GPUs are already massively 'multi-cored', so to speak. I think the limitation is really the die size? Hence you have multi-GPU cards.
There are still parts of the GPU that are not massively multi-cored, such as the ROPs.

I'm just thinking that a pair of GTX 460s would be cheaper to produce for nVidia while still being as fast as a GTX 580. It would also have saved them the engineering cost associated with such a massive design.
 

f4phantom2500

Platinum Member
Dec 3, 2006
2,284
1
0
i don't think that would be too much different from sli/crossfire or multi-gpu cards. the only advantage i could think of in having 2 gpus on the same physical die (as opposed to 2 independent dies on the same card) is lower latency between the 2. but, as the designs of modern gpus are already massively parallel, i don't think that this difference would be that substantial. furthermore, to use your 460/580 example, the gtx 460 has a die size of 332 mm^2, while the 580 has a die size of 520 mm^2. so if you assume a straight doubling of die size, you're looking at a 664mm^2 die. if this is accurate, then i don't think this is a viable product for a company like nvidia to manufacture.


There are still parts of the GPU that are not massively multi-cored, such as the ROPs.

I'm just thinking that a pair of GTX 460s would be cheaper to produce for nVidia while still being as fast as a GTX 580. It would also have saved them the engineering cost associated with such a massive design.


wait so what exactly are you talking about, 2 independent gpus on 1 card or a single massive die comprised of 2 gpus sandwiched together? because the former already exists, you know.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
wait so what exactly are you talking about, 2 independent gpus on 1 card or a single massive die comprised of 2 gpus sandwiched together? because the former already exists, you know.
A single massive die.

I guess it doesn't really make sense. If it did, they would be doing it by now.

It's all marketing. The GTX 580 costs 5X as much as a GTX 460 while only having twice as much die size.
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,393
8,552
126
A single massive die.

I guess it doesn't really make sense. If it did, they would be doing it by now.

It's all marketing. The GTX 580 costs 5X as much as a GTX 460 while only having twice as much die size.

it's easier from a driver standpoint to just double the resources for the one core than to have two separate cores, i'd imagine.
 

Gikaseixas

Platinum Member
Jul 1, 2004
2,836
218
106
it's easier from a driver standpoint to just double the resources for the one core than to have two separate cores, i'd imagine.

Agree, but the negatives would outweigh the positives IMO. First the engineers must come up with a much more efficient thermal solution...
 

pcm81

Senior member
Mar 11, 2011
598
16
81
Dual-core GPUs are way-back-when, 1999-era stuff....
For example, the Radeon HD 6970 is a 1536-core GPU and the GTX 580 is a 512-core GPU. The cores on the 580 and the 6970 are different though: the 6970 is a SIMD architecture, while the 580 is a MIMD architecture.
 

brybir

Senior member
Jun 18, 2009
241
0
0
Agree, but the negatives would outweigh the positives IMO. First the engineers must come up with a much more efficient thermal solution...



The biggest hurdle from a design, manufacturing and profit standpoint is fabrication.

Every wafer that is fabricated has some defects that cause yield to decrease. The larger a die, the larger the chance that a single defect renders that die useless.

Basically, if a wafer has 100 possible dies and 5 defects occur, they would harvest 95 working dies. Now double the die size: the wafer holds only 50 possible dies, and they would harvest 45. They still lose five, but five lost out of 50 is twice as expensive, since the cost per wafer remains the same.
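A rough sketch of that yield math, using a simple Poisson defect model (the defect density below is an assumed number purely for illustration, not a real fab figure; the die areas are the ones quoted earlier in the thread):

```python
import math

def poisson_yield(die_area_mm2, defects_per_mm2):
    """Fraction of dies expected to come out defect-free under a simple Poisson model."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

D0 = 0.001  # assumed defect density (defects per mm^2), for illustration only

for area in (332, 520, 664):  # GF104, GF110, and a hypothetical doubled GF104
    print(f"{area} mm^2 die: ~{poisson_yield(area, D0):.0%} of dies usable")
```

With those assumed numbers the 332 mm^2 die comes out around 72% usable, the 520 mm^2 die around 59%, and the hypothetical 664 mm^2 die only around 51%, on top of fewer candidate dies fitting on each wafer.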

That is just one example of the fabrication problems that get worse as a chip gets larger. See for example this discussion:

http://www.quora.com/Why-cant-CPUs-be-physically-bigger



Edit: GPU dies are already some of the largest dies fabricated.
 

brybir

Senior member
Jun 18, 2009
241
0
0
it's easier from a driver standpoint to just double the resources for the one core than to have two separate cores, i'd imagine.

I think you are right, but it also presents a host of other engineering problems on the fabrication side, so I suppose it's a cost-benefit tradeoff overall.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
it's easier from a driver standpoint to just double the resources for the one core than to have two separate cores, i'd imagine.
Yeah but they need to create those drivers to allow for GTX 460 SLI to begin with.

I have a feeling that it would make some sense for AMD/NV to go this route. Perhaps we'll see it happen at some point. It would actually make more sense for AMD given their design philosophy (they no longer target the high end of the market with a single huge die).
 

SHAQ

Senior member
Aug 5, 2002
738
0
76
The 580 has 512 cores already, and putting two on 1 PCB is "dual core" in the way you are thinking of it. It's already topped out as it is. Putting 2 GPUs together on one die, I imagine, wouldn't be thermally doable. Imagine what a heatsink would look like that had to cover 2 massive GPUs at over 100 watts apiece. The Megahalem can only cool somewhere around 180 watts or so. If they were linked internally, GPU scaling would be moot (I think), but scaling is pretty good considering the GPUs can be cooled separately. Finally, wouldn't it be expensive to engineer on top of it?
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
A single massive die.

I guess it doesn't really make sense. If it did, they would be doing it by now.

It's all marketing. The GTX 580 costs 5X as much as a GTX 460 while only having twice as much die size.

They are doing it now. Every GPU is a "multi core, single die" setup.
Also there's more to a graphics card cost than how big the GPU die is...

GT 430:
[GF108 die shot]

GTX 580:
[GF100 die shot]


"Uncore" = memory controllers, gigathread engine, host interface and L2 cache.


Here's a Core i7 Quad core:
[Nehalem die shot with callouts]


4xcore = the green blocks on the NV die.
The rest = memory controller, cache, etc, "uncore" stuff.

Here's a Core i3 dual core: (the chip on the right, the left is the on-package GPU).
[Clarkdale die shot]

Most things are the same, but with two fewer cores, aka fewer green blobs.


If you meant two GPUs on the same package (like in the bottom shot, or like the old Intel quad-core chips), then how would that help with a thermal wall? And how would it be particularly more effective than two GPUs on the same board, unless they moved the memory controller off-chip and made it a third chip on the same package, so you end up with <die> <memory controller> <die> and try to share the memory across two chips without any duplication?
 

Meaker10

Senior member
Apr 2, 2002
370
0
0
There are still parts of the GPU that are not massively multi-cored, such as the ROPs.

I'm just thinking that a pair of GTX 460s would be cheaper to produce for nVidia while still being as fast as a GTX 580. It would also have saved them the engineering cost associated with such a massive design.

What is not parallel about a chip having 8-32 ROPs?
 

VirtualLarry

No Lifer
Aug 25, 2001
56,571
10,207
126
If it were a valid engineering idea, and the market supported it, it would have been done already. Therefore, it doesn't make sense. The reason why is left as an exercise for the reader.
 
Nov 26, 2005
15,189
401
126
Didn't Cypress almost do this, but the idea got cut due to keeping within budget or thermal envelope or something? I remember reading about how they were going to link something together but never did... there was a diagram of the design... the engineer was maybe the head of Fusion?

sorry, just going on vague recall
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
If it were a valid engineering idea, and the market supported it, it would have been done already. Therefore, it doesn't make sense. The reason why is left as an exercise for the reader.
It seems as though for something like this it takes an engineering failure to bring it about. We would still be on single-core CPUs if it had not been for the Pentium 4.
 

betasub

Platinum Member
Mar 22, 2006
2,677
0
0
Didn't Cypress almost do this, but the idea got cut due to keeping within budget or thermal envelope or something? I remember reading about how they were going to link something together but never did... there was a diagram of the design... the engineer was maybe the head of Fusion?


Was that SidePort? Giving direct GPU-GPU communication to improve expandability of the graphics units.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
If you meant two GPUs on the same package (like in the bottom shot, or like the old Intel quad-core chips), then how would that help with a thermal wall? And how would it be particularly more effective than two GPUs on the same board, unless they moved the memory controller off-chip and made it a third chip on the same package, so you end up with <die> <memory controller> <die> and try to share the memory across two chips without any duplication?
I did mean two GPUs on the same package, similar to how Intel used to stick a pair of Pentium 4s together to create the Pentium D.

It's more effective in the sense that they only have to do the R&D for a midrange-class GPU. They can forgo the creation of something high-end like the GTX 580 entirely.

I'm sure that the GTX 580 is the more elegant solution at this point, however there is no denying that a pair of GTX 460s can keep up with it while consuming about the same amount of power under load, and considerably less power when idle.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
I did mean two GPUs on the same package, similar to how Intel used to stick a pair of Pentium 4s together to create the Pentium D.

It's more effective in the sense that they only have to do the R&D for a midrange-class GPU. They can forgo the creation of something high-end like the GTX 580 entirely.

I'm sure that the GTX 580 is the more elegant solution at this point, however there is no denying that a pair of GTX 460s can keep up with it while consuming about the same amount of power under load, and considerably less power when idle.

You would need to somehow deal with memory access.
 

pcm81

Senior member
Mar 11, 2011
598
16
81
It seems as though for something like this it takes an engineering failure to bring it about. We would still be on single-core CPUs if it had not been for the Pentium 4.

??? What do you mean?

The P4 was not a failure; it simply pushed the limits of the manufacturing of the time, so they could not keep scaling its clock speed and had to go dual-core (and then to the Core 2 design) to give more processing power at the same gate size and clock speeds... OTOH, GPUs are already N-core systems, so the challenge with them is to manage power density and keep them cool. A dual-core GPU is a bad idea, because either it means you will have a lot fewer GPU cores (2 big cores vs the 512 CUDA cores there are now), or, if you are thinking of lumping 2x GPUs together on a single chip (an integrated 590 or 6990 on 1 chip), then you are also in trouble because of power density and you can't keep such a system cool. Notice how the 6970 GPU only has 1536 stream processors instead of going to 3200, which would be double the 5870. The 5870 doubled the stream processors of the 4870 (800 to 1600), but all the 6970 could do was speed those stream processors up; they could not stick 3200 of them on a single chip due to power density concerns...
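A back-of-the-envelope sketch of that power-density point, reusing the die areas quoted earlier in the thread (332 mm^2 and 520 mm^2) and assuming ballpark board powers of roughly 160 W for a GTX 460-class chip and 244 W for a GTX 580-class chip (those wattages are assumptions for illustration):

```python
# Rough heat comparison: one big die vs. two mid-range dies sharing a package/cooler.
# Die areas (332, 520 mm^2) come from earlier in the thread; the wattages are
# assumed ballpark board powers, used only for illustration.

def power_density(watts, area_mm2):
    """Watts dissipated per mm^2 of die area."""
    return watts / area_mm2

big_die_watts, big_die_area = 244, 520   # GTX 580-class chip (assumed TDP)
mid_die_watts, mid_die_area = 160, 332   # GF104-class chip (assumed TDP)

print(f"Single big die: {power_density(big_die_watts, big_die_area):.2f} W/mm^2, "
      f"{big_die_watts} W under one cooler")
print(f"Two mid dies:   {power_density(2 * mid_die_watts, 2 * mid_die_area):.2f} W/mm^2, "
      f"{2 * mid_die_watts} W under one cooler")
```

Under those assumptions the watts-per-area come out about the same either way; the problem is the total heat one cooler has to move, which goes up by roughly a third.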
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
It bothers me when people describe GPUs as being multi-cored. I can't say what the criteria need to be to consider whatever part of the GPU a "core", but considering they are designed from the outset to be massively parallel, I would leave out "multi-core" talk unless we are talking about physical dies.
 

pcm81

Senior member
Mar 11, 2011
598
16
81
It bothers me when people describe GPUs as being multi-cored. I can't say what the criteria need to be to consider whatever part of the GPU a "core", but considering they are designed from the outset to be massively parallel, I would leave out "multi-core" talk unless we are talking about physical dies.

Well, let's look on wiki:
http://en.wikipedia.org/wiki/Multi-core_processor

Personally, I'd argue that a core is a chunk of hardware capable of executing a set of instructions, 1 instruction at a time. So what we have are 2 types of multi-core infrastructures, MIMD and SIMD: multiple instruction, multiple data (multi-core CPUs and GTX GPUs with CUDA cores) and single instruction, multiple data (Radeon GPUs with multiple stream processors).
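Neither GPU is literally programmed this way (NVIDIA's cores actually execute in SIMT groups rather than fully independently), but as a loose CPU-side analogy to those two labels:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# SIMD-style: one instruction applied across many data elements in lockstep,
# like a Radeon stream-processor array working through a wavefront.
a = np.arange(8)
b = np.arange(8) * 10
simd_result = a + b  # a single vectorized "instruction" over all eight elements

# MIMD-style: independent workers, each free to run its own instruction stream,
# closer to the "many small independent cores" picture painted for CUDA cores.
def square(x):
    return x * x

def cube(x):
    return x * x * x

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(square, 3), pool.submit(cube, 3)]

print(simd_result)                    # [ 0 11 22 33 44 55 66 77]
print([f.result() for f in futures])  # [9, 27]
```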