How come GPUs don't shrink as fast as CPUs?

sparks

Senior member
Sep 18, 2000
535
0
0
We are at the dawn of the 45nm process for the next Intel CPUs, yet GPUs from Nvidia and AMD are stuck at 90nm and 80nm respectively. CPUs have gone from 90nm to 65nm to 45nm in a relatively short time. Is there something inherent in GPUs that does not lend itself to die shrinks? You'd think GPUs would benefit more from a die shrink than the current generation of CPUs.
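For a rough sense of what's at stake: transistor density scales roughly with the inverse square of the feature size, so a full-node shrink roughly halves the area of the same design. A quick back-of-the-envelope sketch (illustrative only; real shrinks are never perfectly optical):

```python
# Rough die-area scaling between process nodes. For a straight optical
# shrink, area scales with (new_node / old_node)^2; real processes
# deviate, so treat these numbers as ballpark figures.

def area_scale(old_nm: float, new_nm: float) -> float:
    """Approximate area ratio after shrinking from old_nm to new_nm."""
    return (new_nm / old_nm) ** 2

for old, new in [(90, 80), (90, 65), (65, 45)]:
    print(f"{old}nm -> {new}nm: ~{area_scale(old, new):.0%} of original area")

# 90nm -> 80nm: ~79% of original area (the half-node step ATI took)
# 90nm -> 65nm: ~52% of original area (a full node roughly halves the die)
# 65nm -> 45nm: ~48% of original area
```

With the biggest 90nm GPU dies sitting in the 400-500mm^2 range, halving the die would be a huge win for cost, yield, and power, which is exactly why the question is worth asking.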
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
I think it's due to the fact that Intel and AMD have the most advanced fabrication plants in the world. Intel in particular is known for its super-advanced fabs. It's actually why they have dominated AMD despite having worse products like the P4.
 

Zebo

Elite Member
Jul 29, 2001
39,398
19
81
I agree... Don't forget IBM though. Where do you think AMD gets their help from?
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
...also, latest-gen GPUs have something like 5-10x as many transistors as latest-gen CPUs. It's really hard to make a chip that big and that fast on a cutting-edge process. They're shrinking them as fast as they can.
 
darkswordsman17

Mar 11, 2004
23,444
5,852
146
GPU architectures change more drastically in a short period of time than CPU architectures typically do. AMD is still riding a pretty similar architecture to the original Athlon 64s released, what, three years ago? And look how long Intel rode out the P4/Netburst architecture; Core 2 is pretty similar to the Pentium M, which has been around for a few years now too. Over that same time we went from the FX series to the 6800s, to the 7x00s, and now the 8800s.
 

Roguestar

Diamond Member
Aug 29, 2006
6,045
0
0
Because GPUs aren't the same as CPUs. GPUs are more a conglomeration of highly threaded FPUs than a multipurpose processor like the ones we're used to. First we'll see advances in process, then in design, as we get more efficient with what we've got. Look at the evolution of Intel processors from the Pentium 4 to Core 2.
 

DrMrLordX

Lifer
Apr 27, 2000
23,004
13,110
136
Originally posted by: darkswordsman17
GPU architectures change more drastically in a short period of time than CPU architectures typically do. AMD is still riding a pretty similar architecture to the original Athlon 64s released, what, three years ago? And look how long Intel rode out the P4/Netburst architecture; Core 2 is pretty similar to the Pentium M, which has been around for a few years now too. Over that same time we went from the FX series to the 6800s, to the 7x00s, and now the 8800s.

He's not talking about architecture, he's talking about process. And the OP is right . . . AMD and Intel are shrinking their fab processes faster than Nvidia and ATI (or any of the memory manufacturers for that matter).
 
darkswordsman17

Mar 11, 2004
23,444
5,852
146
Originally posted by: DrMrLordX

He's not talking about architecture, he's talking about process. And the OP is right . . . AMD and Intel are shrinking their fab processes faster than Nvidia and ATI (or any of the memory manufacturers for that matter).

Uh, yeah, I knew that. My response was an explanation. You don't typically find Intel and AMD introducing new architectures on new processes, as even they need time to get used to them. Since GPU architectures change so rapidly, GPU makers are more likely to stick with a familiar manufacturing process than try to do two complex things at the same time. It's also why they take more incremental steps in process size, like the half-node 80nm instead of jumping straight to 65nm.
 

DrMrLordX

Lifer
Apr 27, 2000
23,004
13,110
136
On the contrary, while AMD has been leaning on K8 for some time, Intel has actually been juggling several different architectures simultaneously and producing them (and derivatives) on multiple different processes.

They've produced Netburst on the 180nm, 130nm, 90nm, and 65nm processes, in single- and dual-core variants (90nm and 65nm).
Pentium M was produced on 130nm (Banias), 90nm (Dothan), and 65nm (Yonah).
Core 2 has been produced on 65nm and will make the jump to 45nm (and Core 2 isn't that closely related to Pentium M, despite what some say).
Then there's Itanium, which has been produced on 180nm, 130nm, 90nm, and 65nm (provided that Montvale actually ships; has it yet?).

That doesn't include the other chips Intel produces, such as northbridge and southbridge chips for motherboard chipsets, of which they have made many, many revisions in the past few years. As far as I know, they've produced those chips on processes as small as 90nm; I don't know if they're producing nb/sb chips on the 65nm process yet.

So, Intel at the very least has been shrinking many different architectures simultaneously, essentially doing two pretty complex things at the same time (if not more). Most if not all of their die shrinks have coincided with at least minor architectural changes to every CPU architecture that went through one. I can't think of a single purely optical die shrink they've performed since the introduction of the Pentium 4, with the possible exception of Presler/Cedar Mill, which still had some modifications versus Smithfield/Prescott (the L2 cache sizes were different).

I don't think Nvidia and ATI can really use their constantly shifting architectures as an excuse for slow adoption of process shrinks. They introduce a new GPU once every 12-15 months, which, compared to what Intel has done, isn't that big a deal when you consider that a single GPU design finds its way into nearly every product over 2-3 refresh cycles.

Both companies (even now that AMD owns ATI) are saddled with R&D budgets much lower than Intel's, so they get less work done over time.
 
darkswordsman17

Mar 11, 2004
23,444
5,852
146
Originally posted by: DrMrLordX
On the contrary, while AMD has been leaning on K8 for some time, Intel has actually been juggling several different architectures simultaneously and producing them (and derivatives) on multiple different processes.

They've produced Netburst on the 180nm, 130nm, 90nm, and 65nm processes, in single- and dual-core variants (90nm and 65nm).
Pentium M was produced on 130nm (Banias), 90nm (Dothan), and 65nm (Yonah).
Core 2 has been produced on 65nm and will make the jump to 45nm (and Core 2 isn't that closely related to Pentium M, despite what some say).
Then there's Itanium, which has been produced on 180nm, 130nm, 90nm, and 65nm (provided that Montvale actually ships; has it yet?).

That doesn't include the other chips Intel produces, such as northbridge and southbridge chips for motherboard chipsets, of which they have made many, many revisions in the past few years. As far as I know, they've produced those chips on processes as small as 90nm; I don't know if they're producing nb/sb chips on the 65nm process yet.

So, Intel at the very least has been shrinking many different architectures simultaneously, essentially doing two pretty complex things at the same time (if not more). Most if not all of their die shrinks have coincided with at least minor architectural changes to every CPU architecture that went through one. I can't think of a single purely optical die shrink they've performed since the introduction of the Pentium 4, with the possible exception of Presler/Cedar Mill, which still had some modifications versus Smithfield/Prescott (the L2 cache sizes were different).

I don't think Nvidia and ATI can really use their constantly shifting architectures as an excuse for slow adoption of process shrinks. They introduce a new GPU once every 12-15 months, which, compared to what Intel has done, isn't that big a deal when you consider that a single GPU design finds its way into nearly every product over 2-3 refresh cycles.

Both companies (even now that AMD owns ATI) are saddled with R&D budgets much lower than Intel's, so they get less work done over time.

That's my whole point right there (the part about die shrinks coinciding with only minor architectural changes). Their chips have mostly been fairly slight evolutions. Core 2 was a new architecture after how many years of Netburst? They had 65nm Pentium Ds (the Presler-based 900 series) before they had Core 2 on 65nm. And their multi-core chips have so far pretty much been two of their single cores packaged together, so it's not like that was a major change either. Yes, Core 2 is a pretty significant development, but it's nowhere near the change we saw going from the 7900s back in March to the 8800s in November.

Don't get me wrong, I'm not saying it isn't difficult to engineer even small changes into CPUs, but GPU development is more drastic (possibly because there's a lot more room to grow, since GPUs are relatively young compared to CPUs; at some point it will level off as well).
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
The reason is simple: the video card companies do not own their own fabs and do not have huge budgets to afford cutting-edge fabs. They usually rely on Taiwan Semiconductor Manufacturing Company (TSMC) or United Microelectronics Corporation (UMC), and those foundries are a little behind the curve on equipment.

It has nothing to do with architecture. Since GPUs typically have more transistors than CPUs, GPUs would actually benefit more from early die shrinks than CPUs do. The GPU companies simply do not have the resources to build or contract with such high-end facilities.
 

DrMrLordX

Lifer
Apr 27, 2000
23,004
13,110
136
Originally posted by: darkswordsman17

Don't get me wrong, I'm not saying it isn't difficult to engineer even small changes into CPUs, but GPU development is more drastic (possibly because there's a lot more room to grow, since GPUs are relatively young compared to CPUs; at some point it will level off as well).

My point is that they're modifying multiple architectures at the same time, which is in and of itself just as hard as developing a single new architecture every year or so. In fact, I'd say Intel is doing more than ATI or Nvidia has done, and is moving forward on process technology at a faster clip at the same time. I sincerely hope that ATI, having been bought by AMD (who in turn licenses process tech from IBM), will start shrinking dies faster than before. Of all the components that absolutely need die shrinks and other process refinements, GPUs are at the top of the list these days.

I would think the only valid excuse ATI and Nvidia have had for being less aggressive with process shrinks is their comparatively smaller R&D budgets.

Originally posted by: zephyrprime
The reason is simple: the video card companies do not own their own fabs and do not have huge budgets to afford cutting-edge fabs. They usually rely on Taiwan Semiconductor Manufacturing Company (TSMC) or United Microelectronics Corporation (UMC), and those foundries are a little behind the curve on equipment.

It has nothing to do with architecture. Since GPUs typically have more transistors than CPUs, GPUs would actually benefit more from early die shrinks than CPUs do. The GPU companies simply do not have the resources to build or contract with such high-end facilities.

That is a good point. It's easier to push the envelope on process tech when you build your own fabs. And, yes, GPUs desperately need to use less power than they do now!
 

Snooper

Senior member
Oct 10, 1999
465
1
76
Originally posted by: Matthias99
...also, latest-gen GPUs have something like 5-10x as many transistors as latest-gen CPUs. It's really hard to make a chip that big and that fast on a cutting-edge process. They're shrinking them as fast as they can.

Not quite:

Latest, greatest GPU out:
Nvidia 8800 GTX - 681M transistors

CPU comparison (Intel at least):
Core 2 Duo (65nm) - 291M transistors
Pentium D 900 (90nm) - 376M transistors
Core 2 Quad (65nm) - 582M transistors
Itanium 2 9050 (90nm) - 1.72 BILLION transistors.

All in all, I'll give you that the latest GPUs have more transistors than the latest desktop CPUs, but only by a factor of about two. And even that is completely wiped out when you look at the new quad-core CPUs out there.

And the GPU has a LONG way to go to catch up to the latest server CPU.
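A quick ratio check on those counts (a minimal sketch; the transistor figures are just the ones quoted above):

```python
# Transistor counts as quoted above, in millions.
counts_m = {
    "Core 2 Duo (65nm)": 291,
    "Pentium D 900 (90nm)": 376,
    "Core 2 Quad (65nm)": 582,
    "Itanium 2 9050 (90nm)": 1720,
}
gpu_m = 681  # GeForce 8800 GTX

for name, cpu_m in counts_m.items():
    print(f"8800 GTX vs {name}: {gpu_m / cpu_m:.2f}x")

# 8800 GTX vs Core 2 Duo (65nm): 2.34x
# 8800 GTX vs Pentium D 900 (90nm): 1.81x
# 8800 GTX vs Core 2 Quad (65nm): 1.17x
# 8800 GTX vs Itanium 2 9050 (90nm): 0.40x
```

So the "5-10x" estimate is well off for this generation; roughly 2.3x against the fastest desktop dual core is the honest number.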
 

obeseotron

Golden Member
Oct 9, 1999
1,910
0
0
CPUs are fully custom designs, change very little for years, and are fabbed by their designers. GPUs are partially custom but contain lots of off-the-shelf cells, are only relevant for a year or two, have to be outsourced to fabs, and generally come in much larger packages than CPUs.

And just for the record, the Core 2 Quad is two chips on one package, not a single 582M-transistor chip, and that Itanium is something like 80% cache.