[FUDZILLA] AMD's initial 28nm lineup (SI) will be a die-shrink only


badb0y

Diamond Member
Feb 22, 2010
4,015
30
91
Isn't this how all previous ATi cards went, though?
1.) Shrink Die
2.) Double the Shaders
3.)???
4.) Win
 
Feb 19, 2009
10,457
10
76
That would suck.. for everyone. In a few years DX11 will be heavily used, and devs will still have to make games that scale from the bottom all the way to the top, across multiple DX generations, with limited geometry and shader effects.. just too much work.
 

KingstonU

Golden Member
Dec 26, 2006
1,405
16
81
I just keep seeing more and more reasons to not bother upgrading my 3 year old rig. Never could have imagined being in this position.
 

sandorski

No Lifer
Oct 10, 1999
70,861
6,396
126
Lower Heat, Power Consumption, Noise, Cost. Higher Clocks and more gadgets (Shaders/etc.). It should still offer a lot, assuming it is just a shrink.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
Isn't this how all previous ATi cards went, though?
1.) Shrink Die
2.) Double the Shaders
3.)???
4.) Win

Yes but that leaves them open to being hammered by the new arch of their rival. nV did not capitalize on this last time with GT200 (even though it was still the faster chip), we will see if they can do it with Kepler.
 
Feb 19, 2009
10,457
10
76
I wouldn't count on lower TDP (power/heat). They will just cram more into the chips and it ends up being just as power hungry (or more).. but obviously perf/watt would go up by a lot.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
Yes but that leaves them open to being hammered by the new arch of their rival. nV did not capitalize on this last time with GT200 (even though it was still the faster chip), we will see if they can do it with Kepler.

I honestly think the single biggest advantage 28nm will have over 40nm is lower TDP. Yes, they're going to perform admirably, but the lower TDP vs. equally performing 40nm will probably be the biggest initial improvement.

I've been hoping that deep down Kepler is as much, if not more, focused on improving chip efficiency as it is on slapping on 2x the cores, improving the memory controller, and calling it good. Not that the newer Fermi iterations aren't much improved in efficiency over their predecessors; I'd just like to see Nvidia make a 512-core chip with a 256-bit memory controller on 28nm that can be clock-for-clock 15% faster than the GTX 580.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Evolution of AMD's Graphics Core, and Preview of Graphics Core Next
Eric Demers, AMD Corporate Vice President and CTO, Graphics Division

GPU shader cores have been evolving frequently and significantly at AMD. We introduced our common shader core in 2007 with the HD 2000 series. This introduced the unified VLIW-5 instruction set that we've had since. In late 2010, we introduced the first significant departure from this core architecture, the symmetrical VLIW-4 used in the HD6900 series of products. In this presentation, we will review that evolution, but also present an overview of the next generation of AMD cores under development. This next generation of cores will propel forward its capabilities and continue this evolution.

http://developer.amd.com/afds/pages/keynote.aspx
 

tincart

Senior member
Apr 15, 2010
630
1
0
Yes but that leaves them open to being hammered by the new arch of their rival. nV did not capitalize on this last time with GT200 (even though it was still the faster chip), we will see if they can do it with Kepler.

New arch + new process technology. That worked really well with Fermi.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Was anyone expecting AMD to redo its architecture right after they just released a new one, one that hasn't even debuted in the mid-range and low-end/entry segments yet? I'd hope AMD would tweak the VLIW4 architecture a bit with the new release, as manufacturers do each release. As others mentioned, 28nm is more about power consumption, clock speeds, and putting more on the die.
 

lifeblood

Senior member
Oct 17, 2001
999
88
91
There are two things to consider that make this release different from other releases:

1. The 40nm problems with Fermi. By building a "pathfinder" card, the 4770, ATI was able to learn critical lessons about 40nm. nVidia got seriously burnt by bringing out a new architecture on a new process. I doubt AMD or nVidia is going to trust TSMC to take such a risk again. (BTW, this is not a ding on nVidia. Had 40nm not had such problems they would have done far better imho).

2. SI is 1/2 of NI, so how much do they really need to change? Both AMD and nVidia have good cards out currently; it's the software that's fallen behind. They really only need to tweak what they've got.

I would expect to see either another initial "pathfinder" card or just variations of current architectures. I just don't see either company taking a big risk after all the troubles TSMC has been having as of late.

Of course, one may take a huge gamble and do something really radical. If it worked out it could pay huge dividends. I just doubt either will given TSMC's recent problems.
 

busydude

Diamond Member
Feb 5, 2010
8,793
5
76
I thought Nvidia also built a "pathfinder"-type card.. it's the GT 220/210, IIRC.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
I thought Nvidia also built a "pathfinder"-type card.. it's the GT 220/210, IIRC.

You need a reasonably sized chip for a pathfinder.
GT210/220 are tiny and don't tell you enough, really, AFAIK.

http://www.anandtech.com/show/2937/9
NVIDIA however picked a smaller die. While the RV740 was a 137mm2 GPU, NVIDIA’s first 40nm parts were the G210 and GT220 which measured 57mm2 and 100mm2. The G210 and GT220 were OEM-only for the first months of their life, and I’m guessing the G210 made up a good percentage of those orders. Note that it wasn’t until the release of the GeForce GT 240 that NVIDIA made a 40nm die equal in size to the RV740. The GT 240 came out in November 2009, while the Radeon HD 4770 (RV740) debuted in April 2009 - 7 months earlier.
 

cusideabelincoln

Diamond Member
Aug 3, 2008
3,275
46
91
Was anyone expecting AMD to redo its architecture right after they just released a new one, one that hasn't even debuted in the mid-range and low-end/entry segments yet? I'd hope AMD would tweak the VLIW4 architecture a bit with the new release, as manufacturers do each release. As others mentioned, 28nm is more about power consumption, clock speeds, and putting more on the die.

I would expect a tweaked VLIW4 design, but no big changes. They'll add more shaders, but they may not double up++ on them if they want to keep the die size more manageable.
 

blckgrffn

Diamond Member
May 1, 2003
9,687
4,348
136
Actually that is being optimistic.

The market is inundated with midrange cards that perform around GTX280-GTX285 performance with DX11 support.

http://www.anandtech.com/bench/Product/166?vs=180

That's a silly comparison - to the point where I would say you are both correct. The 460 is a midrange part with GTX 285 performance w/DX11. :)

I have a GTX 275 that needs a good home - and I don't feel guilty selling it at all, given that it holds its own nicely against some cards that still retail close to $200 for those who don't hunt deals.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
In theory, since 28nm is a full node, they could double the shader count and keep the same die size Cayman has.
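The full-node claim above can be sanity-checked with simple scaling arithmetic. A sketch, assuming ideal area scaling (which real processes never quite achieve) and using the commonly cited HD 6970 figures for Cayman (~389 mm², 1536 shaders):

```python
# Back-of-the-envelope: ideal area scaling from a 40nm to a 28nm process.
# Real shrinks scale worse than this, so treat the result as an upper bound.
old_node = 40.0  # nm
new_node = 28.0  # nm

# Die area for a fixed design scales with the square of the linear feature size.
area_ratio = (new_node / old_node) ** 2
print(f"Ideal area ratio: {area_ratio:.2f}")  # ~0.49, i.e. about half the area

# Cayman (HD 6970) is roughly 389 mm^2 with 1536 shaders.
cayman_die_mm2 = 389
cayman_shaders = 1536

# Shader budget if the whole chip scaled ideally into the same die area.
shaders_same_die = int(cayman_shaders / area_ratio)
print(f"Shaders in the same die area at 28nm: ~{shaders_same_die}")
```

Since the ideal ratio is almost exactly 0.5, "double the shaders at the same die size" is the theoretical best case; uncore blocks like memory PHYs shrink poorly, so the real number would land somewhat lower.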
 

Mopetar

Diamond Member
Jan 31, 2011
8,528
7,790
136
Evolution of AMD's Graphics Core, and Preview of Graphics Core Next
Eric Demers, AMD Corporate Vice President and CTO, Graphics Division

GPU shader cores have been evolving frequently and significantly at AMD. We introduced our common shader core in 2007 with the HD 2000 series. This introduced the unified VLIW-5 instruction set that we've had since. In late 2010, we introduced the first significant departure from this core architecture, the symmetrical VLIW-4 used in the HD6900 series of products. In this presentation, we will review that evolution, but also present an overview of the next generation of AMD cores under development. This next generation of cores will propel forward its capabilities and continue this evolution.

http://developer.amd.com/afds/pages/keynote.aspx

tincart said:
New arch + new process technology. That worked really well with Fermi.

Have you considered that both points of view can be correct? AMD can preview their next-generation core design but still not use it in the 7000 series.
 

Martimus

Diamond Member
Apr 24, 2007
4,490
157
106
Isn't that pretty much the way all of AMD's first forays into a new process have been for the last 4 years or so?
 

formulav8

Diamond Member
Sep 18, 2000
7,004
523
126
SickBeast said:
I would be very surprised if this rumour is true, though. I've read that AMD initially wanted to release a "full" Cayman GPU on a smaller process but they ran into manufacturing problems. Really that's just marketing spin for "we didn't time it right".


No, the marketing spin really says: "We had the chip all set to be on the 32nm process, with 1920 shaders and all. But our crappy chip maker couldn't give us a decent 32nm process, so we were forced to use the current-generation 40nm process for our new-generation GPUs."

That's much more accurate...
 

Topweasel

Diamond Member
Oct 19, 2000
5,437
1,659
136
It's sad too, the HD4770 "40nm pipecleaner" debuted 2yrs ago. (it was May 2009, wasn't it?)

If 28nm were sticking to a 2yr node-cadence then we should have seen a 28nm pipecleaner product from AMD this spring.

The more 28nm becomes a 3yr node-cadence from 40nm the less impressed I am going to be with the whole "be teh excited cuz its HKMG y'all" process tech angle.

Both Nvidia and ATI had to drop 32nm designs due to supplier issues. That's what was supposed to come out instead of the 6K and 5GTX series.
 

SHAQ

Senior member
Aug 5, 2002
738
0
76
We have to settle for 580's and 6870's for another year? Meh. I guess the die shrink parade is over. We'll save a fortune in upgrades I suppose.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
Both Nvidia and ATI had to drop 32nm designs due to supplier issues. That's what was supposed to come out instead of the 6K and 5GTX series.

I am curious to know what nvidia would have released on 32nm. I think the 580/570 were what the 480/470 were meant to be. But with the issues nV had with 40nm, and not wanting to wait any longer to get DX11 cards on the market, they released what they had in the 480/470 and spent the next six months getting it right. Then they released the 580/570.

Perhaps Kepler was what they had planned for 32nm and moved it to 28nm.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
We have to settle for 580's and 6870's for another year? Meh. I guess the die shrink parade is over. We'll save a fortune in upgrades I suppose.

We can't say that for certain.

"The folks at DigiTimes say they've gotten word from sources at graphics card makers that the Radeon HD 7000 series, a.k.a. Southern Islands, will hit mass production next month." - TechReport
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
No, the marketing spin really says: "We had the chip all set to be on the 32nm process, with 1920 shaders and all. But our crappy chip maker couldn't give us a decent 32nm process, so we were forced to use the current-generation 40nm process for our new-generation GPUs."

That's much more accurate...
Had the 32nm capacity been more mature, they would have had no problem at all releasing the "full" Cayman.

I'll bet this 32nm issue is also behind the Bulldozer delay, and perhaps Llano as well.