AMD about to make a monumental leap in graphics?

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
http://vr-zone.com/articles/amd-s-l...entire-platforms-till-2011/7970.html?doc=7970

Apparently a leaked AMD roadmap shows a 32nm mobile graphics chip called "Manhattan" coming out in 2010. Since Evergreen is also marked as a 2010 product, it seems the roadmap may be aligned with quarterly reporting (so Evergreen profits get reported in 2010), meaning a 32nm mobile graphics chip could be seen from AMD by next summer.
I'd imagine Manhattan is a 32nm shrink of Evergreen.

And then in 2011 (so between Fall 2010 and Summer 2011), AMD has a 32nm "N. Islands" graphics family. This basically confirms that AMD's next generation of graphics chips will be made at GlobalFoundries, and if their 32nm fab works out well, it could be a very large improvement over current 40nm graphics chips. Much larger than the jump from the 4xxx to the 5xxx series, and it could also mean an IGP that's competitive with current midrange ($50-$70) parts.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Considering that the process node cadence for "full" nodes is ~2yrs, and the cadence when including "half" nodes is ~1yr, and that the first 40nm IC from AMD was RV740, which debuted on April 28, 2009, is it really all that surprising to anticipate the next half-node transition (40nm->32nm) happening in 2010, essentially one year from the introduction of 40nm?

Where's the monumental leap? This is just same-old same-old node cadence unless I am missing something here.
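
Just to make the arithmetic explicit, here's a rough sketch of that cadence argument in a few lines of Python (the one-year half-node cadence is an approximation for illustration, not anything from a roadmap):

Code:
from datetime import date, timedelta

# Back-of-the-envelope node-cadence sketch: with half nodes included, some
# node transition lands roughly every year, so 40nm in April 2009 points to
# a 32nm-class part somewhere around 2010.
rv740_debut = date(2009, 4, 28)          # first 40nm AMD GPU (RV740)
half_node_cadence = timedelta(days=365)  # ~1 year between half-node steps (approximation)

projected_32nm = rv740_debut + half_node_cadence
print("40nm debut:", rv740_debut, "-> projected 32nm window: ~", projected_32nm.year)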
 

magreen

Golden Member
Dec 27, 2006
1,309
1
81
Considering that the process node cadence for "full" nodes is ~2yrs, and the cadence when including "half" nodes is ~1yr, and that the first 40nm IC from AMD was RV740, which debuted on April 28, 2009, is it really all that surprising to anticipate the next half-node transition (40nm->32nm) happening in 2010, essentially one year from the introduction of 40nm?

Where's the monumental leap? This is just same-old same-old node cadence unless I am missing something here.
+1
That's how I read it as well.

IDC... the new forums ganked your elite status!
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
IDC... the new forums ganked your elite status!

I feel so naked... dang it, where the heck did the "laugh" smilie go?

Yeah now I gotta go get this ego5000+ wet-ware chip removed from my skull now that I'm no longer l33t...oh well, it was a good ride. If I had to do it all over again I'd imbibe more on the coke and hookers, but that's just me personally ;)
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Given how crap 40nm is, I'm not surprised they are going to be pushing ahead to 32nm.
It might also seem closer than usual because wide 40nm availability is (arguably) behind schedule due to problems.

It seems like 32nm is going to be here really quickly, but in fact 40nm is just late to the party.

Given that TSMC is the only 40nm fab (AFAIK), anyone who can compete on process technology, such as GF, has a great opportunity to scoop up business amid the 40nm problems if they can get their 32nm working well and ramp up quickly.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
Idontcare, I've asked the forum administration to reinstate your elite title. :)
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Idontcare, I've asked the forum administration to reinstate your elite title. :)

Even if it doesn't happen (the admins may decide against it), thank you very much for the consideration and effort :) :$ (<- that's a "blush" smilie according to the forum, but it reminds me of what my 2yr old looks like when he's missed out on yet another opportunity to use the "big kid potty")
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Considering that the process node cadence for "full" nodes is ~2yrs, and the cadence when including "half" nodes is ~1yr, and that the first 40nm IC from AMD was RV740, which debuted on April 28, 2009, is it really all that surprising to anticipate the next half-node transition (40nm->32nm) happening in 2010, essentially one year from the introduction of 40nm?

Where's the monumental leap? This is just same-old same-old node cadence unless I am missing something here.


Well, one:
It's a full node.
TSMC's 40nm is everyone else's 45nm node. 65nm to 45nm is a full-node jump, and 45nm to 32nm is another full-node jump.
Two:
They're switching to GlobalFoundries earlier than I had heard. GlobalFoundries (and the whole IBM alliance thing) is generally superior to TSMC, even on the same process node. Virtually all current 45nm/40nm nodes are pretty crap, but IBM's upcoming 32nm is supposed to be quite good, ergo you'd expect GF's would be also.

So the potential exists not just for a whole node jump (which alone could give a 7900GTX+ to 8800GTX+ type jump), but for greater than that. Of course, GF's 32nm could turn out to be crap (and most likely it's the bulk node, not SOI), but the potential exists for 4x as many transistors and 2x the clock speed.
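
For reference, here's a quick back-of-the-envelope sketch of what ideal geometric scaling gives for a single 45nm-class -> 32nm shrink (textbook numbers only; real density and clock gains depend entirely on how the process turns out):

Code:
# Idealized scaling for a single full-node shrink (45nm-class -> 32nm).
# Real processes rarely hit these numbers; treat this as an upper bound.
old_node_nm, new_node_nm = 45.0, 32.0

linear_shrink = new_node_nm / old_node_nm              # ~0.71x feature size
ideal_density_gain = (old_node_nm / new_node_nm) ** 2  # ~2.0x transistors per mm^2

print("Linear shrink: %.2fx, ideal density gain: %.2fx" % (linear_shrink, ideal_density_gain))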
 

v8envy

Platinum Member
Sep 7, 2002
2,720
0
0
So what we have here is about one quarter for Fermi on 40nm before it has to transition to 32nm or face competition with a 58XX on 32nm if the GT300 arrives in late March? Ouch. That's not a very long time to pay back R&D. I'm a bit surprised they don't just scrap the whole thing and shoot for 32nm right out of the gate.

No wonder the hype is all about GPU computing. For the current killer app (gaming) the success-through-no-fallback-strategy approach may backfire in a big way.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
So what we have here is about one quarter for Fermi on 40nm before it has to transition to 32nm or face competition with a 58XX on 32nm if the GT300 arrives in late March? Ouch. That's not a very long time to pay back R&D. I'm a bit surprised they don't just scrap the whole thing and shoot for 32nm right out of the gate.

No wonder the hype is all about GPU computing. For the current killer app (gaming) the success-through-no-fallback-strategy approach may backfire in a big way.

Nvidia likely wouldn't be able to make that transition. TSMC's 32nm (28nm) will be behind GF's (and may be inferior), and GF only recently started courting external customers, so it's very unlikely any Nvidia products could transition to GF for a while.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Well, one:
It's a full node.
TSMC's 40nm is everyone else's 45nm node. 65nm to 45nm is a full-node jump, and 45nm to 32nm is another full-node jump.
Two:
They're switching to GlobalFoundries earlier than I had heard. GlobalFoundries (and the whole IBM alliance thing) is generally superior to TSMC, even on the same process node. Virtually all current 45nm/40nm nodes are pretty crap, but IBM's upcoming 32nm is supposed to be quite good, ergo you'd expect GF's would be also.

So the potential exists not just for a whole node jump (which alone could give a 7900GTX+ to 8800GTX+ type jump), but for greater than that. Of course, GF's 32nm could turn out to be crap (and most likely it's the bulk node, not SOI), but the potential exists for 4x as many transistors and 2x the clock speed.

Well, if that's all it takes to make unqualified claims of monumental progress, then why not just go for the gusto and claim it's actually two or three node equivalents while you are at it? Why hold back and call it a measly single node jump?

Let me give it a shot...

"Holy fricken shiza, AMD is set to make an astronomically insane jump in graphics as they transition from what is basically 500nm technology (TSMC's 40nm is the same as everyone else's 0.5um tech) to what we all know and expect to be essentially 1nm technology (GF's 32nm hitherto non-existent bulk-Si CMOS process tech is essentially better than the shit Intel will be pimping in 2022 for 4nm yo), IT IS TRUE I saw it on a japanese leaked internal marketing slide dudez!"

(I'm new at this, hope I did the whole hype thing correctly)

There, did I succinctly capture the gist of your argument against the staid, yet factual, contents of my prior post?
 

nOOky

Diamond Member
Aug 17, 2004
3,223
2,276
136
A monumental leap would be for the supply of their new cards to be adequate to the demand, and hold the price down to the level at which it was introduced...
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
A monumental leap would be for the supply of their new cards to be adequate to the demand, and hold the price down to the level at which it was introduced...

Trouble is, the HD 5850 is similar in value to the 8800 GT (when it was released)... and ATI is a small company that isn't used to grabbing market share.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
...and that's why it would be a monumental leap!

Yep.

I guess ATI didn't order enough silicon because they expected GT300 to be out around the same time.

But seriously, I am impressed with their HD 5850 (overclocks like mad, yet even at stock speeds it is faster than a GTX 285... all for an MSRP of $259.99).

Too bad ATI didn't make enough of them because I could see this card staying around for a long time (like the 8800 GT does to this day)
 

nOOky

Diamond Member
Aug 17, 2004
3,223
2,276
136
Yep.

I guess ATI didn't order enough silicon because they expected GT300 to be out around the same time.

But seriously, I am impressed with their HD 5850 (overclocks like mad, yet even at stock speeds it is faster than a GTX 285... all for an MSRP of $259.99).

Too bad ATI didn't make enough of them because I could see this card staying around for a long time (like the 8800 GT does to this day)

Indeed. I wanted one, couldn't find one due to being unlucky and not having the time to keep looking. I settled for a 4890. I guess that'll hold me for a while, but I really wanted the 5850. And of course the median price is now $289 on these hard-to-find cards.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Well, if that's all it takes to make unqualified claims of monumental progress, then why not just go for the gusto and claim it's actually two or three node equivalents while you are at it? Why hold back and call it a measly single node jump?

Let me give it a shot...

"Holy fricken shiza, AMD is set to make an astronomically insane jump in graphics as they transition from what is basically 500nm technology (TSMC's 40nm is the same as everyone else's 0.5um tech) to what we all know and expect to be essentially 1nm technology (GF's 32nm hitherto non-existent bulk-Si CMOS process tech is essentially better than the shit Intel will be pimping in 2022 for 4nm yo), IT IS TRUE I saw it on a japanese leaked internal marketing slide dudez!"

(I'm new at this, hope I did the whole hype thing correctly)

There, did I succinctly capture the gist of your argument against the staid, yet factual, contents of my prior post?

Assuming TSMC's and GF's processes were equivalent, it's still a full node jump, instead of the half nodes we normally see. (Well, it's a half node jump in labels, but from what I've read of TSMC's 40nm process, its characteristics are closer to a 45nm process, but maybe that's typical of half-node shrinks?)
And historically, GF's/AMD's process nodes have been better than TSMC's.
It also puts AMD/ATI significantly ahead of Nvidia and at least on par with Intel for launching a 32nm GPU.
It could also go horribly wrong, but AMD's roadmaps already have a 6-9 month production buffer of mobile GPUs before they attempt a more complex desktop part, so it would seem the only way major issues would pop up is if GF's 32nm process isn't as far along as they say.
 

deimos3428

Senior member
Mar 6, 2009
697
0
0
The big question is, what constitutes a "monumental leap"?

Usually the term is reserved for a dramatic shift in mainstream usage or thinking. Things like language, fire, landing on the moon, or Tivo. If it actually caught on and became the norm rather than an enthusiast-grade exception, something like Eyefinity might qualify.

More likely, AMD is about to continue to put out incrementally better video cards that do little more than output pixels to a screen.

Just to cover all the bases, "I, for one, welcome our self-aware video-card masters." Could happen.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
The big question is, what constitutes a "monumental leap"?

Usually the term is reserved for a dramatic shift in mainstream usage or thinking. Things like language, fire, landing on the moon, or Tivo. If it actually caught on and became the norm rather than an enthusiast-grade exception, something like Eyefinity might qualify.

More likely, AMD is about to continue to put out incrementally better video cards that do little more than output pixels to a screen.

Just to cover all the bases, "I, for one, welcome our self-aware video-card masters." Could happen.

OK, not monumental then, but a leap on par with previous graphics cards that caused a large shift in market share, such as the 9700 Pro and the 8800 GTX.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
IDC, you mad bra?

Yeah, I was needlessly pissy there, wasn't I? My bad, Fox5; not sure why I'm being so easily irritated over this subject matter. Sorry about that, I hope my antagonistic post did not give you much angst.

Assuming TSMC's and GF's processes were equivalent, it's still a full node jump, instead of the half nodes we normally see. (Well, it's a half node jump in labels, but from what I've read of TSMC's 40nm process, its characteristics are closer to a 45nm process, but maybe that's typical of half-node shrinks?)
And historically, GF's/AMD's process nodes have been better than TSMC's.
It also puts AMD/ATI significantly ahead of Nvidia and at least on par with Intel for launching a 32nm GPU.
It could also go horribly wrong, but AMD's roadmaps already have a 6-9 month production buffer of mobile GPUs before they attempt a more complex desktop part, so it would seem the only way major issues would pop up is if GF's 32nm process isn't as far along as they say.

If it is a 40nm TSMC -> 32nm TSMC transition, then it is a half node regardless of the node-labeling shenanigans that went on at TSMC with 45nm being relabeled 40nm... (because that resulted in the 40nm half-node being relabeled as their 32nm node).

If it is a 40nm TSMC -> 32nm GF transition, then there is very real potential for it to effectively be a full node transition IF GF's 32nm bulk-Si CMOS is comparable to TSMC's 28nm node.

Where things get into silly fantasy-land territory is the assumption that GF is going to come out of nowhere with a 32nm bulk-Si process node (after years and years of having no leading-edge experience with non-SOI development) that will trump everything TSMC's experienced process development team has been working on.

We can "talk up" GF's connections to IBM's fab eco-system all we like, but we need to be aware it is nothing but talk until GF actually releases a non-SOI process node. Only then can we start justifying assumptions as to the efficacy of GF's bulk-Si development team in comparison to that of TSMC's.

Personally I don't understand why AMD wouldn't want to migrate their GPUs to the same advanced leading-edge SOI-based process tech that they rely on to make low power-consumption, high-clockspeed CPUs.

It's not like the benefits of SOI stop when the IC is sold as a GPU, and who wouldn't want their HD6870 to have 40% lower power-consumption at any given clockspeed versus a comparable chip implemented in bulk-Si?

ARM reports 45-nm SOI test chip with 40% power-saving

ARM and Soitec collaborated to produce a test chip to demonstrate the power savings in a real silicon implementation with a well-known, industry-standard core. The goal was to produce a comparison of 45-nm SOI high-performance technology with bulk CMOS 45-nm low-power technology of the same product.

The results show that 45-nm high-performance SOI technology can provide up to 40 percent power savings and a 7 percent circuit area reduction compared to bulk CMOS low-power technology, operating at the same speed. This same implementation also demonstrated 20 percent higher operating frequency capability over bulk while saving 30 percent in total power in specific test applications.

http://www.eetimes.com/showArticle.jhtml?articleID=220301622

Now a 40nm bulk-Si TSMC -> 32nm SOI GF transition would truly be a monumental leap in my opinion; if that is what the leaked Japanese slide is alluding to, then I will gleefully rescind my prior criticisms of the thread title.
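
As a concrete illustration of what that quoted ~40% figure would mean for board power (the 150 W baseline below is a made-up placeholder, not an actual spec for any HD6870):

Code:
# Illustration only: applying the quoted ~40% SOI power saving (same clocks)
# to a hypothetical GPU board power. The 150 W baseline is invented for the example.
bulk_si_power_w = 150.0
soi_power_saving = 0.40

soi_power_w = bulk_si_power_w * (1.0 - soi_power_saving)
print("Bulk-Si: %.0f W -> SOI at the same clocks: %.0f W" % (bulk_si_power_w, soi_power_w))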
 

Ben90

Platinum Member
Jun 14, 2009
2,866
3
0
I lol'd at the above Holy fricken shiza post, I just had to go to my image shack to pull this back out...

idontcare.jpg
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
http://www.legionhardware.com/document.php?id=858

"Also planned are some 32nm graphics cards for the first quarter of 2010, as AMD start using this new manufacturing process with their low-end products, as they often do. The Radeon HD 5670 (Redwood XT), Radeon HD 5650 (Redwood PRO) and Radeon HD 5550 (Cedar XT) are all 32nm budget parts that should run extremely cool and consume very little power."

What does everyone make of this? How many stream processors for Redwood?
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
http://www.legionhardware.com/document.php?id=858

"Also planned are some 32nm graphics cards for the first quarter of 2010, as AMD start using this new manufacturing process with their low-end products, as they often do. The Radeon HD 5670 (Redwood XT), Radeon HD 5650 (Redwood PRO) and Radeon HD 5550 (Cedar XT) are all 32nm budget parts that should run extremely cool and consume very little power."

What does everyone make of this? How many stream processors for Redwood?
32nm is canceled. I guess they will go with 40nm.