RV870 is NOT the codename for next-gen ATI parts


SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Originally posted by: SickBeast
Based on the estimated die size of the GPUs shown today, I'm not holding my breath for a performance monster. What we will likely have is a very good midrange GPU. I have a feeling it will match a 4890, or else beat it by 20-30%, while being the same price or cheaper ($150-200).

I'm sure that the economic downturn is going to lead to cheaper GPUs with fewer features.

Going from ~260mm2 (Radeon 4890) to ~180mm2 would be more in line with a straight process shrink of a current chip than with a shrink plus adding more 'stuff' to that chip, I would think. The Radeon 4770 die size is just under 140mm2 for comparison. If the 180mm2 number is to be believed to be in the ballpark, then I guess we'd have to wonder what they can fit in another 40mm2 of silicon. I'm thinking that the picture AT is estimating from is not truly a picture of the 'Evergreen' or 'RV870' wafer. Just my speculative (and quite possibly incorrect) $.02.
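As a back-of-the-envelope check on that reasoning (a sketch only: the die sizes are the thread's estimates, and ideal area scaling is assumed, which real designs never quite achieve):

    # Ideal area scaling from 55nm to 40nm goes with the square of the
    # linear feature size.
    shrink_factor = (40 / 55) ** 2                        # ~0.53

    rv790_die_mm2 = 260          # Radeon 4890, the thread's estimate
    pure_shrink_mm2 = rv790_die_mm2 * shrink_factor       # ~137mm2

    estimated_die_mm2 = 180      # AT's estimate from the wafer shot
    headroom_mm2 = estimated_die_mm2 - pure_shrink_mm2    # ~43mm2

    print(f"Pure 40nm shrink of a 4890-class chip: ~{pure_shrink_mm2:.0f}mm2")
    print(f"Left over for new 'stuff' at 180mm2: ~{headroom_mm2:.0f}mm2")

Which lines up with the "another 40mm2 of silicon" remark above, and with the ~140mm2 Radeon 4770 as a reference point.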
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Well, I think that optimization for 1080P resolution is huge right now. We will need more power going forward. ATM there aren't any games that truly need anything better than what we have; you are right.
 

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
It's probably all a big smokescreen. The only thing I know for sure right now is that I'm getting excited :)
 

Overlord Keeper

Junior Member
Jul 31, 2009
1
0
0
Every new generation of the AMD HD series has nearly doubled the performance of the previous one within the same bracket. For instance, the 3870 was remarkable, yet it is now at the same level as the 4670. So I assume the 5670 will perform at the same pace as, or a bit above, the 4870. The 4770 is the next chip in line, but imagine that same power optimized the way the RV770 was into the RV790, and that will still be behind the full power of the 5870. All I can say is Nvidia had better have something up its sleeve just to keep up. There are theories about the GTX395 running quad GPUs, yet even that will fall short of the 5870x2.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
I'm going to have to agree with the idea that we really don't need more power at this point.
Even with Crysis, so long as you are running 1920x1200 or lower, there is enough GPU horsepower available to play everything out right now. The exception is 2560x1600, but not many folks are running that.

Yes, it would be nice to have 285 SLI or 4890 CF power in a single-chip card, but it's more a convenience than a necessity.
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
Originally posted by: Overlord Keeper
Every new generation of the AMD HD series has nearly doubled the performance of the previous one within the same bracket. For instance, the 3870 was remarkable, yet it is now at the same level as the 4670. So I assume the 5670 will perform at the same pace as, or a bit above, the 4870. The 4770 is the next chip in line, but imagine that same power optimized the way the RV770 was into the RV790, and that will still be behind the full power of the 5870. All I can say is Nvidia had better have something up its sleeve just to keep up. There are theories about the GTX395 running quad GPUs, yet even that will fall short of the 5870x2.

That's a bit inaccurate. The 3870 wasn't even close to two times as fast as the HD 2900XT. The 4670 is slower than the 3870, not faster.

Of course nVidia is going to have something up their sleeve to compete; why wouldn't they? nVidia doesn't intend to use their GT200 to compete with RV870.
 

faxon

Platinum Member
May 23, 2008
2,109
1
81
Originally posted by: Grooveriding
I'm going to have to agree with the idea that we really don't need more power at this point.
Even with Crysis, so long as you are running 1920x1200 or lower, there is enough GPU horsepower available to play everything out right now. The exception is 2560x1600, but not many folks are running that.

Yes, it would be nice to have 285 SLI or 4890 CF power in a single-chip card, but it's more a convenience than a necessity.

I'm going to have to disagree on that. While the power is available, sure, many places are still selling it at prices that many find unacceptable. Far Cry 2 and Crysis still lag at 1920x1200 with any AA options, and that's without even turning the textures all the way up on some cards (HD4850 & 9800GTX 512MB, for example). Sure, 4890s can usually be had for $180, but a new generation of hardware will bring the price of that level of performance down even lower, to the point that you could fit a GTX285's performance on a single-slot card with one PCI-e connector in a cramped lanbox that fits in a standard backpack. It's not just about availability, but what you pay for it.
 

FalseChristian

Diamond Member
Jan 7, 2002
3,322
0
71
I think that because ATI will be first out of the gate with a DirectX 11 GPU, nVidia will see how fast a Radeon HD 5870 is and compensate accordingly.
 

A5

Diamond Member
Jun 9, 2000
4,902
5
81
Originally posted by: FalseChristian
I think that because ATI will be first out of the gate with a DirectX 11 GPU, nVidia will see how fast a Radeon HD 5870 is and compensate accordingly.

The only way they can do that is pricing. They're way too deep into the engineering cycle to go change any of the basic specs.
 

Elfear

Diamond Member
May 30, 2004
7,167
824
126
Originally posted by: Grooveriding
I'm going to have to agree with the idea that we really don't need more power at this point.
Even with Crysis, so long as you are running 1920x1200 or lower, there is enough GPU horsepower available to play everything out right now. The exception is 2560x1600, but not many folks are running that.

Yes, it would be nice to have 285 SLI or 4890 CF power in a single-chip card, but it's more a convenience than a necessity.

I'd disagree as well. I'm still on the edge of playability in some sections of Crysis@1920x1200, and that's using a mixture of High and VH settings in DX9 with 2xAA (cards@970/1100). Mo powuh please!
 

MODEL3

Senior member
Jul 22, 2009
528
0
0
Originally posted by: SlowSpyder
Going from ~260mm2 (Radeon 4890) to ~180mm2 would be more in line with a straight process shrink of a current chip than with a shrink plus adding more 'stuff' to that chip, I would think. The Radeon 4770 die size is just under 140mm2 for comparison. If the 180mm2 number is to be believed to be in the ballpark, then I guess we'd have to wonder what they can fit in another 40mm2 of silicon. I'm thinking that the picture AT is estimating from is not truly a picture of the 'Evergreen' or 'RV870' wafer. Just my speculative (and quite possibly incorrect) $.02.

Yes, when I first saw the pictures of the die, I thought (if the die is for the high-end part, and if the 180mm2 estimate is correct) that this is a DX11 "4770" or a DX11 "4890" part.

But ATI can change so many things in the GPU design that it is very difficult for someone to speculate.
I will give 2 examples:

1. ATI has been talking about a possible multi-core GPU direction (with shared memory, etc.) since the days of XENOS (2005), so it is very hard to speculate about what performance level and die size each core would land at. This scenario has extremely little chance of happening; I speculate that DX12 designs will be the first to take this route.

2. This scenario makes much more sense. They could easily alter the SP design in order to achieve higher clock speeds, like Nvidia did. Sure, ATI's shader processors are a little more complex than Nvidia's, but they are simple enough for ATI to decouple them from the texture units and achieve something like 2X the clock speed (Nvidia's G92b SPs run at 2.5X the main GPU clock).

For a design like RV770, this means 400 SPs could be left out (nearly 200 million transistors), which gives something like a 20% reduction in die size (around 210mm2 at 55nm); see the sketch below.

So it is very hard to speculate, because the possibilities are far too many to come to a reliable conclusion.
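As a quick numeric check of scenario 2 (a sketch under the post's own assumptions: the ~200 million transistors for 400 SPs is the post's estimate, the RV770 figures are approximate public numbers, and die area is assumed to scale linearly with transistor count):

    # Scenario 2: run the SPs at ~2X clock so half as many deliver the
    # same throughput, then remove the other half from the die.
    rv770_sps = 800
    rv770_transistors_m = 956    # approximate public figure, millions
    rv770_die_mm2 = 260          # ~4890-class die at 55nm

    clock_multiplier = 2.0
    sps_kept = rv770_sps / clock_multiplier      # 400 SPs at 2X clock
    removed_transistors_m = 200                  # the post's estimate for 400 SPs

    reduction = removed_transistors_m / rv770_transistors_m   # ~21%
    new_die_mm2 = rv770_die_mm2 * (1 - reduction)             # ~206mm2

    print(f"Die size reduction: ~{reduction:.0%}")
    print(f"Hypothetical hot-clocked RV770-class die: ~{new_die_mm2:.0f}mm2")

which comes out close to the "around 210mm2 at 55nm" figure above.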



Originally posted by: MarcVenice
I had an interview with Sasa today...

Who is Sasa?

Originally posted by: MarcVenice
Pfff, it's Theo Valich, former Inquirer 'journalist'. Not on my to-do list; their wild speculation kinda annoys me though.

Who is theo valich?

Originally posted by: Idontcare
At least you guys at ABT...

What does ABT stand for?

Originally posted by: Idontcare
Given the nature of TSMC's 40nm yield issues (it's a parametric problem, not a functional D0 issue per se...

Do you have a link?

Originally posted by: dguy6789
That's a bit inaccurate. The 3870 wasn't even close to two times as fast as the HD 2900XT. The 4670 is slower than the 3870, not faster.

Correct.
The 2900XT was around 10% slower than a 3870 (and not in all situations, because of the 2900XT's 105GB/sec memory bandwidth).

Originally posted by: A5
The only way they can do that is pricing. They're way too deep into the engineering section to go change any of the basic specs.

Exactly. This was and always will be the case with products shipping at the same time, or within a few months of each other.

 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Originally posted by: gersson
Originally posted by: CrystalBay
Crappy pic of 5870... There seems to be some speculation that this is a dual-core-on-die product, with X4 configurations possible...

Boy, you weren't kidding! ('crappy pic')

They ought to release that card from the gates of chip-hell

Yeah, crappy pic of a fugly card. Although I'm kind of thinking the HSF may have a shroud on it so we can't see what it really looks like. ATI's AIB partners have gotten used to, and are pleased to have, a large, flat surface to apply their branding to; I'm pretty sure that if ATI added an unnecessary seam down the center of the HSF, their AIBs wouldn't be too pleased.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: MODEL3
Originally posted by: Idontcare
At least you guys at ABT...

What does ABT stand for?

Alien Babeltech

Originally posted by: MODEL3
Originally posted by: Idontcare
Given the nature of TSMC's 40nm yield issues (it's a parametric problem, not a functional D0 issue per se...

Do you have a link?

Regarding what specifically? That TSMC has a 40nm yield issue? Or the differences between parametric yield and functional yield?
 

MODEL3

Senior member
Jul 22, 2009
528
0
0

Thanks

Originally posted by: Idontcare
Regarding what specifically? That TSMC has a 40nm yield issue? Or the differences between parametric yield and functional yield?

No, I meant where you got the information that the yield problems are parametric.

If I remember these terms correctly, then the most probable thing by far is that it's a parametric yield problem, especially at 40nm; I was just curious if you knew for sure.

Anyway, it doesn't matter, I guess.

Thanks again.



 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: MODEL3
Originally posted by: Idontcare
Regarding what specifically? That TSMC has a 40nm yield issue? Or the differences between parametric yield and functional yield?

No, I meant where you got the information that the yield problems are parametric.

If I remember these terms correctly, then the most probable thing by far is that it's a parametric yield problem, especially at 40nm; I was just curious if you knew for sure.

Anyway, it doesn't matter, I guess.

Thanks again.

Ah, I understand now. Yes, the specific headline-dominating yield issue with TSMC's 40nm process is parametric; even more specifically, it is leakage. That's not to say they don't have the usual functional yield-loss items on their Pareto chart, just that the number-one Pareto item is leakage (parametric yield loss), and it is abnormally elevated in magnitude for a node at this stage in its maturity cycle.

Links... there were quite a few public-domain discussions that detailed the issue as being leakage-related; however, in this particular case I happen to know for sure because of reasons relating to my background. That's no substitute for public links, though; you have the right to be as wary of my unsupported statements as of those made on Theinq or elsewhere.
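To make the parametric-vs-functional distinction concrete, here is a minimal sketch of a parametric (leakage) yield calculation; the distribution and spec numbers are invented for illustration and have nothing to do with actual TSMC data:

    from statistics import NormalDist

    # Parametric yield: the fraction of dies whose measured parameter
    # (here, leakage) lands within spec, independent of any defects.
    # All numbers are made up for illustration.
    leakage = NormalDist(mu=45.0, sigma=12.0)  # per-die leakage, arbitrary units
    leakage_spec = 60.0                        # maximum allowed leakage

    parametric_yield = leakage.cdf(leakage_spec)  # P(leakage <= spec)
    print(f"Mature-node parametric yield: {parametric_yield:.1%}")  # ~89%

    # An immature process shifts and widens the distribution; the spec
    # stays the same, so parametric yield collapses.
    immature = NormalDist(mu=62.0, sigma=18.0)
    print(f"Immature-node parametric yield: {immature.cdf(leakage_spec):.1%}")  # ~46%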
 

MODEL3

Senior member
Jul 22, 2009
528
0
0
Originally posted by: Idontcare
That's no substitute for public links, though; you have the right to be as wary of my unsupported statements as of those made on Theinq or elsewhere.

No I didn't mean that.

About three years ago I read about the terms "parametric yield" & "functional yield", and if I remember correctly, parametric loss was something like 20% (or 30%, I don't remember) of the total yield loss.

And when I read that TSMC's 40nm yield rate was only 30% (now they say they have improved it to 60%), I thought that a 70% loss was way too high for parametric yield problems alone.


 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: MODEL3
Originally posted by: Idontcare
That's no substitute for public links, though; you have the right to be as wary of my unsupported statements as of those made on Theinq or elsewhere.

No I didn't mean that.

About three years ago I read about the terms "parametric yield" & "functional yield", and if I remember correctly, parametric loss was something like 20% (or 30%, I don't remember) of the total yield loss.

And when I read that TSMC's 40nm yield rate was only 30% (now they say they have improved it to 60%), I thought that a 70% loss was way too high for parametric yield problems alone.

There are no hard and fast limits or bounds like that on what is parametric yield or functional yield.

Parametric yield starts out at 0% at the time a new node is defined; it is iteratively improved upon until such time (years later) that it is finally above a minimum threshold criterion viewed internally as necessary to enable risk production on the node. Some parameters are more critical than others, so not all parameters get equal priority when addressing their shortcomings with respect to the spec.

The same is true for functional yield, which has an additional parameter, die size, and is usually normalized and represented as "D0", the defect density. Fabs are benchmarked on their D0, and consultant reports can be purchased which will tell you the D0 of your competitors' fabs (albeit sanitized by labels; TSMC is not named but rather called "competitor A" or some such).

Suffice it to say you can have 100% yield loss from parametric problems alone, just as you can have 100% yield loss from functional (defect density) problems alone. When it comes to making scrap wafers, the sky is the limit. However, it is atypical for a node to enter the risk-production phase and still have yield-crippling parametric issues. That is simply a sign that the node was absolutely not ready for risk production and needed another 6-12 months in the iterative learning-cycle phase to get the silicon hitting the parametric specs.

Intel, for example, could probably have sent their 32nm process tech to risk production a year ago if they had wanted to, but it would have incurred similar issues, with certain parametric parameters not hitting their targets, resulting in significant yield loss and debugging issues.

The challenge is balancing the process-tech R&D timeline with the IC design/layout timeline. You don't want to go to tapeout and try to get first silicon when the process tech is so immature that it can't hit the parametric targets (you lose a lot of the desired feedback and learning from first silicon in that case), but at the same time you don't want to needlessly over-resource your process-tech R&D team to the point that they have the node fully ready to go but sitting unused because there are no devices taped out for it.

In these two-team races, one team is always going to be gated by the other; all you can hope to do from a high-level project-management perch is try to align the timelines of both teams as closely as possible (which requires an intentional resource imbalance). When both teams are internal (as at Intel, and as AMD used to be), we in the public don't get to witness the disconnects in public-theater form. But when two otherwise independent business entities are involved (foundry and customer, TSMC and AMD), it is more likely to make it out into the public domain, as that is where the shareholders are, and they like to know whom to blame when a ball gets dropped.
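A companion sketch for the functional side, using the common Poisson approximation Y = exp(-A * D0); this is a standard textbook model, not anything specific to TSMC, and the D0 values are illustrative:

    from math import exp

    def functional_yield(die_area_cm2: float, d0_per_cm2: float) -> float:
        """Poisson defect-limited yield: Y = exp(-A * D0)."""
        return exp(-die_area_cm2 * d0_per_cm2)

    die_area = 1.8                # a ~180mm2 die is 1.8cm2
    for d0 in (0.2, 0.5, 1.0):    # defects per cm2, illustrative values
        print(f"D0={d0}: functional yield {functional_yield(die_area, d0):.1%}")

    # Total yield is (roughly) the product of the two mechanisms, which is
    # why either one alone can drive yield toward zero, as described above.
    parametric_yield = 0.46       # from the earlier leakage sketch
    total = parametric_yield * functional_yield(die_area, 0.5)
    print(f"Combined yield: {total:.1%}")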
 

MODEL3

Senior member
Jul 22, 2009
528
0
0
Originally posted by: Idontcare
There are no hard and fast limits or bounds like that on what is parametric yield or functional yield.

Parametric yield starts out at 0% at the time a new node is defined; it is iteratively improved upon until such time (years later) that it is finally above a minimum threshold criterion viewed internally as necessary to enable risk production on the node. Some parameters are more critical than others, so not all parameters get equal priority when addressing their shortcomings with respect to the spec.

The same is true for functional yield, which has an additional parameter, die size, and is usually normalized and represented as "D0", the defect density. Fabs are benchmarked on their D0, and consultant reports can be purchased which will tell you the D0 of your competitors' fabs (albeit sanitized by labels; TSMC is not named but rather called "competitor A" or some such).

Suffice it to say you can have 100% yield loss from parametric problems alone, just as you can have 100% yield loss from functional (defect density) problems alone. When it comes to making scrap wafers, the sky is the limit. However, it is atypical for a node to enter the risk-production phase and still have yield-crippling parametric issues. That is simply a sign that the node was absolutely not ready for risk production and needed another 6-12 months in the iterative learning-cycle phase to get the silicon hitting the parametric specs.

Intel, for example, could probably have sent their 32nm process tech to risk production a year ago if they had wanted to, but it would have incurred similar issues, with certain parametric parameters not hitting their targets, resulting in significant yield loss and debugging issues.

The challenge is balancing the process-tech R&D timeline with the IC design/layout timeline. You don't want to go to tapeout and try to get first silicon when the process tech is so immature that it can't hit the parametric targets (you lose a lot of the desired feedback and learning from first silicon in that case), but at the same time you don't want to needlessly over-resource your process-tech R&D team to the point that they have the node fully ready to go but sitting unused because there are no devices taped out for it.

In these two-team races, one team is always going to be gated by the other; all you can hope to do from a high-level project-management perch is try to align the timelines of both teams as closely as possible (which requires an intentional resource imbalance). When both teams are internal (as at Intel, and as AMD used to be), we in the public don't get to witness the disconnects in public-theater form. But when two otherwise independent business entities are involved (foundry and customer, TSMC and AMD), it is more likely to make it out into the public domain, as that is where the shareholders are, and they like to know whom to blame when a ball gets dropped.

Thanks very much, I now understand the terms better.