Fermi possibly delayed til March or April


GaiaHunter

Diamond Member
Jul 13, 2008
3,732
432
126
The moment NVIDIA has a functional sellable product, it will release it.

Now a functional sellable product isn't just something that can run a game - look at IDC's posts.

So, NVIDIA having working silicon isn't mutually exclusive with NVIDIA not having a functional sellable product - if NVIDIA has a Fermi that performs 20% faster than a Cypress but needs to sell it for $700, it isn't exactly sellable, especially with the 5970 already out; or maybe NVIDIA only gets a small portion of chips that fit the power envelope, etc.

NVIDIA is certainly tweaking its Fermi chip - but while it is possible it is tweaking it to surpass Cypress performance (and at this point in time that basically means increasing clocks - NVIDIA isn't going to do a major architectural change), most likely NVIDIA is tweaking it to increase the number of workable/sellable chips out of a wafer.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Hilarious... you guys have literally ZERO sense of how to run a business, especially in a fiercely competitive market, and not the slightest clue about the history of this market.
Seriously, with this crazy (il)logic I wouldn't let you run even a condom rental business... :D

Oh yeah, I know. It's called 'slowly nudging the mainstream opinion' in a certain direction, and it's a fairly basic tactic among astroturfers and PR people.
It's not hard to predict that this much bad news will find its way into the stock price sooner rather than later, so this kind of gentle 'hushing' of any discussion of the financial implications of Nvidia's failed GT300 design comes across quite obviously, don't you think, dear NV Focus Group member? :p

Well said.

BTW - multi-quote functionality is available in the forum now. :p

Question - anyone think the delay of Fermi will cause AMD to keep their 5xxx series out longer? They have the choice to continue with their "tick-tock" strategy and try and bury NV, or they can sit with their 5xxx series, refine the process, and make more $$$. Thoughts?
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,732
432
126
Question - anyone think the delay of Fermi will cause AMD to keep their 5xxx series out longer? They have the choice to continue with their "tick-tock" strategy and try and bury NV, or they can sit with their 5xxx series, refine the process, and make more $$$. Thoughts?

Since they have less market share I don't see that sitting on their asses is a good strategy.

Also I bet their next gen will have a higher focus on GPGPU than the current one.
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
Since they have less market share I don't see that sitting on their asses is a good strategy.
Agreed. They will make even more profit by sticking to their "tick-tock" GPU strategy if it means pulling away from nVidia significantly, so "sitting on their asses" seems like a less desirable course of action.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
As always, IDC, an excellent post. Thank you for taking the time to put it together. It really helps illustrate the relationship between die size and yields. Not that you care about titles, but your "Elite" status here has been earned several times over by now.

Keys and lopri, you can see from IDC's post the reasons why we sometimes include die sizes and yields in our discussions here in the video forum. Each factor can have a significant impact on release dates, street prices and availability once a company begins production.

Oh, I agree. But it's not sometimes. It's just about every single time. I'm not saying there is anything wrong with it, but it does seem that most folks in here are more interested in the financials of these companies than the products they offer for gaming.

Maybe there should be video/CPU financial forums. Just a thought.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
BTW - multi-quote functionality is available in the forum now. :p

Question - anyone think the delay of Fermi will cause AMD to keep their 5xxx series out longer? They have the choice to continue with their "tick-tock" strategy and try and bury NV, or they can sit with their 5xxx series, refine the process, and make more $$$. Thoughts?

I think AMD needs to keep pushing as hard as they can. Never sit on their laurels. If they have a "tick-tock" strategy, let them please stick to it.
 

v8envy

Platinum Member
Sep 7, 2002
2,720
0
0
I think AMD needs to keep pushing as hard as they can. Never sit on their laurels. If they have a "tick-tock" strategy, let them please stick to it.

I think it's a widely recognized fact that the latest recession ended last quarter. If AMD's decision makers are at all savvy, they'll focus on offering value to entice people to spend their newly acquired cash rather than optimizing for the lowest operating cost.

So, yeah. I'd say it's highly likely both NV and ATI will spend more on R&D to deliver exciting products in 2010 and 2011. It is less likely to see either of them sit back and just reintroduce existing product for two years.

And financial discussions matter. Market share determines units shipped, which indirectly determines cost due to economies of scale and amortized R&D. At the same time it puts some parameters around future R&D expenditures, marketing, developer support... Discussions of market share, company financial health and component costs are at least as pertinent as those of shader counts and memory bandwidth.
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
And financial discussions matter. Market share determines units shipped, which indirectly determines cost due to economies of scale and amortized R&D. At the same time it puts some parameters around future R&D expenditures, marketing, developer support... Discussions of market share, company financial health and component costs are at least as pertinent as those of shader counts and memory bandwidth.

+1. My sentiments as well.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
I think it's a widely recognized fact that the latest recession ended last quarter. If AMD's decision makers are at all savvy, they'll focus on offering value to entice people to spend their newly acquired cash rather than optimizing for the lowest operating cost.

So, yeah. I'd say it's highly likely both NV and ATI will spend more on R&D to deliver exciting products in 2010 and 2011. It is less likely to see either of them sit back and just reintroduce existing product for two years.

And financial discussions matter. Market share determines units shipped, which indirectly determines cost due to economies of scale and amortized R&D. At the same time it puts some parameters around future R&D expenditures, marketing, developer support... Discussions of market share, company financial health and component costs are at least as pertinent as those of shader counts and memory bandwidth.

Recession over? Explain that to the highest number of unemployed in decades. We are not anywhere near out of the woods yet. We may have touched on the lowest step of recovery, but one foot or the other is still in the woods. It'll take years to get out of this rut.

And I agree that financial discussions matter to some. I don't think a true gamer cares about the amount of money going to R&D next round while they're running around shooting enemies or completing a quest. What I mean is, this type of discussion has dominated the video forum lately. For the last few gens, probably since G80/R670. It has become more of a "business" oriented forum than anything to do with actually playing games and what hardware is used. I just think it's a little too much IMHO.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,732
432
126
Recession over? Explain that to the highest number of unemployed in decades. We are not anywhere near out of the woods yet. We may have touched on the lowest step of recovery, but one foot or the other is still in the woods. It'll take years to get out of this rut.

I'm not sure that unemployment is all a consequence of the recent recession - some of it is - but another part is due to countries like China and India entering the global economy. That will take some time to rebalance, and after that we, those of us who live in the Western world, might find that we've lost some of our buying power.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Recession over? Explain that to the highest number of unemployed in decades. We are not anywhere near out of the woods yet. We may have touched on the lowest step of recovery, but one foot or the other is still in the woods. It'll take years to get out of this rut.

And I agree that financial discussions matter to some. I don't think a true gamer cares about the amount of money going to R&D next round while they're running around shooting enemies or completing a quest. What I mean is, this type of discussion has dominated the video forum lately. For the last few gens, probably since G80/R670. It has become more of a "business" oriented forum than anything to do with actually playing games and what hardware is used. I just think it's a little too much IMHO.

110% agreed. To me, as a gamer, it's all about the card and its performance. Financial info is interesting, but it's a different subject that keeps getting lumped into card performance discussions because fanboys on both sides try to one-up those they're arguing with.
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
110% agreed. To me, as a gamer, it's all about the card and its performance. Financial info is interesting, but it's a different subject that keeps getting lumped into card performance discussions because fanboys on both sides try to one-up those they're arguing with.
What performance threads have been hijacked with financial/production/market discussions? If that were the case, then it should be stopped since it's a derailment.

But that's not the case here. This thread is about the Fermi delay, and the consequences of that are pretty much fair game. We can't talk about performance here since there are no numbers to speak of yet (Wreckage made a recent thread about it, though. Looks interesting, haven't participated in it yet). Talking about the possible effects on nVidia of Fermi being delayed, financially or market-position-wise, seems right on topic to me.
 

v8envy

Platinum Member
Sep 7, 2002
2,720
0
0
Recession over? Explain that to the highest number of unemployed in decades. We are not anywhere near out of the woods yet. We may have touched on the lowest step of recovery, but one foot or the other is still in the woods. It'll take years to get out of this rut.

I'm not sure what employment or lack thereof has to do with anything. The GDP grew 3.5% in the third quarter. Growing the GDP means the opposite of shrinking, so last quarter the economy did the exact opposite of recessing. In a rather spectacular way.

Companies typically fail not during recessions, but during recovery periods if they are slow to act. If an organization is still running on a skeleton staff and bunker mentality while their competition is growing it will become irrelevant and disappear. My point is, NV did exactly the right thing during the recession with the G92 rebadges. Doing that for another 2 years would be... not so good. I'm sure key decision makers at both ATI and NV are looking at last quarter's economic * G R O W T H * as an indicator to switch from maintenance mode to growth mode.

That's why discussions of financials are interesting, even to enthusiasts. It's yet another window to gaze upon the future and speculate.
 

scooterlibby

Senior member
Feb 28, 2009
752
0
0
I'm not sure what employment or lack thereof has to do with anything. The GDP grew 3.5% in the third quarter. Growing the GDP means the opposite of shrinking, so last quarter the economy did the exact opposite of recessing. In a rather spectacular way.

Exactly. 'Recession' refers only to a certain number of quarters of negative GDP growth. So, while Keysplayr is right that we're nowhere near out of the woods, we are indeed out of the recession. (If I remember correctly, the term 'recessionary gap' does refer to disequilibrium in the labor market - please correct me if I am wrong.) Employment is a lagging indicator, so it makes perfect sense that we would have high unemployment while not being in a recession. The fear, though, is having a 'jobless recovery' and experiencing something like Japan's Lost Decade.

If that happens I will spend my last remaining money on a DX11 card and shut out the world by playing Bad Company 2 and eating beans and rice. I'm sure my wife will understand.
 

Kuzi

Senior member
Sep 16, 2007
572
0
0
We arrive at 167 dies for Cypress and 99 dies for Fermi (not factoring in yield losses).

Factoring in functional yield losses we are expecting net die per wafer to decline to ~90 for Cypress and ~38 for Fermi.

These numbers could be higher depending on the actual D0 of the fab and the methods of harvesting and fusing (converting those 5870's into 5850's, etc) and can also be lower if D0 is larger and as we account for parametric yield (binning, etc).

What does this mean for cost per die? 40nm wafers go for around $7k, all the obvious and previously discussed caveats go into this number so I won't rehash them here, so we'd be expecting Cypress to cost ~$78 and Fermi to cost ~$184 based on the myriad of assumptions we made in this analysis.

Now we know both AMD and NV would like to see their chips fetch them 50% GM, that means Cypress selling for $160 and Fermi selling for $370.

(that's $200 more for the Fermi GPU than the Cypress GPU right there, just getting the chips to the AIB's)

That's a pretty awesome analysis, IDC. I would never have thought a Fermi GPU could cost nVidia $184 to make. If we factor in R&D costs, which I'm sure were much higher for Fermi than for Cypress, there is even more reason for nVidia to price their GPUs higher. So if we assume these numbers are correct, AIBs need to pay around $200 more for Fermi than for Cypress, and then build a card with 1.5GB of memory thanks to the 384-bit memory controller. That means a GF100 card with 1.5GB of memory could easily land in the $600-$650 range. If such a card offered similar performance to a Radeon 5970, it would be a better buy, because it's a single GPU and has more memory. But my guess is that the GF100 will end up slower than the 5970, and about 20-30% faster than the 5870.
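For anyone who wants to play with IDC's arithmetic, here's a rough back-of-envelope sketch in Python. Only the ~$7k wafer cost and the 50% gross-margin target come straight from his post; the 300mm wafer, the ~550mm^2 guess for Fermi's die size, the simple Poisson yield model and the 0.18 defects/cm^2 density are my own assumptions, picked so the output lands in the neighborhood of his quoted figures.

Code:
import math

WAFER_DIAMETER_MM = 300.0   # assumed 300mm wafers
WAFER_COST_USD = 7000.0     # ~$7k per 40nm wafer, as quoted above
TARGET_GROSS_MARGIN = 0.50  # both vendors reportedly want ~50% GM

def gross_dies(die_area_mm2):
    # Classic gross-die-per-wafer estimate with an edge-loss correction.
    r = WAFER_DIAMETER_MM / 2.0
    return int(math.pi * r * r / die_area_mm2
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2.0 * die_area_mm2))

def functional_yield(die_area_mm2, d0_per_cm2):
    # Simple Poisson defect model: yield = exp(-D0 * die area).
    return math.exp(-d0_per_cm2 * die_area_mm2 / 100.0)

for name, area_mm2 in [("Cypress", 334.0), ("Fermi (guessed size)", 550.0)]:
    gross = gross_dies(area_mm2)
    net = gross * functional_yield(area_mm2, d0_per_cm2=0.18)  # D0 is a guess
    die_cost = WAFER_COST_USD / net
    asp = die_cost / (1.0 - TARGET_GROSS_MARGIN)
    print(f"{name}: {gross} gross dies, ~{net:.0f} net, "
          f"~${die_cost:.0f} per die, ~${asp:.0f} at 50% GM")

Change the die sizes or D0 and you can see how quickly the economics swing, which is exactly why harvesting (turning would-be 5870s into 5850s, etc.) matters so much for a big die.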
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,732
432
126
That means a GF100 card with 1.5GB of memory could easily land in the $600-$650 range. If such a card offered similar performance to a Radeon 5970, it would be a better buy, because it's a single GPU and has more memory. But my guess is that the GF100 will end up slower than the 5970, and about 20-30% faster than the 5870.

The article on techreport (http://www.techreport.com/articles.x/17815/4) puts GF100 with a core of 650 MHz, 1700 MHz for the shaders and 4200 MHz for the memory (201.6 GB/s) with 512 CUDA cores.

The shader clock seems quite aggressive considering the Tesla parts seem to be clocked at 1200 MHz.

With this raw data and no idea of what architecture tweaks will do to GF100 vs GT200 and considering both RV770 and RV870, it seems that 20-30% should be the top performance increase over the 5870. In the article they say something along the lines of GF100 being up to GTX 285 SLI speeds.

So, GF100 up to 20% faster than the 5870 seems reasonable. With that, it seems the 5970 will be faster than the GF100 by a decent margin.
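Just to sanity-check the bandwidth figure quoted from the article, it's straightforward arithmetic from the effective memory data rate and bus width; the 5870 line is only there for comparison, using its well-known 4800 MT/s / 256-bit numbers.

Code:
# Bandwidth = effective data rate (transfers/s) x bus width (bytes/transfer).
gf100_bw = 4200e6 * (384 / 8) / 1e9   # -> 201.6 GB/s, matches the article
hd5870_bw = 4800e6 * (256 / 8) / 1e9  # -> 153.6 GB/s
print(f"GF100: {gf100_bw:.1f} GB/s vs HD 5870: {hd5870_bw:.1f} GB/s")

So on paper GF100 would have roughly a 30% memory bandwidth advantage over the 5870, for whatever that ends up being worth.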
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
I also don't remember your exact verbiage, but a few days back, when replying to someone else about how a typical contract would be structured, you mentioned that if the yield is under x percent (maybe you used 20% in your example?) then the wafer would be free. I don't know how many dice Nvidia could get if they were able to get 100% usable silicon from a wafer, but assuming TSMC's process is as borked as it seems right now, wouldn't they be able to get a good number of parts free, and then, assuming TSMC gets the process fixed over the next few weeks, wouldn't Nvidia still get enough Fermi dice to sell for the margins they want?

TSMC and Nvidia aren't idiots, of course, so the threshold yield numbers between the tiered payouts are going to account for the reality of attempting to yield much larger die. Otherwise everyone would shoot for a 600mm^2 die so they could get that one functional chip per wafer for free.

The other issue here is that it's not just a matter of yields but also a matter of capacity. Even if Nvidia were looking at 10 free chips per wafer, if all they have access to is 1,000 wafers per month, they aren't about to have any kind of volume worth the cost of the infrastructure that must be set up to support releasing a SKU.

Also there are additional risks (risk equals money to characterize, quantify, and mitigate if necessary) involved with chips from low-yielding wafers, in that there is rather good precedent for the few good chips on low-yielding wafers to also have lifetime quality issues. Neither Nvidia (nor AMD, nor anyone for that matter) wants to seed the market with a handful of chips gathered off of questionable wafers which have a high(er) risk of dying early in the field.

For example, one type of lifetime reliability issue that is related to GOI (gate oxide integrity) is so sensitive that if a single chip on a wafer results in a positive for this issue the entire wafer is scrapped even if the remaining chips test out as being fine.

It's a trade-off between the cost of elevating the false-negative rate and the profit lost by missing out on selling a few true positives.

When buying a car, the last thing you'd want to find out is that the assembly line that produced it is so bad that yours was one of only five cars out of 100 assembled that day which actually passed QA... you'd probably spend the rest of your life wondering what's wrong with your car that QA missed, given that production standards at the plant are bad enough to result in a 95% reject rate.

It's the same with businesses: they don't like the idea of elevating their risk of customer returns by shipping a few more true positives from wafers that had a lot of bad die, so they will just assume the remaining handful of true positives (seemingly good die based on testing) on the wafer are actually all false negatives (chips that would have tested out as bad had they undergone more rigorous/costly lifetime quality testing).

(note - I am intentionally glossing over a LOT of the details and fine print here for the sake of brevity)

The article on techreport (http://www.techreport.com/articles.x/17815/4) puts GF100 with a core of 650 MHz, 1700 MHz for the shaders and 4200 MHz for the memory (201.6 GB/s) with 512 CUDA cores.

The shader clock seems quite aggressive considering the Tesla parts seem to be clocked at 1200 MHz.

With this raw data and no idea of what architecture tweaks will do to GF100 vs GT200 and considering both RV770 and RV870, it seems that 20-30% should be the top performance increase over the 5870. In the article they say something along the lines of GF100 being up to GTX 285 SLI speeds.

So, GF100 up to 20% faster than the 5870 seems reasonable. With that, it seems the 5970 will be faster than the GF100 by a decent margin.

If it's going to have 50% more transistors, it sure as heck better have a skosh better performance.

Quick question for the folks who are paying attention to performance numbers of the Evergreen family: if we set aside the topic of RV770 vs. Juniper (and why shrinking RV770 seems to have resulted in a less efficient architecture at this time) and instead just focus on the performance scaling from a 166mm^2, 1.04B-xtor Juniper to a 334mm^2, 2.15B-xtor Cypress, where would we expect the performance of a hypothetical 3.0B-xtor Evergreen-class GPU to fall - 10%, 30%, or 50% faster than Cypress?

Would the performance scale out to basically be the equivalent of a Juniper+Cypress or is the scaling efficiency of the Evergreen architecture not that high?
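If anyone wants to put rough numbers on that question, here's one purely hypothetical way to frame it: assume performance scales as a power law of transistor count, fit the exponent from whatever Juniper-to-Cypress speedup you trust from the benchmarks, and extrapolate to a 3.0B-xtor part. The example speedups in the loop are illustrative placeholders, not measured data.

Code:
import math

def gain_over_cypress(juniper_to_cypress_speedup):
    # Fit k in perf ~ (transistor count)^k from the Juniper (1.04B xtor)
    # to Cypress (2.15B xtor) jump, then extrapolate to 3.0B xtors.
    k = math.log(juniper_to_cypress_speedup) / math.log(2.15 / 1.04)
    return (3.0 / 2.15) ** k - 1.0

for s in (1.5, 1.7, 1.9):  # hypothetical Juniper->Cypress speedups
    print(f"{s:.1f}x Juniper->Cypress implies ~{gain_over_cypress(s):.0%} over Cypress")

On those assumptions the answer lands somewhere in the 20-35% range rather than 50%, but it all hinges on how well the architecture actually scales, which is the open question.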
 

PingviN

Golden Member
Nov 3, 2009
1,848
13
81
This thread is so bookmarked, Idontcare just shoves great information in our faces :p Gonna take my time and read it through and through a couple of times tomorrow, I feel like I might learn a lot :p

Keep it up! Enlighten me (us?)!
 

v8envy

Platinum Member
Sep 7, 2002
2,720
0
0
Would the performance scale out to basically be the equivalent of a Juniper+Cypress or is the scaling efficiency of the Evergreen architecture not that high?

I think we'll have the answer to your question in a year or so, after ATI has a chance to work out driver kinks. At this point IMO there are too many strange performance anomalies to isolate the silicon for analysis.

It's almost like the shader compilers are emitting code better suited for a different architecture, causing lower overall hardware utilization once you have more hardware than is present on the 4890 or 5770.

I wouldn't be shocked to see 20% better performance for the 58xx series in current titles toward the end of their lifetime. I *would* be shocked if the 5870 doesn't outdo the 295 on at least a few future titles.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,732
432
126
Quick question for the folks who are paying attention to performance numbers of the Evergreen family: if we set aside the topic of RV770 vs. Juniper (and why shrinking RV770 seems to have resulted in a less efficient architecture at this time) and instead just focus on the performance scaling from a 166mm^2, 1.04B-xtor Juniper to a 334mm^2, 2.15B-xtor Cypress, where would we expect the performance of a hypothetical 3.0B-xtor Evergreen-class GPU to fall - 10%, 30%, or 50% faster than Cypress?

Would the performance scale out to basically be the equivalent of a Juniper+Cypress or is the scaling efficiency of the Evergreen architecture not that high?

Guess it would depend on what the biggest bottleneck of the architecture is.

A part with 2400 shaders, 48 ROPs and a 384-bit bus could be interesting, but ATI doesn't seem to want to mess with anything bigger than a 256-bit bus.
 

Zap

Elite Member
Oct 13, 1999
22,377
7
81
Exactly!! Comeon BFG and EVGA!! You know you wanna join the ATI bandwagon!! :D

Uh, what?

EVGA's doing motherboards.

BFG is doing PSUs and complete computers.

Neither is doing ATI.

Suddenly those 4870 1GBs that were available 3 months ago for $125 after MIR or $130 GTX 260s sound bittersweet.

I think they are hanging out with the $20AR 4GB DDR2 kits from earlier this year. :(

So, the new nvidia card comes out in March.. after the 5XXX series has been out for six months and sold plenty, at least in the coming few months.

Then ATI's 6XXX series is out again in another six months after nv's launch

Yeah, how dare companies not release products at the same exact time as their competitors. Car companies all release next model year cars in September. Why can't GPU companies plan ahead and coordinate for concurrent product launches? :rolleyes:

So it seems when ATI is on top, ATI fans probably seem more logical as they show total fps and/or price/performance ratio. When NVIDIA is on top, NVIDIA fans seem more logical as they show total fps and/or price/performance ratio.

In the end, it seems to be part of the human nature to pick sides, be it a shirt or a hardware company.

Back in high school it was the "I'd rather push my Chevy than drive your Ford" mentality, though I think these days Chevy/Ford has been usurped by Subaru/Mitsubishi in the ADD crowd.

Well said.

BTW - multi-quote functionality is available in the forum now. :p

Indeed. :hmm:
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Member of Nvidia Focus Group

Heat 68-0-0 | Member of TeAm Anandtech F@H

I can't help but notice your sig is devoid of a publicized rig at this time. I would ask, but I know that you couldn't elaborate even if that means what I think it means.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
I can't help but notice your sig is devoid of a publicized rig at this time. I would ask, but I know that you couldn't elaborate even if that means what I think it means.

I have a new rig, but with 2x GTX295's. So don't get excited, hehe.

I just got tired of seeing my long sig and trimmed it down. But you're right, I couldn't elaborate if I had something else.
 

Kuzi

Senior member
Sep 16, 2007
572
0
0
I think Fermi being delayed till January should not be a big problem for NVidia, but if it's not out till the March/April time frame, then that could be really bad. That would not only mean more sales and OEM design wins for ATI, but also that software developers would be coding and testing their games/programs on ATI cards, which can give ATI a performance advantage in DirectX11/DirectCompute. In the past, it was NVidia that usually had this favorable position.

As for ATI releasing the 6xxx series, we have to remember that this will probably be a half-node 28nm part. TSMC is still having problems with their 40nm process, so it doesn't seem likely that we'll see any real quantity of 28nm parts from them till maybe early 2011. I'm not sure - ATI could be going with GlobalFoundries for the next round, and maybe they could do better than TSMC in terms of yields, but again I don't see them having the 28nm process ready for mass production till early 2011. Any thoughts on this, IDC? :)