NV: Everything under control. 512-Fermi may appear someday. Yields aren't under 20%


happy medium

Lifer
Jun 8, 2003
14,387
480
126
The 2900 is similar to Fermi in many ways... it came out well after the competition, so we enthusiasts had higher hopes for it than what materialized. Both are hotter, more power-hungry, and louder than their closest competition. Both are/were slower than the competition's fastest part (of course it's not that black and white, seeing as AMD's fastest is dual-GPU, so some may not choose to use it).

There are many parallels...

Nvidia gets to claim 'fastest single GPU', while AMD could only claim they were somewhere around the top four or so. That's really the only place where I see a huge difference between the two.

Yeah, we are in a different era. Back then there were no dual GPUs; single GPUs were king. The 7900 GX2... well, it sucked. The 3870 X2 came much later, on the ATI side anyway.

I for one don't believe that ATI was positioning the 5970 against the GTX 480.
If Fermi had hit its thermal/power/clock targets, the gamble would have paid off immensely. A dual-Fermi card would have blown ATI out of the water.
I think they are very lucky that Nvidia didn't hit the Fermi targets.
It would have been another disaster for them, like the 8800 GTX days.
 

yh125d

Diamond Member
Dec 23, 2006
6,886
0
76
I guess it would have worked out that way if they hadn't scrapped the 32nm process, huh?

Probably. A 32nm Fermi would only be a little bigger than 40nm Cypress. You could probably pull off a 512SP, 800MHz Fermi at 32nm that maxed out at maybe 225W (rough guess, probably a bad one).
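(As a rough back-of-envelope on the die-size claim above, assuming ideal area scaling with feature size and commonly cited die areas of roughly 529 mm^2 for 40nm GF100 and 334 mm^2 for 40nm Cypress; these figures and the ideal-scaling assumption are illustrative, not vendor data.)

```python
# Rough back-of-envelope for the "a little bigger than Cypress" claim above.
# Assumptions (not official figures): ideal area scaling with the square of
# the feature size, ~529 mm^2 for 40nm GF100, ~334 mm^2 for 40nm Cypress.

gf100_40nm_mm2 = 529.0      # assumed GF100 die area at 40nm
cypress_40nm_mm2 = 334.0    # assumed Cypress die area at 40nm

scale = (32.0 / 40.0) ** 2  # ideal 40nm -> 32nm area scaling, ~0.64
gf100_32nm_mm2 = gf100_40nm_mm2 * scale

print(f"Ideal 32nm GF100: ~{gf100_32nm_mm2:.0f} mm^2 "
      f"vs 40nm Cypress at ~{cypress_40nm_mm2:.0f} mm^2")
# -> roughly 339 mm^2, i.e. only slightly larger than Cypress, which is
#    consistent with the rough guess in the post; real shrinks rarely
#    scale this cleanly.
```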
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
They -will- keep getting bit in the ass with yield issues if they keep selling 500-600mm^2 dies. Maybe not as bad as this generation, with TSMC's own problems adding to the issue, but they could compete on price much better if they could squeeze the same performance out of a smaller die.

Using GPGPU PPW to legitimize the power draw of a gaming card is fatuous. If NV wants to become big in the GPGPU market, that's fine, but they need to keep their GPGPU products and their gaming products separate instead of cramming a compute card and a gaming card into one die, creating a bloated chip that most GTX 480 buyers will never use fully.

Why can't NV make a good gaming card AND a good compute card? They don't need to do both with one die. It winds up with most GTX 480 buyers having tons of compute power they don't need/want, and most Quadro/Tesla buyers with a bunch of gaming prowess they'd just as soon replace with more compute power.


It's like trying to build a sedan with a pickup bed and 4WD. You end up making a vehicle with sacrifices all over, and (hopefully) sooner or later you'll realize you should just build two vehicles, a car and a truck.


The Subaru Brat? It sucked by the way.

http://images.google.com/imgres?img...s&rlz=1I7ADRA_en&imgtype=i_similar&tbs=isch:1
 

extra

Golden Member
Dec 18, 1999
1,947
7
81
Pretty sure AMD was intending for the 5970 to compete with the 480. It didn't, because the 480 didn't work out quite like Nvidia had hoped, due to TSMC sucking and Nvidia being, uhh... "overly optimistic". And people are acting like the 480 dominates the 5870 or something. It doesn't at all... it's only a tiny bit faster. You know it's a fail when ATI raises their prices after release, the competition finally releases their cards 6 months later, and ATI STILL keeps their prices inflated. GRRRRR.

If this were last generation, the 470 should cost like $299 and the 5850 like $250... but nooooo... /anyway

As for a 28nm refresh later this year: don't bet on it. I bet we won't see 28nm video cards until Q2 next year (not counting some low-end test parts, a la the 4770).
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
They -will- keep getting bit in the ass with yield issues if they keep selling 500-600mm^2 dies. Maybe not as bad as this generation, with TSMC's own problems adding to the issue, but they could compete on price much better if they could squeeze the same performance out of a smaller die.

They need to be 'bit in the ass' by yield issues before it becomes an issue. When it interrupts their ability to sell parts as they normally do, then it becomes an issue. Them shipping A3 silicon for launch parts is normal; being constrained by yields early in the life cycle is normal; when it starts to impact their bottom line, then it becomes a bite in the ass. There is a shocking disparity between enthusiast analysis and business reality.
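(For context on the quoted die-size argument, here is a minimal sketch of the textbook Poisson yield model; the defect density and wafer assumptions are illustrative placeholders, not TSMC data.)

```python
import math

# Minimal sketch of the textbook Poisson yield model, yield = exp(-D * A),
# to show why a 500-600 mm^2 die is so much harder to sell cheaply than a
# ~330 mm^2 one. The defect density is an illustrative assumption, not a
# TSMC figure, and edge/reticle losses are ignored.

DEFECT_DENSITY = 0.004                    # assumed defects per mm^2
WAFER_AREA = math.pi * (300.0 / 2) ** 2   # 300 mm wafer, edge losses ignored

def good_dies_per_wafer(die_area_mm2: float) -> float:
    """Approximate good dies per wafer for a given die area."""
    candidates = WAFER_AREA / die_area_mm2
    die_yield = math.exp(-DEFECT_DENSITY * die_area_mm2)
    return candidates * die_yield

for area in (334, 529):   # roughly Cypress-sized vs GF100-sized dies
    print(f"{area} mm^2 die: ~{good_dies_per_wafer(area):.0f} good dies per wafer")
# The bigger die loses twice: fewer candidate dies per wafer AND a lower
# per-die yield, which is the cost/price argument being quoted above.
```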

Using GPGPU PPW to legitimize the power draw of a gaming card is fatuous.

And you have deemed that it is to be a gaming card, explicitly? Did you tell nVidia that you decided this, so it must be so? It doesn't appear that they got your memo dictating what their products are supposed to be :)

If NV wants to become big in the GPGPU market, that's fine, but they need to keep their GPGPU products and their gaming products separate instead of cramming a compute card and a gaming card into one die, creating a bloated chip that most GTX 480 buyers will never use fully.

I can't even figure out what you are trying to say here. Are you implying that nVidia should make a vector processor and use that separately from a GPU to reach their intended goals? Or are you implying that they should nigh double their R&D to have separate Quadro and GeForce lines? Either option is significantly more costly on the business end than doing it exactly how they are now. As far as not using the chip 'fully' and it being a waste of die space: how many people that purchased a 5xxx part are using DolbyHD output via HDMI? That is a quick generic example, but the general point is that both companies have die space that could easily be considered 'wasted', as *most* of their users don't put it to use. That doesn't mean it is a bad idea, in any way, shape, or form, for them to have the additional functionality. You may think it is, and to that I would say: why is it you don't own a multi-billion-dollar GPU company yourself and show them how it is done? :)

Why can't NV make a good gaming card AND a good compute card?

That would be counterproductive on so many different levels. One of the reasons is that nV is actively pushing to get GPGPU used more in the mainstream; chicken and egg. The other is the staggering increase in R&D costs associated with such a strategy. Right now large portions of the costs of developing each part of the GPU are shared; those would have to be doubled if they were to split the lines up. 3DLabs, Glint, and SGI, to name a few, used to try that more specialized approach, and nVidia ended their businesses in those markets using its single monolithic die approach. The R&D warranted by a niche part can't compare to that behind mass-market offerings. Going with separate dies for each task, with nigh no performance advantage, is a good setup for failure.

It winds up with most GTX 480 buyers having tons of compute power they don't need/want, and most Quadro/Tesla buyers with a bunch of gaming prowess they'd just as soon replace with more compute power.

How many GTX 480 owners do you know that don't want the compute power? Quadro is still primarily a 3D workstation card, so yes, most of those users still do want the "gaming" power that it has to offer. As far as Tesla buyers go, I would simply say they can go with the part that offers superior performance/watt and doesn't have GPU functionality on it... oh wait, I guess that doesn't exist. I think you will find that people who purchase Tesla parts have no problem with their price/performance or performance/watt metrics, as economies of scale have made GPGPU performance its own market that nothing effectively competes with. On that front, nV has clearly proven its utter superiority. The only question at this point is how it will work out for them in the general consumer/gamer market. A whole bunch of people on this board seem convinced that it is a failure. This forum also overwhelmingly thought the 4xxx parts were superior to the last-generation nV offerings, while nV outsold ATi 2:1. If nV sees a large sales decline and has difficulty turning a profit using their current strategy, then you can be sure they will reconsider their approach. To date, it has worked out very well for them.

It's like trying to build a sedan with a pickup bed and 4WD.

No, it's more like developing a high-performance engine and dropping it into your regular cars to spread out the R&D costs. Worse fuel mileage, more weight, higher cost, and rarely do you hear anyone complain, outside of fans of the competition.
 

thilanliyan

Lifer
Jun 21, 2005
12,085
2,281
126
I for one don't believe that ATI was positioning the 5970 against the GTX 480.
If Fermi had hit its thermal/power/clock targets, the gamble would have paid off immensely. A dual-Fermi card would have blown ATI out of the water.
I think they are very lucky that Nvidia didn't hit the Fermi targets.
It would have been another disaster for them, like the 8800 GTX days.

How so? Your ignoring the 5970 doesn't mean it doesn't exist. The 8800 GTX had absolutely no competition... literally. Even if Fermi had hit shader/clock targets, the 5970 would still be competitive. It wouldn't be like the 8800 GTX launch at all. A dual-Fermi card is a big IF, so there's no point debating that.
 

sandorski

No Lifer
Oct 10, 1999
70,879
6,417
126
[attached image: mss.jpg]

/thread
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
In a nutshell that's what I was saying, but I just don't see a refreshed 58xx series scaling as well as the GTX series, looking at the overclocked benchmarks.

I see Nvidia extending their performance lead, and ATI once again giving up, lowering prices, and settling for second. :hmm:

Hilariously out-of-touch post... in case you haven't realized, Fermi currently has ZERO headroom whatsoever while ATI has ~50% headroom, and with a respin this just gets worse: ATI could essentially just release a dual-5970 single-card monster and a single-GPU card with 5970 performance, both at higher clocks, killing everything NV can pull together from this damned Fermi.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
That is a lot of ifs. We really have no idea what the 6000 series will do or what process it will be on. The last I heard, it was to be on the next process iteration. Who the hell knows when that will be ready. And what do you mean by development lead? These projects start years ago. Nvidia and ATI already have successors in development for this and the next generation. I have a feeling the 460 will help a lot in total units shipped for DX11 chips on the green side once it hits the streets in June.

People also seem to forget that if OpenCL takes off, AMD needs to play catch-up in GPU compute in a big way. It is possible that doing so will require they develop as complex a chip as Fermi, with all the problems a huge monolithic chip brings from a manufacturing perspective.

The 460 is looking to cost close to $300 and will not be faster than a 5850; who is going to buy them?

It's naive to think that ATI's new series will not make up the 15% lead the 480 has on the 5870, and then quite a bit more; when has a brand-new series not brought large performance gains?
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
How so? Your ignoring the 5970 doesn't mean it doesn't exist. The 8800 GTX had absolutely no competition... literally. Even if Fermi had hit shader/clock targets, the 5970 would still be competitive. It wouldn't be like the 8800 GTX launch at all. A dual-Fermi card is a big IF, so there's no point debating that.

What I wrote was my opinion. I wasn't ignoring anything. It wasn't meant to offend you, and I wasn't debating; just having a good educational conversation with some of our good members.

To the guys I was talking to... edplayer, slowspyder, shaq, yh125, lonyo.
Nice talking to ya, must be past 5 o'clock.
The trolls are coming. Look 2 posts up.
I'm out.
Calling other posters "trolls" does not add anything to this thread. If people post trolling replies, simply click the "report post" button. Thanks. -Admin DrPizza
 
Last edited by a moderator:

Genx87

Lifer
Apr 8, 2002
41,091
513
126
The 460 is looking to cost close to $300 and will not be faster than a 5850; who is going to buy them?

It's naive to think that ATI's new series will not make up the 15% lead the 480 has on the 5870, and then quite a bit more; when has a brand-new series not brought large performance gains?

Where have you heard it is going to cost close to 300 bucks? I saw an article that thinks it will sit between $200 and $250.

Without knowing exactly what is in the new series, it is just guessing. They could just tweak the current GPU and call it a day.

Edit: NM, didn't see the other thread. Guess we will have to wait and see. I think it will be an MSRP of $249-259, personally.
 
Last edited:

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Hilariously out-of-touch post... in case you haven't realized, Fermi currently has ZERO headroom whatsoever while ATI has ~50% headroom, and with a respin this just gets worse: ATI could essentially just release a dual-5970 single-card monster and a single-GPU card with 5970 performance, both at higher clocks, killing everything NV can pull together from this damned Fermi.

I'd love to see them try to sandwich a quad-GPU setup into a single card. And sell it for $899.
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
Where have you heard it is going to cost close to 300 bucks? I saw an article that thinks it will sit between $200 and $250.

Without knowing exactly what is in the new series, it is just guessing. They could just tweak the current GPU and call it a day.

Edit: NM, didn't see the other thread. Guess we will have to wait and see. I think it will be an MSRP of $249-259, personally.

My thread.
The projected price for the 460 is $279/$299. Total crap! :(

http://forums.anandtech.com/showthread.php?t=2070861
 

yh125d

Diamond Member
Dec 23, 2006
6,886
0
76
What I wrote was my opinion. I wasn't ignoring anything. It wasn't meant to offend you, and I wasn't debating; just having a good educational conversation with some of our good members.

To the guys I was talking to... edplayer, slowspyder, shaq, yh125, lonyo.
Nice talking to ya, must be past 5 o'clock.
The trolls are coming. Look 2 posts up.
I'm out.

T2k is like the fat kid who shows up to the party and bums everyone out.

2 point infraction. Next time, it's a vacation for personal attacks. -Admin DrPizza
 
Last edited by a moderator:

thilanliyan

Lifer
Jun 21, 2005
12,085
2,281
126
No, it's more like developing a high-performance engine and dropping it into your regular cars to spread out the R&D costs. Worse fuel mileage, more weight, higher cost, and rarely do you hear anyone complain, outside of fans of the competition.

I'm not disagreeing with your points that came before what I quoted, but how is what you describe in your analogy a good thing? Who would not complain about a regular car that has worse fuel mileage, more weight, and higher cost than a similar car that doesn't have those deficiencies? Everyone looking for a regular car should complain about that. Car makers usually give a range of engine choices, so the analogy of having one GPU fit all doesn't really work. Or maybe I'm missing something in your post?


What I wrote was my opinion. I wasn't ignoring anything. It wasn't meant to offend you, and I wasn't debating; just having a good educational conversation with some of our good members.

I posted to show you that the Fermi launch (even IF it had met original targets) would not have been like the 8800 GTX launch... that is all. Don't worry, I wasn't "offended".
 
Last edited:

rgallant

Golden Member
Apr 14, 2007
1,361
11
81
Probably. A 32nm Fermi would only be a little bigger than 40nm Cypress. You could probably pull off a 512SP, 800MHz Fermi at 32nm that maxed out at maybe 225W (rough guess, probably a bad one).

- Just a what-if.
- What are the chances NV opted not to go the 32nm route just to cripple ATI? They were ready, and NV just released on 40nm, not ready. To plan for a 32nm node and then change that... something does not smell right.
- It could be the last node for ATI before moving to GF, so the big investment would be for ATI only. NV would take the hit with crap products and live. But if ATI had come out with a new node this year, it could have put NV well behind.
- And they might not have known about the 40nm SI.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Yeah, we are in a different era. Back then there were no dual GPUs; single GPUs were king. The 7900 GX2... well, it sucked. The 3870 X2 came much later, on the ATI side anyway.

I for one don't believe that ATI was positioning the 5970 against the GTX 480.
If Fermi had hit its thermal/power/clock targets, the gamble would have paid off immensely. A dual-Fermi card would have blown ATI out of the water.
I think they are very lucky that Nvidia didn't hit the Fermi targets.
It would have been another disaster for them, like the 8800 GTX days.

You've said "if Fermi..." a few times now. Fermi is what it is. Its strengths and its weaknesses are reality now. It's fast. It's hot. It's loud. It uses a lot of power. It has great GPGPU potential. It's very likely going to be a great architecture for Nvidia to build from. Etc, etc.

I don't think it mattered back then, or matters now, what the GPU configuration is. How your games perform is just that, regardless of then or now, one GPU or two.

Fermi has its good points, but I think sometimes you might miss its negatives with the 'what ifs'... it does have more than a few negatives too. :) It's not that I think Cypress is 'better', but I think it's a more well-rounded part that is likely going to be better for a lot of people who care about more than just ultimate performance regardless of price out of a single GPU. Just my $.02.

My thread.
The projected price for the 460 is $279/$299. Total crap! :(

http://forums.anandtech.com/showthread.php?t=2070861

Were you expecting Nvidia to price something that is almost as fast (my guess based on the price) as a 5850 for $200? That's really not their style. ;)
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
What I wrote was my opinion. I wasn't ignoring anything. It wasn't meant to offend you, and I wasn't debating; just having a good educational conversation with some of our good members.

To the guys I was talking to... edplayer, slowspyder, shaq, yh125, lonyo.
Nice talking to ya, must be past 5 o'clock.
The trolls are coming. Look 2 posts up.
I'm out.

Well, when facts go against so-called opinions, it's hard to argue for the latter. ():)
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Who would not complain about a regular car that has worse fuel mileage, more weight, and higher cost than a similar car that doesn't have those deficiencies?

Go into the Garage forum and ask the posters there if they think the CTS-V would be a better car if it had the 300hp V6 instead of its current engine.

Who would not complain about a regular car that has worse fuel mileage, more weight, and higher cost than a similar car that doesn't have those deficiencies? Everyone looking for a regular car should complain about that. Car makers usually give a range of engine choices, so the analogy of having one GPU fit all doesn't really work.

A range of GPUs is like a range of engine options. How many people on this forum argue for the i3's on-package GPU as a viable alternative? It is *far* more efficient than any discrete GPU on the market. Once people come to the realization that performance/watt isn't their top priority, and really doesn't even weigh that heavily in the discussion until performance reaches a certain level, then we can have a more reasonable discussion about the trade-off. What we currently have is the guy getting 15MPG laughing at the guy getting 11MPG; both suck ass. When we enter into the discussion on a logical basis, the question becomes whether an extra ~40% power usage is worth the modest bump in performance and the big increase in GPGPU functionality. We are talking about a supercharged V8 versus a supercharged V12 and arguing about fuel economy and packaging concerns; it is downright laughable when looked at on a realistic basis.
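(To put rough numbers on that trade-off, using the thread's own approximate figures of ~15% more performance and ~40% more power; these ratios are assumptions for illustration, not measured data.)

```python
# Quick arithmetic on the trade-off described above, using the thread's own
# rough figures (~15% faster than a 5870, ~40% more power). These ratios are
# assumptions for illustration, not measured benchmark or power numbers.

perf_ratio = 1.15    # assumed GTX 480 performance relative to the HD 5870
power_ratio = 1.40   # assumed GTX 480 power draw relative to the HD 5870

perf_per_watt = perf_ratio / power_ratio
print(f"Relative performance per watt: {perf_per_watt:.2f}")
# -> ~0.82, i.e. roughly 18% worse performance/watt, which is the cost being
#    weighed against the extra GPGPU capability.
```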
 

imaheadcase

Diamond Member
May 9, 2005
3,850
7
76
People comparing power requirements have no business talking about high-end cards. Simple as that. It's like buying a fast car and crying because the engine gets hot and you use lots of gas.
 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
People comparing power requirements have no business talking about high-end cards. Simple as that. It's like buying a fast car and crying because the engine gets hot and you use lots of gas.


Weird thinking. All cars, including fast ones, brag about gas consumption if it compares well with the competition. Even Fix Or Repair Daily has learned that lesson.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
People comparing power requirements have no business talking about high-end cards. Simple as that. It's like buying a fast car and crying because the engine gets hot and you use lots of gas.

http://www.supercars.net/cars/3335.html

In the race, the advantage in fuel consumption of the Audi TDI Power was visible to the spectators too: on average, the Audi drivers only pitted every 14 laps to refuel 90 litres of Shell V-Power Diesel. The opposition, which relies on petrol engines, had to pit considerably more often.