Kitguru : Nvidia to release three GeForce GTX 800 graphics cards this October

Page 14 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Please document this. Because all the published information contradicts your statement.

That is not how your graphs calculate cost per transistor, is it?

nwjyxj.jpg
 

Ajay

Lifer
Jan 8, 2001
16,094
8,112
136
It's 25 vs 35. That's a 40% increase, or close to what a direct shrink would give. But I assume that Pascal is an improved uarch and not just a Maxwell shrink. Not to mention, nVidia has also publicly said that there are no savings per transistor below 28nm.

While nothing besides money prevents them from shrinking it, it's simply cheaper to have a 28nm design with more transistors than a 20 or 16nm design, unlike how it has been historically.

I don't think NV can get enough xtors on a 28nm die to make "Big K", Nvidia's compute/professional series of AIBs to be competitive with 14nm Xeon Phi, so I expect a large 20nm die from them next year. They can keep their mainstream consumer dice on 28nm and possibly deliver good competitive products and prices.

The other option is that TSMC 16FF+ is closer to production than we think and NV will tough it out for ~2 years and release Pascal on that process - but that is too risky for them. AMD has said they will have 20nm GPUs next year (2015 - have no idea when).

I'm pretty sure all of NV's branding and product-locking hardware/software (G-Sync, CUDA, etc.) is part of an effort to diminish AMD as a competitor without going the route of cutting their margins. NV plans to win because, with GFX AIB sales stagnant and design and manufacturing costs going up, they need to be the last one standing to stay profitable.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
I don't think GM200 is remotely feasible on 28nm. The projected transistor count required will be way too high to be done on 28nm, IMO

As far as people stating that 20nm isn't going to be used for GPUs, I don't buy it. The higher transistor count alone will be well worth it; the density 20nm offers just isn't possible on 28nm. And both AMD and NV have stated numerous times in financial-release press interviews that they intend to pursue 20nm products. If they said that, it will happen. I don't think 20nm is as far out as some suggest; in fact, I believe Apple has 20nm products in production right now.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Judging by the delays and other issues, my guess is they are having serious problems with heat, leakage, and probably die yields.

Do you know how much Intel's 14nm is costing? Your post implies you have some information that we don't.

No, I have no idea how expensive Intel's 14nm is, and neither does anyone else in this forum. But I can tell you, and I believe anyone in the field will agree with me, that 14nm is more expensive than 22nm was at the same stage, not to mention today, after almost 3 years in production.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
I don't think NV can get enough xtors on a 28nm die to make "Big K", Nvidia's compute/professional series of AIBs to be competitive with 14nm Xeon Phi, so I expect a large 20nm die from them next year. They can keep their mainstream consumer dice on 28nm and possibly deliver good competitive products and prices.

The other option is that TSMC 16FF+ is closer to production than we think and NV will tough it out for ~2 years and release Pascal on that process - but that is too risky for them. AMD has said they will have 20nm GPUs next year (2015 - have no idea when).

I'm pretty sure all of NV's branding and product-locking hardware/software (G-Sync, CUDA, etc.) is part of an effort to diminish AMD as a competitor without going the route of cutting their margins. NV plans to win because, with GFX AIB sales stagnant and design and manufacturing costs going up, they need to be the last one standing to stay profitable.

That cost would be even higher than 16FF. And what about yield? Just look at Apple and 20nm. They still struggle with something like 60% yield on a die that is, what, 100mm2 or less? I can't imagine what a 400-500mm2 die would be. 10-15%?

Not to mention initial wafer cost through the roof.
0911CderFig2_Wafer_Cost.jpg
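As a sanity check on that guess, a first-order Poisson yield model calibrated on the rumoured 60%-at-~100mm2 point lands in the same range (a sketch only; the defect density here is implied by the rumour, not a published TSMC figure):

```python
import math

def poisson_yield(defect_density: float, die_area_mm2: float) -> float:
    """First-order Poisson yield model: Y = exp(-D * A)."""
    return math.exp(-defect_density * die_area_mm2)

# Calibrate defect density from the rumoured 60% yield on a ~100 mm^2 die.
D = -math.log(0.60) / 100.0  # defects per mm^2

# Extrapolate to a hypothetical 450 mm^2 GPU die on the same process.
big_die_yield = poisson_yield(D, 450.0)
print(f"Implied defect density: {D:.4f} /mm^2")
print(f"Projected 450 mm^2 yield: {big_die_yield:.1%}")  # roughly 10%
```

Under that crude model, the 10-15% guess for a big die is plausible; real yield models (Murphy, negative binomial) are less pessimistic for large dies, but the direction is the same.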
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
It's you claiming they are different, so I await your evidence. It's not me who should document your claims.

The Intel slide clearly uses the Capital spending times Area per Transistor to calculate the cost per transistor.

The graphs you have posted use different metrics to calculate cost per transistor. You posted them, so you should know how they were calculated. You will find they don't use Capital x Area/Transistor = Cost/Transistor. ;)
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
The Intel slide clearly uses the Capital spending times Area per Transistor to calculate the cost per transistor.

The graphs you have posted use different metrics to calculate cost per transistor. You posted them, so you should know how they were calculated. You will find they don't use Capital x Area/Transistor = Cost/Transistor. ;)

I think you should listen to the webcast with Mark Bohr. Then you would understand the graphs better.

But again, you claim they use something else, yet I don't see any evidence from you.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,112
136
That cost would be even higher than 16FF. And what about yield? Just look at Apple and 20nm. They still struggle with something like 60% yield on a die that is, what, 100mm2 or less? I can't imagine what a 400-500mm2 die would be. 10-15%?

Not to mention initial wafer cost through the roof.
0911CderFig2_Wafer_Cost.jpg

Yeah, Apple is around 70%-80% now, according to something I recently read in these forums (don't know where exactly, atm). I'm sure TSMC will be able to hit those numbers for large dies within 6 months (they now have three process R&D shifts working 24x7 on 20nm yields and 16FF development).

As far as wafer costs, I've seen several different graphs now with a variation of ~$5.7K to $10K/wafer @ 20nm. The IBS graph uses initial wafer pricing - for all I know this is risk production, prior to HVM, and hence the reason for the exorbitant costs. If 16FF+ had an HVM cost of $16K+/wafer, nobody would be using it (and yet most of TSMC's customers will skip 16FF in favor of 16FF+). So, while I expect costs are likely to go up, I don't expect they will come close to the values listed by IBS once a process matures enough for HVM (and as yields go up, more designs will go into HVM, and that will push prices even lower).
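For a rough sense of how wafer price and yield translate into cost per good die, here is a back-of-the-envelope sketch (the wafer-price bounds come from the post; the die size and yield are illustrative guesses):

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Standard gross-dies approximation: pi*r^2/A minus an edge-loss term."""
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

def cost_per_good_die(wafer_cost: float, die_area_mm2: float, yield_frac: float,
                      wafer_diameter_mm: float = 300.0) -> float:
    """Wafer cost divided over the yielded dies."""
    gross = dies_per_wafer(wafer_diameter_mm, die_area_mm2)
    return wafer_cost / (gross * yield_frac)

# Illustrative: a 350 mm^2 GPU die at a guessed 60% mature yield,
# against the post's $5.7K and $10K per-wafer bounds.
for wafer_cost in (5700, 10000):
    print(f"${wafer_cost}/wafer -> ${cost_per_good_die(wafer_cost, 350, 0.60):.0f} per good die")
```

The spread in the published wafer-price graphs alone nearly doubles the implied die cost, which is why risk-production versus HVM pricing matters so much in these comparisons.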

So double patterning isn't killing any HVM manufacturer. Quad patterning (becoming more likely unless there is a breakthrough in EUV) will make matters worse. From what I understand, Quad patterning will need much more expensive masks with extremely tight tolerances, designed via very complex EM algorithms.
 

jdubs03

Golden Member
Oct 1, 2013
1,257
889
136
I don't think GM200 is remotely feasible on 28nm. The projected transistor count required will be way too high to be done on 28nm, IMO

As far as people stating that 20nm isn't going to be used for GPUs, I don't buy it. The higher transistor count alone will be well worth it; the density 20nm offers just isn't possible on 28nm. And both AMD and NV have stated numerous times in financial-release press interviews that they intend to pursue 20nm products. If they said that, it will happen. I don't think 20nm is as far out as some suggest; in fact, I believe Apple has 20nm products in production right now.

I don't think NV can get enough xtors on a 28nm die to make "Big K", Nvidia's compute/professional series of AIBs to be competitive with 14nm Xeon Phi, so I expect a large 20nm die from them next year.

Agreed, there has to be a reduction in die size, or at least a density increase, for there to be the performance gains that Maxwell will need. Nvidia can only take 28nm so far, and it is obviously at its end as a leading-edge process. A 20nm GTX 980 should be out next year, with a 16FF small-Pascal(?) in 2016.

I expect to see an announcement of some sort this year or early next year about a Tesla M20 or whatever they'll call it. They're going to need 3-4 TFLOPS DP to challenge Xeon Phi. GK110 was announced 11/12 and the GTX 780 was out 05/13. Before we see a GTX 980, we'll almost certainly see a Tesla-Maxwell announcement. I'm skeptical of seeing a GM200 right now, but hopefully I'm wrong.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
I think you should listen to the webcast with Mark Bohr. Then you would understand the graphs better.

But again, you claim they use something else, yet I don't see any evidence from you.

Let me remind you of your own post.

And 16FF will only cost more. Unlike previous nodes, there are no cheaper transistors for anyone so far but Intel when going below 28nm.

This is what AMD and nVidia have to deal with for GPUs:
IBS-2.jpg


Even if you shrink, say, a Maxwell GPU to 16FF at the end of 2017, it will still cost more than it does today. And by that time, it will cost 60% more than the 28nm edition. Something that hasn't happened before.

Or perhaps better expressed by Samsung:
11635d1406145622-sfdsoi2.jpg

And here is how Intel measures cost/transistor (Mark Bohr used the same graph in his 14nm presentation)

nwjyxj.jpg


It is funny that you quote graphs without even knowing what they represent.

Well, from the start, they don't measure the same thing. Your graph shows gate cost per 100M gates or transistors. But that is not the biggest difference between the two.
The IBS graph scales with time; Intel's graph is static. That is because the IBS graphs you quoted also factor in yields and process subsidization, while Intel's graph uses only static metrics to calculate cost per transistor.

So the two graphs use different metrics, and the end results cannot be directly compared.
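To see why the two metrics can diverge, here's a toy calculation (all numbers invented for illustration; neither Intel's nor IBS's actual inputs appear in the thread):

```python
# Hypothetical illustration of the two cost-per-transistor metrics being debated.

def static_cost_per_transistor(capital_per_mm2: float,
                               area_per_transistor: float) -> float:
    """Intel-style static metric: capital spending x area per transistor."""
    return capital_per_mm2 * area_per_transistor

def yield_adjusted_cost(capital_per_mm2: float, area_per_transistor: float,
                        yield_frac: float, subsidy_frac: float = 0.0) -> float:
    """IBS-style metric: same product, but divided by yield and net of subsidies."""
    return capital_per_mm2 * area_per_transistor * (1 - subsidy_frac) / yield_frac

# A node that looks cheaper on the static metric can still look more
# expensive early in its life, while yields are low (made-up numbers):
static_28 = static_cost_per_transistor(1.0, 1.0)   # normalised 28nm baseline
static_16 = static_cost_per_transistor(1.6, 0.5)   # pricier wafers, ~2x denser
early_16  = yield_adjusted_cost(1.6, 0.5, yield_frac=0.5)  # immature yields
mature_16 = yield_adjusted_cost(1.6, 0.5, yield_frac=0.9)  # mature yields
print(static_28, static_16, early_16, mature_16)
```

With these invented inputs, the static metric says the new node is cheaper per transistor, while the yield-adjusted metric says it is more expensive early on and only gets cheaper as yields mature - which is exactly why the two graph families cannot be overlaid.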

Hope that was useful to you and others. ;)
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Let me remind you of your own post.



And here is how Intel measures cost/transistor (Mark Bohr used the same graph in his 14nm presentation)



It is funny that you quote graphs without even knowing what they represent.

Well, from the start, they don't measure the same thing. Your graph shows gate cost per 100M gates or transistors. But that is not the biggest difference between the two.
The IBS graph scales with time; Intel's graph is static. That is because the IBS graphs you quoted also factor in yields and process subsidization, while Intel's graph uses only static metrics to calculate cost per transistor.

So the two graphs use different metrics, and the end results cannot be directly compared.

Hope that was useful to you and others. ;)

So you don't know at all. As I told you, you should have listened to the webcast by Mark Bohr. Intel gets much lower transistor cost from day one. The same can't be said about TSMC or Samsung below 28nm.

Also, 20nm and 16/14nm never drop below 28nm in those charts. That's something we haven't seen before in history either. So unlike 40nm and above, 28nm will keep giving cheaper gates than 20nm, 14/16nm and so on.
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Yeah, Apple is around 70%-80% now, according to something I recently read in these forums (don't know where exactly, atm). I'm sure TSMC will be able to hit those numbers for large dies within 6 months (they now have three process R&D shifts working 24x7 on 20nm yields and 16FF development).

As far as wafer costs, I've seen several different graphs now with a variation of ~$5.7K to $10K/wafer @ 20nm. The IBS graph uses initial wafer pricing - for all I know this is risk production, prior to HVM, and hence the reason for the exorbitant costs. If 16FF+ had an HVM cost of $16K+/wafer, nobody would be using it (and yet most of TSMC's customers will skip 16FF in favor of 16FF+). So, while I expect costs are likely to go up, I don't expect they will come close to the values listed by IBS once a process matures enough for HVM (and as yields go up, more designs will go into HVM, and that will push prices even lower).

So double patterning isn't killing any HVM manufacturer. Quad patterning (becoming more likely unless there is a breakthrough in EUV) will make matters worse. From what I understand, Quad patterning will need much more expensive masks with extremely tight tolerances, designed via very complex EM algorithms.

Last I saw was 60%.

Double patterning is already the reason transistor cost is higher today than on the previous node. TSMC customer(s) will skip 16FF because TSMC lost Apple to Samsung on that node ;)

What customers have confirmed they will skip it and use 16FF+? Broadcom, for example, have outright said they won't go below 28nm.

Not to mention, who is going to make those chips on it instead of 28nm if it's not specifically for the electrical properties? By the time 16FF+ does or doesn't emerge, the gate cost may be twice as high as 28nm's. And a GPU scales better with more gates than with better electrical properties.
 
Last edited:

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
I do think it is funny to look at the inevitable 20nm over the horizon, especially considering the almost total consensus on these forums, as well as others, that Maxwell et al. were never going to 20nm, while both NV and AMD have stated that 2015 will be the year of 20nm.

But even beyond that, simple logic dictates that 20nm was the logical choice, because 16/14nm isn't even close to ready yet - and would both of those companies really stick with 28nm for another year?

That being said, it would be astounding if we saw "big Maxwell" for desktops at 20nm any time soon. The mobile cards sound like a more probable possibility, and the rumors more or less indicate that, too.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Nobody says they can't go 20nm. The problem is they can't go 20nm and get better price/performance on a per-transistor cost basis.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Your comparison doesn't make sense. The GTX480 is more future-proof than the 5870, so the difference is much smaller than between the 5870 and 290X.

On the other hand: AMD went from 180W to >300W while nVidia stayed at ~260W.

Yet another person who completely missed the point of my post. Obviously the 480 was more future-proof than the 5870 due to higher tessellation performance, which is why the performance increase was not the same as going from the 5870 to the 290X. That's NOT the point of my post. The point is that whether you went all AMD, all NV, or mixed and matched during the last 3 years, you would have gotten 2-3x the performance increase.

For example:

HD6970 -> GTX780Ti
HD5870 -> R9 290X
HD5870 -> GTX780
GTX480 -> GTX780
GTX580 -> GTX780Ti

You get the point now? In any of those upgrade paths, in 3 years, you would have gotten 2-3x the performance increase, not 50-55%!

All I am doing is comparing historical performance leaps in recent years from AMD and NV vs. what the 880 is supposed to bring vs. 7970Ghz and they all paint the same picture - 880 at $400 with a 50-55% increase over GTX680/7970Ghz is very disappointing.

RS, any opinion on how long after the 880's release it will be before the 880Ti is released?

No plans/thoughts to get the 880Ti unless pricing is good, but I am curious per the above question. It's funny - I almost went with the 780 ($650) when it was first released, but after reading one of your posts back then on its pricing (at $650), I decided not to. I CF'd my 7950s instead and never regretted it, and only sold them due to the mining craze, for a healthy profit.

Thanks for that post that made me decide against the 780 at $650.00! :)

I can only go based on existing rumors and they are vague. The rumors are that NV will launch GM200 gaming card sometime in 2015, likely 2H of 2015. I am glad my post made you change your mind on the 780 at $650 and you lucked out selling the 7950s during the height of the mining craze.

3DCenter notes that some research paper shows GM200 for professionals is slated soon, which would suggest that GM200/210 for consumer/gaming GPUs has a strong shot at launching by the end of 2015.
http://techreport.com/news/26907/research-paper-reveals-future-nvidia-gpus

Is this assuming 28nm still? I don't see them making this chip on 28nm because they don't have the ability to make the die much bigger. Assuming 20nm and traditional Nvidia flagship performance upgrades between nodes, I would venture more like 75%+ from 780ti

Well you can play around with the numbers. If GM200 is going to be even better than 50% faster than 780Ti, then it becomes even more favourable in the upgrade path. I used 50% as a conservative increase and even with "just" 50%, a $650 GM200 is already a superior value than an 880 with a 10% boost over 780TI at $400 for 770/7970Ghz users.

That's an interesting comparison there.

The time frame for AMD's 5870 to R9 290x is more than 4 years. The time frame from GTX480 to GTX780 is a full year less. Also, AMD's power usage has shot way up in those comparisons, while Nvidia's actually went down.

You are right. I made a 1 year mistake on the 5870 vs. 290X. Let's substitute HD5870 with HD6970 then and it still makes 880 look underwhelming:

Dec 2010 = 6970 = 100%
Oct 2013 = R9 290X = 220-229%
http://www.computerbase.de/2013-12/grafikkarten-2013-vergleich/10/

So in roughly 3 years, a 2.2-2.3x increase.

I know you're comparing "flagship" products prices, but AMD went from having a significant lead time in architecture advancement and way more efficient numbers, to actually falling behind in efficiency and even being passed up architecture replacements. Does that trend stop or reverse? If not, are we looking at AMD entirely relegating (and accepting) themselves to second fiddle in the GPU market in 18 months?

This isn't really about AMD vs. NV, but how things look for a PC gamer in terms of his/her upgrade path and where 880 falls historically. For example, if I bought an HD5870 (Sept 2009) and in less than 3 years upgraded to HD7970 Ghz (June 2012), I would have gotten 2.3x the increase in performance. An HD6970 user who bought that card Dec 2010 and upgraded to R9 290X would have gotten 2.3x increase as well. Even going from GTX480 to HD7970Ghz would have netted a 73% increase in performance in just 2 years and 2x the performance increase going to a GTX780 in 3 years timeframe from the time 480 launched.

The point is in the last 3 years, the pace of progress was far faster than what 880 is bringing to the table for users with nearly 3 year old GPUs such as a 7970. If the pace was roughly the same, I should be able to buy a card 2-2.5x faster than mine very soon for $550, but there is no such card until 2015. Now you can make the argument that NV is prioritizing power consumption over performance but PC gamers upgrade for more performance and features first and foremost. I am not going to buy a $400-500 card just to save 70W of power. That's not my primary motivation. That's why based on my back of the envelope quick calculation it seems buying a 50%+ faster GM200 for $650 is a better value than a hypothetical 880 for $400 that's only 10% faster than 780TI.

Maybe Blackened is right and 880 ends up > 10% faster than 780Ti but so far it's not looking good when sites like KitGuru claim that they have physically seen the die and it's a 300mm2 die and the card is not even going to reach 780TI speeds.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
So you're using the dates you bought the cards, not when they were first available?

How does this help prove any points?

Besides, you didn't even use the example from your post that he quoted. All his numbers are correct, if you use what he quoted instead of your own 4890 to 7970 example that only pertains to you.

I see what he meant. That AMD path spans 4 years, and I made a mistake on it, but you, sontin and Shintai all missed the key points of those comparisons. Even if you noticed this mistake, you could have easily gotten the point I was making:

HD5870 (100%)
R9 290X (295%)
In those 4 years, performance increased roughly 3x - taken linearly, about 0.75x of that total per year. Over 3 years, then, the increase works out to 2.25x, which actually matches other upgrade paths such as HD6970 -> R9 290X. The 880 is clearly not meeting expectations of a 2-2.25x performance increase over the 680/7970, which is why 680/7970 users will have to keep waiting for GM200 to get that increase in speed.
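That per-year arithmetic can be sketched in a few lines (a toy calculation of the post's linear scaling, with a compound reading shown for contrast; the 3x/4-year figure is the post's, not a benchmark):

```python
def linear_scale(total_multiple: float, years_observed: float,
                 years_target: float) -> float:
    """Scale a total performance multiple linearly with time,
    as the post does (3x over 4 years -> 2.25x over 3 years)."""
    return total_multiple / years_observed * years_target

# HD 5870 -> R9 290X: ~3x over ~4 years, rescaled to a 3-year window.
three_year_multiple = linear_scale(3.0, 4.0, 3.0)
print(f"Linear 3-year multiple: {three_year_multiple:.2f}x")  # 2.25x

# For contrast, a compound (CAGR-style) reading of the same data:
cagr_multiple = 3.0 ** (3.0 / 4.0)
print(f"Compound 3-year multiple: {cagr_multiple:.2f}x")  # ~2.28x
```

Conveniently, linear and compound readings of 3x-over-4-years land within a few percent of each other at the 3-year mark, so the ~2.25x expectation holds either way.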

I outlined many other upgrade paths going only AMD, going only NV and mixing and matching GPU upgrade paths where clearly the performance jumps in the last 2.5-3 years are far greater than what the rumoured 880 is supposed to bring. You guys can defend 880 if you want but the fact remains if a PC gamer is on a 3 year upgrade path, if he upgraded in the last 3 years (whether it was from 480 to 780 or 5870 to 7970Ghz or from 6970 to R9 290X or from 580 to 780Ti or from 6970 to 780, etc.), this gamer would have gotten an increase in performance of at least 2x.

Now some gamer out there with a GTX680/7970 is looking to upgrade every 3 years and that 3 year mark is coming up. Does it look like 880 will be 90-100% faster than those cards? If not, 880 is falling short of expectations.

--

TL;DR -- You bought Card 1 three years ago, and based on historical leaps, performance easily doubles every 3 years from one generation to the next. You see Card 2 about to launch nearly 3 years after you bought Card 1, but it falls well short of doubling the performance, yet it's marketed as a "flagship". What positives can be said about Card 2 unless it's significantly cheaper than Card 1 was?
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Maybe Blackened is right and 880 ends up > 10% faster than 780Ti but so far it's not looking good when sites like KitGuru claim that they have physically seen the die and it's a 300mm2 die and the card is not even going to reach 780TI speeds.

If it's only 300 mm^2, then I don't see how it can be as fast as GTX 780 TI, let alone faster. Common sense dictates it's significantly larger than 300mm^2. But if it ends up being only 10-15% faster, then I think GM200 is on 28nm and is coming shortly after in the form of 880 TI and Titan M.

Interestingly, Charlie said there were going to be no less than FOUR GM204 SKUs. Could GPU-Z or the leaker at Coolaler have gotten the model wrong, or will GM204 exist as the GTX 880 Ti, 880, 870, and 860?
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Yet another person who completely missed the point of my post. Obviously the 480 was more future-proof than the 5870 due to higher tessellation performance, which is why the performance increase was not the same as going from the 5870 to the 290X. That's NOT the point of my post. The point is that whether you went all AMD, all NV, or mixed and matched during the last 3 years, you would have gotten 2-3x the performance increase.

For example:

HD6970 -> GTX780Ti
HD5870 -> R9 290X
HD5870 -> GTX780
GTX480 -> GTX780
GTX580 -> GTX780Ti

You get the point now? In any of those upgrade paths, in 3 years, you would have gotten 2-3x the performance increase, not 50-55%!

All I am doing is comparing historical performance leaps in recent years from AMD and NV vs. what the 880 is supposed to bring vs. 7970Ghz and they all paint the same picture - 880 at $400 with a 50-55% increase over GTX680/7970Ghz is very disappointing.

No, what you're doing is taking random numbers and making a comparison.
Between the GTX480 and the 290X there is 2x the performance and a 3 3/4-year timeframe.

Between a GTX880 and a 7970 there will be 3 years and a ~2x performance gap.

Using the 5870 doesn't make sense because its architecture was already outdated back in 2010. A GTX480 is 60% faster in BF4 than the 5870...
 
Last edited:

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Yeah, the 870 you linked there is faster than the 780 - and it's doing that with ~1600 cores, roughly 700 fewer than the 780. How many cores was the 880 reported to have? 2600-ish, wasn't it? I don't recall.

I would bet money at this point that the 880 will be at least 25% faster than the 780Ti, based on the point spreads in 3DMark 11. I just can't wait for it to launch; it will be quite interesting. I can't see NV releasing a part with the same performance as the prior gen. Just... why? It wouldn't make sense - it wouldn't excite the market to have a new GPU that's the same as the prior gen. But if they get sizable gains, that could excite people, depending on the gains. My impression is that NV was never about price/performance; they aren't out to win price wars. They want to be #1 in performance and features and to drive experiences, particularly with software. They've never struck me as a company willing to settle for less than that. A few missteps along the way, but for the most part AMD has been the price/performance company, simply because the market refuses to pay premiums for AMD, while NV has always sought to be the halo GPU vendor.

I mean, the 580 was Fermi redone, and it was what... 15-20% faster? And that was essentially the GTX 480 fixed up with everything enabled. It wasn't a new architecture, nor was it a GK104-to-GK110 type of transition.

As far as the price goes... $400? I don't buy it. My prediction based on leaks at this point is 25% better than the 780Ti with a $550-600 price tag. The 870 will potentially be a sweet-spot part at $400, I'm guessing. We'll see, though. If the predictions of 5-10% are accurate, then a $450 price tag could be plausible. Who knows.

Oh yeah, 20nm can't get here soon enough. It's pretty obvious at this point that the transistor gains with 28nm are quite limited.
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
So you don't know at all. As I told you, you should have listened to the webcast by Mark Bohr.

Pages 34, 35 and 36 are Mark Bohr's slides for calculating cost per transistor. They are exactly the same as the one I used earlier.

http://download.intel.com/newsroom/kits/14nm/pdfs/Intel_14nm_New_uArch.pdf

Intel's calculations do not use yields and process subsidization like the IBS slides do, so the two are not directly comparable.
You cannot claim that only Intel has lower cost per transistor from 22nm to 14nm while TSMC and the others have higher cost at 14/16nm than at 28nm, when the two claims rest on different formulas.
The sooner you realize that, the better for you and the rest of the forum.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
The GTX480 had 480 shaders; the GTX580 has 512. The GTX480 was not a complete die - it was harvested.
The GTX580 also had a higher frequency and more texture units.
 

96Firebird

Diamond Member
Nov 8, 2010
5,740
337
126
I just can't see NV releasing a part with the same performance as the prior gen. Just... why? It wouldn't make sense - it wouldn't excite the market to have a new GPU that's the same as the prior gen.

I could see them doing it if they can harvest more dies from a single wafer during production. The consumer gets a smaller-die chip that performs the same but uses less power; Nvidia gets more dies per wafer, which decreases their cost per die.
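As a rough sketch of that economics argument (using the standard gross-dies-per-wafer approximation; GK110's ~561 mm2 is public, while the ~300 mm2 figure is only the thread's rumour):

```python
import math

def gross_dies(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Gross (unyielded) dies per wafer: pi*r^2/A minus an edge-loss term."""
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

# ~561 mm^2 GK110 vs the rumoured ~300 mm^2 die, on a 300 mm wafer.
big, small = gross_dies(300, 561), gross_dies(300, 300)
print(big, small)  # the smaller die roughly doubles the candidates per wafer
```

And since smaller dies also yield better at a given defect density, the per-good-die cost advantage is even larger than the raw gross-die count suggests.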

However, I still think the 880 will be faster than the 780Ti.