AtenRa
Lifer

Please document this. Because all the published information contradicts your statement.
It's 25 vs 35. That's a 40% increase, or close to what a direct shrink would give. But I assume Pascal is an improved uarch and not just a Maxwell shrink. Not to mention nVidia has also publicly said that there are no savings per transistor below 28nm.
While nothing besides money prevents them from shrinking it, it's simply cheaper to have a 28nm design with more transistors than a 20nm or 16nm design, unlike how it has been historically.
Judging by the delays and other issues, my guess is they are having serious problems with heat, leakage, and probably die yields.
Do you know how much Intel's 14nm is costing? Your post implies you have some information that we don't.
That is not how your graphs calculate cost per transistor, is it?
I don't think NV can get enough xtors on a 28nm die to make "Big K", Nvidia's compute/professional series of AIBs, competitive with 14nm Xeon Phi, so I expect a large 20nm die from them next year. They can keep their mainstream consumer dies on 28nm and possibly deliver good, competitive products and prices.
The other option is that TSMC 16FF+ is closer to production than we think and NV will tough it out for ~2 years and release Pascal on that process - but that is too risky for them. AMD has said they will have 20nm GPUs next year (2015 - have no idea when).
I'm pretty sure all of NV's branding and product-locking hardware/software (G-Sync, CUDA, etc.) is part of an effort to diminish AMD as a competitor without going the route of cutting their margins. NV plans to win, because with GFX AIB sales stagnant and design and manufacturing costs going up, they need to be the last one standing to stay profitable.
It's you claiming they are different, so I await your evidence. It's not me who should document your claims.
The Intel slide clearly uses Capital spending times Area per Transistor to calculate the cost per transistor.
The graphs you have posted use different metrics to calculate cost per transistor. You posted them, so you should know how they were calculated. You will find they don't use Capital x Area/Transistor = Cost/Transistor.
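For anyone following along, here is a rough sketch (in Python, with made-up numbers) of the static metric being described: capital spend per unit of wafer area multiplied by silicon area per transistor. The function name and every figure below are placeholders, not real Intel or TSMC data.

```python
# Hypothetical sketch of the static cost-per-transistor metric described above:
# wafer processing cost per mm^2 times silicon area per transistor.
# All numbers are invented placeholders, not real Intel/TSMC figures.

def cost_per_transistor(cost_per_mm2, area_per_transistor_mm2):
    """Static metric: cost per unit wafer area times area per transistor."""
    return cost_per_mm2 * area_per_transistor_mm2

# If a shrink halves the area per transistor while cost per mm^2 rises by less
# than 2x, cost per transistor still falls; if cost per mm^2 more than doubles,
# it rises -- which is the whole argument in this thread.
old_node = cost_per_transistor(cost_per_mm2=0.10, area_per_transistor_mm2=1.0e-6)
new_node = cost_per_transistor(cost_per_mm2=0.16, area_per_transistor_mm2=0.5e-6)
print(old_node, new_node, new_node / old_node)  # ratio < 1 means cheaper transistors
```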
That cost would be even higher than 16FF. And what about yield? Just look at Apple and 20nm. They still struggle with something like 60% yield on a die that is what, 100mm2 or less? I can't imagine what a 400-500mm2 die would be. 10-15%?
Not to mention initial wafer cost through the roof.
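To put rough numbers on that yield worry, here is a quick Python sanity check using a simple Poisson defect model (yield = exp(-D0 × area)). The ~60% on ~100mm² figure is taken from the post above; the model choice, wafer assumptions, and everything else are assumptions, not foundry data.

```python
import math

# Quick sanity check of the yield guess above, using a simple Poisson defect
# model: yield = exp(-D0 * die_area). The ~60% yield on a ~100 mm^2 die is the
# figure quoted above; real foundry models (Murphy, negative binomial) and
# actual defect densities will differ.

small_area = 100.0   # mm^2, roughly a mobile SoC
small_yield = 0.60   # quoted above
d0 = -math.log(small_yield) / small_area   # implied defect density per mm^2

for big_area in (400.0, 500.0):            # big-GPU-class die sizes
    big_yield = math.exp(-d0 * big_area)
    print(f"{big_area:.0f} mm^2 -> ~{big_yield:.0%} estimated yield")

# Comes out around 13% and 8%, i.e. in the same ballpark as the 10-15% guess.
```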
I don't think GM200 is remotely feasible on 28nm. The projected transistor count required will be way too high to be done on 28nm, IMO
As far as people stating that 20nm isn't going to be used for GPUs, I don't buy it. The higher transistor count alone will be well worth it, what 20nm offers in density just isn't possible on 28nm. And both AMD and NV have stated numerous times in financial release press interviews that they intend to pursue 20nm products. If they said that, it will happen. I don't think 20nm is so far out as some suggest, in fact I believe Apple has 20nm products in production right now.
I don't think NV can get enough xtors on a 28nm die to make "Big K", Nvidia's compute/professional series of AIBs, competitive with 14nm Xeon Phi, so I expect a large 20nm die from them next year.
I think you should listen to the webcast with Mark Bohr. Then you would understand the graphs better.
But again, you claim they use something else, and I don't see any evidence from you.
And 16FF will only cost more. Unlike previous transitions, there are no cheaper transistors for anyone so far but Intel when going below 28nm.
This is what AMD and nVidia have to deal with for GPUs:
[chart: gate cost per 100M gates/transistors by process node]
Even if you shrink, say, a Maxwell GPU to 16FF at the end of 2017, it will still cost more than it does today. And by that time it will cost 60% more than the 28nm edition, something that hasn't happened before.
Or perhaps better expressed by Samsung:
[Samsung slide]
Let me remind you of your own post.
And here is how Intel measures cost/transistor (Mark Bohr used the same graph in his 14nm presentation)
It is funny that you quote graphs without even knowing what they represent.
Well, from the start, they don't measure the same thing. Your graph shows Gate Cost per 100M gates or transistors, but that is not the biggest difference between the two.
The IBS graph scales with time; Intel's graph is static. That is because the IBS graphs you quoted also factor in yields and process subsidization, while Intel's graph only uses static metrics to calculate cost per transistor.
So the two graphs are using different metrics, and the end results cannot be compared directly between the two.
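To illustrate the difference in plain terms, here is a toy Python comparison of a static capital-times-area metric versus one that divides by a yield that improves over time, which is roughly what the time-dependent IBS-style numbers fold in. Every figure below is invented purely for illustration.

```python
# Toy illustration (invented numbers) of why a metric that folds in yield over
# time diverges from a static capital-times-area metric.

def static_cost(cost_per_mm2, area_per_transistor):
    # "Intel slide" style as described above: no time dependence, no yield term.
    return cost_per_mm2 * area_per_transistor

def yielded_cost(cost_per_mm2, area_per_transistor, process_yield):
    # "IBS" style as described above: only good transistors count,
    # so effective cost scales with 1 / yield and falls as the process ramps.
    return cost_per_mm2 * area_per_transistor / process_yield

node = dict(cost_per_mm2=0.16, area_per_transistor=0.5e-6)
for quarter, y in enumerate([0.30, 0.50, 0.70, 0.85], start=1):
    print(quarter, static_cost(**node), yielded_cost(**node, process_yield=y))
# The static column stays flat; the yielded column drops every quarter.
```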
Hope that was useful to you and others.
Yeah, Apple is around 70%-80% now, according to something I recently read in these forums (don't know where exactly, atm). I'm sure TSMC will be able to hit those numbers for large dies within 6 months (they now have three process R&D shifts working 24x7 on 20nm yields and 16FF development).
As far as wafer costs, I've seen several different graphs now with a variation of ~$5.7K to $10K/wafer @ 20nm. The IBS graph uses initial wafer pricing - for all I know this is risk production, prior to HVM, and hence the reason for the exorbitant costs. If 16FF+ had an HVM cost of $16K+/wafer, nobody would be using it (and yet, most of TSMC's customers will skip 16FF in favor of 16FF+). So, while I expect costs are likely to go up, I don't expect they will come close to the values listed by IBS once a process matures enough for HVM (and as yields go up, more designs will go into HVM and that will push prices even lower).
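For a feel of what those wafer prices mean per chip, here is a back-of-the-envelope Python sketch using the common dies-per-wafer approximation together with an assumed die size and yield; none of these are actual foundry or NV/AMD numbers.

```python
import math

# Back-of-the-envelope cost per good die from the wafer prices discussed above.
# Die size and yield are assumptions; the dies-per-wafer formula is the common
# approximation (wafer area / die area, minus an edge-loss term), not any
# foundry's actual figure.

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_price_usd, die_area_mm2, die_yield):
    good_dies = dies_per_wafer(die_area_mm2) * die_yield
    return wafer_price_usd / good_dies

for wafer_price in (5700, 10000):   # the $5.7K-$10K range mentioned above
    price = cost_per_good_die(wafer_price, die_area_mm2=350, die_yield=0.6)
    print(f"${wafer_price}/wafer -> ~${price:.0f} per good 350 mm^2 die at 60% yield")
```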
So double patterning isn't killing any HVM manufacturer. Quad patterning (becoming more likely unless there is a breakthrough in EUV) will make matters worse. From what I understand, Quad patterning will need much more expensive masks with extremely tight tolerances, designed via very complex EM algorithms.
Your comparison doesn't make sense. The GTX480 is more future-proof than the 5870, so the difference is much smaller than between the 5870 and 290X.
On the other hand: AMD went from 180W to >300W while nVidia stayed at ~260W.
RS, any opinion on how long after the 880 release would it be before the 880Ti is released?
No plans/thoughts to get the 880Ti unless pricing is good, but I am curious per the above question. It's funny, I almost went with the 780 ($650) when it was first released, but after reading one of your posts on pricing back then (at $650), I decided not to. I CF'd my 7950 instead and never regretted it, and only sold due to the mining craze for a healthy profit.
Thanks for that post that made me decide against the 780 at $650.00!
Is this assuming 28nm still? I don't see them making this chip on 28nm because they don't have the ability to make the die much bigger. Assuming 20nm and traditional Nvidia flagship performance upgrades between nodes, I would venture more like 75%+ over the 780Ti.
That's an interesting comparison there.
The time frame for AMD's 5870 to R9 290x is more than 4 years. The time frame from GTX480 to GTX780 is a full year less. Also, AMD's power usage has shot way up in those comparisons, while Nvidia's actually went down.
I know you're comparing "flagship" product prices, but AMD went from having a significant lead time in architecture advancement and far better efficiency numbers, to actually falling behind in efficiency and even being passed up on architecture replacements. Does that trend stop or reverse? If not, are we looking at AMD entirely relegating (and accepting) themselves to second fiddle in the GPU market in 18 months?
So you're using the dates you bought the cards, not when they were first available?
How does this help prove any points?
Besides, you didn't even use the example from your post that he quoted. All his numbers are correct, if you use what he quoted instead of your own 4890 to 7970 example that only pertains to you.
Maybe Blackened is right and the 880 ends up >10% faster than the 780Ti, but so far it's not looking good when sites like KitGuru claim they have physically seen the die, that it's a 300mm2 die, and that the card is not even going to reach 780Ti speeds.
Yet another person who completely missed the point of my post. Obviously the 480 was more future-proof than the 5870 due to higher tessellation performance, which is why the performance increase was not the same as going from the 5870 to the 290X. That's NOT the point of my post. The point is that if you went all AMD, all NV, or mixed and matched during the last 3 years, you would get 2-3x the performance increase no matter whether you chose AMD or NV along the way.
For example:
HD6970 -> GTX780Ti
HD5870 -> R9 290X
HD5870 -> GTX780
GTX480 -> GTX780
GTX580 -> GTX780Ti
You get the point now? In any of those upgrade paths, in 3 years, you would have gotten 2-3x the performance increase, not 50-55%!
All I am doing is comparing historical performance leaps in recent years from AMD and NV vs. what the 880 is supposed to bring vs. 7970Ghz and they all paint the same picture - 880 at $400 with a 50-55% increase over GTX680/7970Ghz is very disappointing.
So you don't know at all. As I told you, you should have listened to the webcast by Mark Bohr.
I just can't see NV releasing a part with the same performance as the prior gen. Just... why? I wouldn't understand it, as it doesn't make sense... it wouldn't really excite the market to have a new GPU that's the same as the prior gen.