Kitguru : Nvidia to release three GeForce GTX 800 graphics cards this October


ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
AMD path:
Moving from the HD5870 ($370) in Sept 2009 to the R9 290X ($550) in December 2013 meant that in 3 years, we got almost a 3x performance increase:
http://www.computerbase.de/2013-12/grafikkarten-2013-vergleich/10/

NV path:
Moving from the March 2010 GTX480 ($500) (using the 7850 as a reference) to the May 2013 GTX780 ($650) meant that in 3 years, we got a 2x performance increase.
http://www.computerbase.de/2013-05/nvidia-geforce-gtx-780-test/3/

Your AMD path is 4 years. And if we use the 6970 instead, which fits your 3-year number, we are suddenly down to 2.28x.

With nVidia, if we use the GTX780Ti, it's not 2x but 2.48x.

And when you consider the timeline, they are both about equal.
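
For what it's worth, normalizing both paths to a per-year rate makes the "about equal" point concrete. A rough sketch in Python (launch dates are approximate and the multiples are the ones quoted above):

```python
from datetime import date

def annual_gain(multiple, start, end):
    """Geometric per-year performance gain over a time span."""
    years = (end - start).days / 365.25
    return multiple ** (1 / years) - 1, years

# Approximate launch dates; performance multiples as quoted above.
paths = {
    "AMD HD6970 -> R9 290X": (2.28, date(2010, 12, 15), date(2013, 10, 24)),
    "NV GTX480 -> GTX780Ti": (2.48, date(2010, 3, 26), date(2013, 11, 7)),
}

for name, (mult, start, end) in paths.items():
    gain, years = annual_gain(mult, start, end)
    print(f"{name}: {mult:.2f}x over {years:.1f} years -> ~{gain:.0%}/year")
```

Both paths come out at roughly 30% per year.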
 
Last edited:

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
Ryan Smith just confirmed that the Quadro K5200 is in fact a heavily crippled GK110 with a 256-bit memory interface and a 650MHz clock.
 

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
Ryan Smith just confirmed that the Quadro K5200 is in fact a heavily crippled GK110 with a 256-bit memory interface and a 650MHz clock.

The K5000, which the K5200 replaces, is a 700MHz GK104, 1536 CUDA cores, 4GB

The K5200 is GK110, 2304 CUDA cores, 8GB

and it's STILL only 1x6pin
 

CrazyElf

Member
May 28, 2013
88
21
81
That TechReport article - remember that FLOPS do not necessarily translate into real-world performance and can't really be compared between architectures.

But what it implies is:
- 20nm TSMC is not happening at high power, as predicted
- We don't know when 16nm FinFETs will come
- The big gain in the next year will be Maxwell, and 2016 will have a smaller gain

Your AMD path is 4 years. And if we use the 6970 instead, which fits your 3-year number, we are suddenly down to 2.28x.

With nVidia, if we use the GTX780Ti, it's not 2x but 2.48x.

And when you consider the timeline, they are both about equal.


If the GPUs of 2016 are 2.25-2.5x faster than a Titan (released in early 2013), I will be very impressed indeed.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Your AMD path is 4 years.

How did you come up with that calculation?

August 16, 2009 I bought an HD4890
June 21, 2012 I bought HD7970

June 21, 2012 - August 16, 2009 = 1040 days (or 2.85 years)
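
(For reference, a quick way to check that span in Python:)

```python
from datetime import date

# Purchase dates given above
hd4890 = date(2009, 8, 16)
hd7970 = date(2012, 6, 21)

days = (hd7970 - hd4890).days
print(days, "days, or about", round(days / 365.25, 2), "years")  # 1040 days, ~2.85 years
```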

And if we use the 6970 instead, which fits your 3-year number, we are suddenly down to 2.28x.

Again, your math is wrong. The HD6970 came out Dec 15, 2010. If someone bought it right around launch, and then the HD7970 right around its retail launch on Jan 9, 2012, then it's ~2 years. An HD7970 OC is 75% faster than an HD6970 OC. The point of my post is to relate performance leaps vs. timeframe for both NV and AMD and use them as references for 880 vs. 7970. However, you keep turning my post into AMD vs. NV generational leaps. That has nothing to do with my post.

With nVidia, if we use the GTX780Ti, it's not 2x but 2.48x. And when you consider the timeline, they are both about equal.

The point of my post isn't about comparing AMD vs. NV leaps relative to each other, which is what you wanted to make out of my post. I realize that I used the 780 as a comparison, not the 780Ti, because I wanted to use 3 years as a point of reference. Obviously the move from the 480 to the 780Ti is an even greater increase in performance, but it doesn't make the comparison as valid, since now you are stretching to 3.5 years from the time the 480 came out. You can use NV or AMD, or mix and match upgrades, and pick almost any time in the last 3-4 years with a 3-year timeframe, and you'll end up with a performance increase anywhere from 2-3x vs. where we project the 880 to land, which makes it a lot less impressive.
 
Last edited:

96Firebird

Diamond Member
Nov 8, 2010
5,740
337
126
So you're using the dates you bought the cards, not when they were first available?

How does this help prove any points?

Besides, you didn't even use the example from your post that he quoted. All his numbers are correct, if you use what he quoted instead of your own 4890 to 7970 example that only pertains to you.
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
That TechReport article - remember that FLOPS do not necessarily translate into real-world performance and can't really be compared between architectures.

But what it implies is:
- 20nm TSMC is not happening at high power, as predicted
- We don't know when 16nm FinFETs will come
- The big gain in the next year will be Maxwell, and 2016 will have a smaller gain




If the GPUs of 2016 are 2.25-2.5x faster than a Titan (released in early 2013), I will be very impressed indeed.

It's a myth that 20nm can't do high power. The real reason is transistor cost. And 16FF will only cost more. Unlike previous nodes, there are no cheaper transistors for anyone so far but Intel when going below 28nm.

This is what AMD and nVidia have to deal with for GPUs:
[Image: IBS-2.jpg (IBS chart on transistor cost by node)]


Even if you shrink, say, a Maxwell GPU to 16FF at the end of 2017, it will still cost more than it does today. And by that time, it will cost 60% more than the 28nm edition. Something that hasn't happened before.

Or perhaps better expressed by Samsung:
[Image: 11635d1406145622-sfdsoi2.jpg (Samsung cost-per-transistor slide)]
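
To make the cost argument concrete, here is a minimal sketch (the per-transistor figures below are placeholders, NOT values read off the IBS or Samsung charts): if cost per transistor stops falling below 28nm, a straight shrink of the same design can only get denser, not cheaper.

```python
# Hypothetical cost per transistor, normalized to 28nm = 1.00.
# Illustrative numbers only; not taken from the charts above.
relative_cost = {"28nm": 1.00, "20nm": 1.05, "16FF": 1.60}

budget = relative_cost["28nm"]  # what the 28nm version of the chip costs today

for node, cost in relative_cost.items():
    same_design = cost / relative_cost["28nm"]  # same transistor count on the new node
    same_money = budget / cost                  # share of the transistors the old budget buys
    print(f"{node}: same design costs {same_design:.2f}x, "
          f"same money buys {same_money:.0%} of the transistors")
```

Which is the sense in which a 28nm design with more transistors ends up the cheaper option.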
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Do you even know what you wrote yourself?



That's not 3 years. That's 4 years and 3 months.

From HD5870 to GTX480 was 6 months. So the time difference is only 6 months.

So we have 4 years and 3 months for AMD products and a 3x performance increase,

and 3 years and 9 months for NVIDIA products and a 2.48x performance increase with the GTX 780 Ti.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Unlike previous nodes, there are no cheaper transistors for anyone so far but Intel when going below 28nm.

Intel calculates transistor cost in a very different way than the others. You have no idea how much Intel's 14nm process costs today.
 

CrazyElf

Member
May 28, 2013
88
21
81
The point of my post isn't about comparing AMD vs. NV leaps relative to each other, which is what you wanted to make out of my post. I realize that I used the 780 as a comparison, not the 780Ti, because I wanted to use 3 years as a point of reference. Obviously the move from the 480 to the 780Ti is an even greater increase in performance, but it doesn't make the comparison as valid, since now you are stretching to 3.5 years from the time the 480 came out. You can use NV or AMD, or mix and match upgrades, and pick almost any time in the last 3-4 years with a 3-year timeframe, and you'll end up with a performance increase anywhere from 2-3x vs. where we project the 880 to land, which makes it a lot less impressive.


@Russian Sensation

What's your best guess as to what comes out by 2016?

- Maxwell comes out Q4 this year
- Perhaps a big Maxwell later in 2015?

- Any word on what AMD is doing? HBM is the only thing I have heard.

Let's use the GTX Titan as a point of reference, so early 2013. How much faster are we looking at compared to a Titan? Assume a 550mm^2 big die. So what are we expecting by early 2016?


It's a myth that 20nm can't do high power. The real reason is transistor cost. And 16FF will only cost more. Unlike previous nodes, there are no cheaper transistors for anyone so far but Intel when going below 28nm.

I did not say "can't do". I said it was only 10% faster at 20nm than at 28nm. That, and Moore's Law has essentially reversed, as your slides show. Somewhere between 20nm and 28nm is rock bottom for price per transistor - perhaps at 28nm, as the Samsung slide implies. STMicro claims that FD-SOI could give it more life at 20nm. I'm skeptical, though.

As I said earlier, after that they're going to have to pull off some insane stuff beyond 14nm, like quadruple patterning or something along those lines, especially because EUV does not look like it's happening any time soon.

Equally important, the gains from die shrinks seem to be smaller on high-power processes than on low-power ones.


Intel calculates transistor cost in a very different way than the others. You have no idea how much Intel's 14nm process costs today.

Judging by the delays and other issues, my guess is they are having serious problems with heat, leakage, and probably die yields.

Do you know how much Intel's 14nm is costing? Your post implies you have some information that we don't.
 
Last edited:

mla

Junior Member
Jul 8, 2014
16
0
0
It's a myth that 20nm can't do high power. The real reason is transistor cost. And 16FF will only cost more. Unlike previous nodes, there are no cheaper transistors for anyone so far but Intel when going below 28nm.

This is what AMD and nVidia have to deal with for GPUs:
[Image: IBS-2.jpg (IBS chart on transistor cost by node)]


Even if you shrink, say, a Maxwell GPU to 16FF at the end of 2017, it will still cost more than it does today. And by that time, it will cost 60% more than the 28nm edition. Something that hasn't happened before.

Or perhaps better expressed by Samsung:
[Image: 11635d1406145622-sfdsoi2.jpg (Samsung cost-per-transistor slide)]

Can you give the source for the charts? Thanks.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Intel calculate transistor/cost way different than the others. You have no idea how much Intel's 14nm process cost today.

Please document this, because all the published information contradicts your statement.
 

Mand

Senior member
Jan 13, 2014
664
0
0
The performance/watt is the giveaway. Unless Pascal is on a lower node and relatively inefficient.

I don't follow. What part about a 50% FLOPS/W increase over GM200 makes you think it's 28nm?

Under what realm of possibility is it reasonable to think that the next two years won't see a node shrink? TSMC's delay was a whopping one month...
 

Fastx

Senior member
Dec 18, 2008
780
0
0
Just a small bit of info I saw, FWIW, at VC about Big Maxwell GM200 still coming this year.
Date: 8-12-2014

A very interesting scientific paper has been discovered by 3DCenter.
A paper titled “Suitability of NVIDIA GPUs for SKA1-Low” has revealed some of NVIDIA's plans for the upcoming Maxwell and Pascal architectures. This document would probably mean nothing if one of the authors weren't working for NVIDIA itself. Mike Clark, who is a Compute DevTech Engineer at NVIDIA, contributed to this research, so it would be rather shocking if the data he provided were bogus.


Maxwell GM200

The most interesting part is where GM200 pops up. According to this document, the Big Maxwell would still arrive this year. This GPU would be twice as power-efficient as GK110. Of course, this doesn't mean it's twice as fast in games.
http://videocardz.com/51195/nvidia-maxwell-gm200-pascal-gp100-confirmed-research-paper
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
I don't follow. What part about a 50% FLOPS/W increase over GM200 makes you think it's 28nm?

Under what realm of possibility is it reasonable to think that the next two years won't see a node shrink? TSMC's delay was a whopping one month...

It's 25 vs 35. That's a 40% increase, or close to what a direct shrink would give. But I assume that Pascal is an improved uarch and not just a Maxwell shrink. Not to mention nVidia has also publicly said that there are no savings per transistor below 28nm.

Nothing besides money as such prevents them from shrinking it; it's simply cheaper to have a 28nm design with more transistors than a 20nm or 16nm design, unlike how it has been historically.
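
(Quick check of that percentage, using the two figures as quoted:)

```python
gm200, pascal = 25.0, 35.0            # the FLOPS/W figures quoted above
print(f"{pascal / gm200 - 1:.0%}")    # -> 40%, not 50%
```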
 
Last edited:

Mand

Senior member
Jan 13, 2014
664
0
0
It's 25 vs 35. That's a 40% increase, or close to what a direct shrink would give. But I assume that Pascal is an improved uarch and not just a Maxwell shrink. Not to mention nVidia has also publicly said that there are no savings per transistor below 28nm.

Nothing besides money as such prevents them from shrinking it; it's simply cheaper to have a 28nm design with more transistors than a 20nm or 16nm design, unlike how it has been historically.

And what if GM200 is 20nm?
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
GM200 could easily be a smaller 20nm design, cutting off around 100 mm^2 (500ish mm^2 --> 400ish mm^2).
 

FatherMurphy

Senior member
Mar 27, 2014
229
18
81
Except there is no evidence suggesting that Nvidia would forsake its now long-established pattern of making its big chip in the 515-550 mm^2 range. A GM200 400ish mm^2 chip would be leaving too much on the table.