[PCgamer] Pascal to be 10x faster than Maxwell!

poohbear

Platinum Member
Mar 11, 2003
2,284
5
81
Pascal is shaping up to be a beast! 3 years in development, and Nvidia's CEO says they expect it to be 10x more powerful than Maxwell, so here's hoping 4K gaming becomes a reality by the end of next year! Link below for the video and summary:

http://www.pcgamer.com/nvidias-pascal-is-10x-as-powerful-as-the-titan-x/

However, they're saying you'd need a new mobo & even a new CPU to support their NVLink interface, which would replace PCI-Express & really let Pascal fly, so I'm not sure I like that route... just bought my current build last summer and was hoping the CPU & mobo would stay with me for 3 years. :p If it's the GPU revolution they're claiming it is, then I might finally get excited about the desktop scene again.

Please remember to include source in title. Fixed for you. -Shmee
 
Last edited by a moderator:

alcoholbob

Diamond Member
May 24, 2005
6,271
323
126
JHH said 3x more FP performance. Take out HBM and the 16FF process node and maybe you have half that much increase in real efficiency. Not to mention he might be quoting double-precision numbers to inflate the difference over Maxwell even more.
 

S.H.O.D.A.N.

Senior member
Mar 22, 2014
205
0
41
I have to admit, the sensationalist (and senseless) title gave me a chuckle. I'm referring to the PCGamer article, just so that we're clear here.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,949
504
126
The 10x claim is right up there with some of Sony's claims about the PlayStation(s). No, it will not be 10x faster, except maybe in a cherry-picked, very narrow usage scenario.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
lol, it is not 10x faster for gaming. This reminds me of back when Apple used to say how much faster the G5 was than Intel's chips of the time by quoting very specific Photoshop tests.

Sure, Pascal will be faster, that's a given. But saying 10x across the board is laughable.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
Okay, somebody needs to explain this NVLink thing to me, because if it is what I think it is, I'm officially never buying an Nvidia product.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
AnandThenMan said:
The 10x claim is right up there with some of Sony's claims about the PlayStation(s). No, it will not be 10x faster, except maybe in a cherry-picked, very narrow usage scenario.

NV claimed days ago that Titan X delivers 2x the perf/watt of the original Titan and 2x the performance. Both of those claims are 100% false when it comes to gaming.

The 1st claim:


35% better perf/watt, not 100%!

[Image: perfwatt_2560.gif (performance-per-watt chart, 2560x1440)]


The 2nd claim:

61% faster, not 100%

[Image: perfrel_2560.gif (relative performance chart, 2560x1440)]


Things look worse if we compare the Titan X to the 780Ti instead, as performance grew only about 45% (the 780Ti doubled the performance of a 580, but of course it had a node shrink). Still, NV's performance-claim marketing is more like Apple's.
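
Incidentally, those two measured numbers hang together: 61% more performance with only 35% better perf/watt implies the Titan X drew roughly 19% more power than the original Titan in those runs. A minimal sketch of that arithmetic, using only the ratios above:

```python
# Using only the measured ratios quoted above (not NV's claims):
perf_ratio = 1.61            # Titan X vs. original Titan, relative performance
perf_per_watt_ratio = 1.35   # measured perf/watt improvement

# perf/watt = perf / power, so power = perf / (perf per watt)
power_ratio = perf_ratio / perf_per_watt_ratio
print(f"Implied power-draw ratio: {power_ratio:.2f}x")  # ~1.19x
```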
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
It will be a slick card...
If you throw both off the Burj Khalifa, Pascal will actually be 10x faster thanks to its improved aerodynamics. <- Marketing at its best!
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Well, as long as this never makes it to the consumer market I guess it's okay, but I still hope that an open standard takes its place soon.

NV and open standards? You are kidding, right?! :awe::D

"NVIDIA® NVLink™ is a high-bandwidth, energy-efficient interconnect that enables ultra-fast communication between the CPU and GPU, and between GPUs. The technology allows data sharing at rates 5 to 12 times faster than the traditional PCIe Gen3 interconnect"

Sounds to me like a proprietary NV chip on the motherboard that will let NV charge the motherboard makers extra $, and it absolutely wouldn't work with AMD cards. If it were just a next-generation SLI link cable, then I don't see how it would help with CPU-to-GPU communication, which suggests to me a real hardware interconnect chip soldered onto the motherboard.
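
For scale, here's a rough sketch of what "5 to 12 times faster than PCIe Gen3" works out to against a plain PCIe 3.0 x16 slot (the per-lane math is standard PCIe 3.0 signaling; the multipliers are NV's own marketing numbers, so treat the results accordingly):

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~0.985 GB/s/lane/direction
pcie3_lane_gbps = 8 * (128 / 130) / 8
pcie3_x16 = 16 * pcie3_lane_gbps          # ~15.75 GB/s per direction

for mult in (5, 12):
    print(f"{mult}x PCIe 3.0 x16 = ~{mult * pcie3_x16:.0f} GB/s per direction")
# 5x  -> ~79 GB/s
# 12x -> ~189 GB/s
```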
 

cbrunny

Diamond Member
Oct 12, 2007
6,791
406
126
Toy Story 3 already happened.

I know very little about this, but NVLink seems very not-for-gamers-at-all-ever. PCIe 4.0/n.0 could do the same job if NV had a soul, but that also seems unlikely.
 

poohbear

Platinum Member
Mar 11, 2003
2,284
5
81
RussianSensation said:
NV claimed days ago that Titan X delivers 2x the perf/watt of the original Titan and 2x the performance. Both of those claims are 100% false when it comes to gaming.

The 1st claim:

35% better perf/watt, not 100%!

[Image: perfwatt_2560.gif (performance-per-watt chart, 2560x1440)]

The 2nd claim:

61% faster, not 100%

[Image: perfrel_2560.gif (relative performance chart, 2560x1440)]

Things look worse if we compare the Titan X to the 780Ti instead, as performance grew only about 45% (the 780Ti doubled the performance of a 580, but of course it had a node shrink). Still, NV's performance-claim marketing is more like Apple's.

Ok so if they're exaggerating at 10x more powerful, can we at least hope for 3-4x more powerful?? :)
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
RussianSensation said:
NV claimed days ago that Titan X delivers 2x the perf/watt of the original Titan and 2x the performance. Both of those claims are 100% false when it comes to gaming.

I've said several times on this forum since reviews hit that GM200 is off somehow. Either its 12GB of 7GHz VRAM is eating too much into its TDP, or the chip needs another spin, or perhaps 600mm2 is simply too big to nail everything down perfectly. Both GF110 and GK110 were as efficient as their lesser chips (if not slightly more so), but GM200 loses a noticeable amount of efficiency over GM204. GM200 is disappointing in both stock performance and perf/W, from an expectation and precedent standpoint.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
poohbear said:
Ok so if they're exaggerating at 10x more powerful, can we at least hope for 3-4x more powerful?? :)

Best case scenario for NV is a new architecture + a node shrink. The last time they did this was GTX580 -> 780Ti, and we got a 2X performance increase, but that was accomplished over a period of 3 years (Nov 2010 GTX580 --> Nov 2013 GTX780Ti), not 2 years. Since the Titan X (GM200) launched March 17, 2015, it should take roughly 3 years for NV to double its performance. If we look at AMD, it's similar: the HD7970 was announced Dec 2011 and launched Jan 2012, and the R9 390X should be 2-2.25x faster around June 2015, or roughly 3.5 years from launch.

Average GPU performance growth has been running at 33-35% per year, and has been that way since the HD5870 (Sept 2009).

Sept 2009 HD5870 (100%) --> Nov 2013 GTX780Ti (329%): growth rate of 1.3475X (or 34.75% per annum)
http://www.computerbase.de/2013-12/grafikkarten-2013-vergleich/10/

As you can tell, this pace roughly continues with the Titan X. It launched roughly 1.5 years after the R9 290X/780Ti (November 2013), and its performance is about 45-53% higher; the expected figure is 1.3475X * 1.17375 (taking half the annual rate for the extra half year) = 1.58X. We are actually slightly behind pace right now, but R9 390X/consumer GM200 haven't launched, so we don't have the full picture for 1H 2015 yet:
http://www.computerbase.de/2015-03/nvidia-geforce-gtx-titan-x-im-test/3/

There is no indication that NV will break the laws of physics. If anything, GPUs used to double in performance every 18-24 months, but now it's taking AMD/NV 30-36 months or so.

A rough estimate, then, is that Titan X's performance should increase 2.45X (1.3475^3) by around March 2018. I am just using historical data, and it's not entirely fair since this time the jump might come a bit earlier due to HBM2 + 14nm. Since Pascal is on the road-map for 2016, I think 3-4x faster is just a pipe dream for Pascal, but Volta is a different story. Also, it's not an exact science: we can't expect some random GPU to offer exactly 34.75% more performance in March 2016, but it's a rough estimate that by 2H 2016 we should have at least one product like that relative to the Titan X.
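
To make the compounding explicit, here's the arithmetic from the last few paragraphs in one place (all the ratios are the ones quoted above; note the exact 4.17-year span gives a slightly lower annual rate than the rounded 4-year figure):

```python
def annual_rate(total_ratio, years):
    """Annualized growth implied by a total performance ratio over a span."""
    return total_ratio ** (1 / years)

# Sept 2009 HD5870 (100%) --> Nov 2013 GTX780Ti (329%)
print(annual_rate(3.29, 4.0))   # ~1.3468x/yr (the ~1.3475X figure, 4 even years)
print(annual_rate(3.29, 4.17))  # ~1.33x/yr over the exact 4.17-year span

# Compounding ~1.3475x/yr forward from the Titan X (March 2015):
print(1.3475 ** 1.5)  # ~1.56x after 1.5 years (vs. the observed 45-53%)
print(1.3475 ** 3)    # ~2.45x after 3 years, i.e. around March 2018
```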

tviceman said:
I've said several times on this forum since reviews hit that GM200 is off somehow. Either its 12GB of 7GHz VRAM is eating too much into its TDP, or the chip needs another spin, or perhaps 600mm2 is simply too big to nail everything down perfectly. Both GF110 and GK110 were as efficient as their lesser chips (if not slightly more so), but GM200 loses a noticeable amount of efficiency over GM204. GM200 is disappointing in both stock performance and perf/W, from an expectation and precedent standpoint.

I am more inclined to believe that the Titan X is not the peak version of the GM200 product. As you have alluded to, wait another 4-6 months to bin/respin GM200, drop 12GB for 6GB, allow AIBs to provide open-air-cooled solutions, and it's possible to have a GM200 with a boost clock of 1216-1241MHz out of the box.

EVGA already has 2 such cards:
EVGA Titan X Superclocked = 1216mhz Boost
EVGA Titan X HydroCopper = 1241mhz Boost
http://hexus.net/tech/news/graphics/81787-nvidia-geforce-gtx-titan-x-partner-cards-roundup/

That's about 15% higher clocks than the reference Titan X. With those characteristics, GM200 would definitely live up to the hype/expectations! I have to acknowledge what NV's engineers pulled off, though: the Titan X doubles the R9 280X in performance within similar power usage on the same node. That is remarkable, even though it slightly trails the GTX980 in performance/watt. Granted, Tahiti uses an early-gen 28nm node that's far from optimized, and it's a DP monster while the Titan X is a pure gaming card with almost non-existent DP capability; the R9 280X has 1.075 Tflops of DP.
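
A quick sanity check on that "about 15%" figure (the 1075MHz reference boost clock is an assumption here, it isn't quoted above):

```python
# EVGA factory boost clocks vs. an assumed 1075MHz reference Titan X boost.
reference_boost_mhz = 1075  # assumption, not stated above
for name, mhz in (("Superclocked", 1216), ("HydroCopper", 1241)):
    print(f"EVGA Titan X {name}: {mhz / reference_boost_mhz - 1:+.1%}")
# Superclocked: +13.1%, HydroCopper: +15.4%
```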

If NV had priced the Titan X at $650 or so and provided after-market open-air options for AIBs, I don't see why HD7970/7970Ghz owners would have had a reason to wait for the R9 390X. Alas, NV didn't do that.....
 
Last edited:

ultimatebob

Lifer
Jul 1, 2001
25,135
2,445
126
Seriously, when was the last time you actually saw a piece of consumer electronics that was 10x faster in most benchmarks than the prior version? It never happens, whether it be CPUs, GPUs, memory, or storage.

(Networking tech gets a pass on this rant, since the standards groups basically forced the evolution of their products from 10Mbps to 100Mbps to 1000Mbps)

I'd expect a 2x improvement at best, and even that will be at some insanely high resolution that will still be unplayable. At 4K, I'd expect about a 50% improvement.
 

garagisti

Senior member
Aug 7, 2007
592
7
81
cbrunny said:
Toy Story 3 already happened.

I know very little about this, but NVLink seems very not-for-gamers-at-all-ever. PCIe 4.0/n.0 could do the same job if NV had a soul, but that also seems unlikely.

You never know what may be going on in the background. You may want to read the PCIe 4.0 specification to see if Nvidia had a "difference of opinion" about where it must progress as a standard. Just like Nvidia did with Adaptive-Sync.

Nvidia has significant market share in discrete, so I wouldn't be surprised if they tried to push some proprietary chippery, as they used to for enabling SLI. More money is more money. I don't even blame Nvidia for the state of affairs, but rather their customers, who are simply not sending a strong enough message to cut this crap. If you keep buying from an abusive seller, well, abuse should be expected.
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,674
145
106
www.neftastic.com
RussianSensation said:
NV and open standards? You are kidding, right?! :awe::D

"NVIDIA® NVLink™ is a high-bandwidth, energy-efficient interconnect that enables ultra-fast communication between the CPU and GPU, and between GPUs. The technology allows data sharing at rates 5 to 12 times faster than the traditional PCIe Gen3 interconnect"

Sounds to me like a proprietary NV chip on the motherboard that will let NV charge the motherboard makers extra $, and it absolutely wouldn't work with AMD cards. If it were just a next-generation SLI link cable, then I don't see how it would help with CPU-to-GPU communication, which suggests to me a real hardware interconnect chip soldered onto the motherboard.

AGP 2.0?