Poll: Pascal performance increase over Titan X


Pascal performance increase over Titan X

  • 10x

  • 5x

  • 3x

  • 2.5x

  • 2x

  • 1.75x

  • 1.5x

  • 1.25x

  • other



sontin

Diamond Member
Sep 12, 2011
It is not gimped. It doesn't support DP in the same way. Seriously, stop making these false claims.

Facebook is using Maxwell for their deep learning network:
Big Sur was built with the NVIDIA Tesla M40 in mind[...]
https://code.facebook.com/posts/1687861518126048/facebook-to-open-source-ai-hardware-design/

Tesla M40: http://www.nvidia.com/object/tesla-m40.html

So, how is Maxwell "compromised" when it is just better at other workloads than Kepler?
Not supporting DP is not gimping. It is just a sacrifice because of 28nm. AMD did the same thing, something you don't see as a problem.

BTW: A GTX 980 is better at DP than a GTX 680:

http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/20

"Gimping", sure.
 

jpiniero

Lifer
Oct 1, 2010
The 680 is also gimped (it's 1/24 DP). Look at the 580 (1/8 DP), it's a good deal faster than the 980.

IIRC Deep learning uses FP16.
 

nvgpu

Senior member
Sep 12, 2014
Don't bother arguing with these ADF FUDers and slanderers. Nvidia replaced all their Kepler Teslas with Maxwell Teslas for every workload except FP64, where customers can use the GK210-based Tesla K80.

http://www.anandtech.com/show/9574/nvidia-announces-grid-20-tesla-m60-m6-grid-cards

http://www.anandtech.com/show/9776/...-m4-server-cards-data-center-machine-learning

Some people just choose to ignore the FACTS as usual. Hawaii is gimped to 1/8 FP64 unless you buy the workstation card, and the Fury line is totally gimped in FP64 at 1/16, but conveniently they ignore this FACT. No consumer workloads use FP64, period; it's pretty much IRRELEVANT except for academic purposes. AMD can't even sell Fiji as a workstation graphics card for revenue; try shipping a 4GB workstation flagship card and you'll be laughed at.

http://images.anandtech.com/graphs/graph9390/75495.png

"but even the Radeon HD 7970 is beating the R9 Fury X here."

https://www.phoronix.com/scan.php?page=news_item&px=NIISI-Open-AMD-Results



If anyone has to use AMD for compute, I feel sorry for them, NOT.
 

sontin

Diamond Member
Sep 12, 2011
The 680 is also gimped (it's 1/24 DP). Look at the 580 (1/8 DP), it's a good deal faster than the 980.

The GTX 980 and GTX 680 have nearly the same DP performance:
140 GFLOPS vs. 120 GFLOPS.

The Maxwell architecture is better suited for compute workloads. Nothing is gimped.

IIRC Deep learning uses FP16.

Right now only on Tegra X1. Maxwell doesn't support FP16.
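The DP numbers being thrown around in this exchange can be sanity-checked from core count, clock, and the SP:DP ratio. A rough sketch (the clocks and ratios below are commonly cited reference specs, assumed here, not taken from the thread):

```python
# Theoretical peak FP64: cores * clock (GHz) * 2 FLOPs/cycle (FMA) / (SP:DP ratio).
# Reference specs below are assumptions for illustration.
def dp_gflops(cores, clock_ghz, sp_dp_ratio):
    return cores * clock_ghz * 2 / sp_dp_ratio

gpus = {
    "GTX 580 (Fermi, 1/8 DP)":    dp_gflops(512,  1.544, 8),   # hot-clocked shaders
    "GTX 680 (Kepler, 1/24 DP)":  dp_gflops(1536, 1.006, 24),
    "GTX 980 (Maxwell, 1/32 DP)": dp_gflops(2048, 1.126, 32),
}
for name, gf in gpus.items():
    print(f"{name}: ~{gf:.0f} GFLOPS")
```

This lands near 198, 129, and 144 GFLOPS respectively, which matches both claims above: the 980 and 680 are within ~15% of each other at DP, yet the old 580 with its 1/8 ratio still beats both.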
 

raghu78

Diamond Member
Aug 23, 2012
I surely expect a 100% increase from GP100 vs GM200 and even better in DX12 games. But I don't expect Nvidia to launch a GP100 in 2016. I expect Nvidia to follow the Maxwell model. Launch a small GP107 (1024 cc) first with GDDR5 in Q3 2016. Then get GP104 (4096 cc) out with GDDR5X in Q4 2016 followed by a GP106 (2048 cc) in Q1 2017 and finally GP100 with HBM2 (6144 cc) in Q2 2017. Nvidia has mastered the art of GPU releases after Fermi by minimizing risks and maximizing profits. :thumbsup:
 
Mar 10, 2006
I surely expect a 100% increase from GP100 vs GM200 and even better in DX12 games. But I don't expect Nvidia to launch a GP100 in 2016. I expect Nvidia to follow the Maxwell model. Launch a small GP107 (1024 cc) first with GDDR5 in Q3 2016. Then get GP104 (4096 cc) out with GDDR5X in Q4 2016 followed by a GP106 (2048 cc) in Q1 2017 and finally GP100 with HBM2 (6144 cc) in Q2 2017. Nvidia has mastered the art of GPU releases after Fermi by minimizing risks and maximizing profits. :thumbsup:

I think you will see GP100 this year.
 
Feb 19, 2009
@sontin
You know full well that NV targets its mid-range Teslas at SP compute and its big-chip Tesla as a DP compute powerhouse.

If you don't think GM200 is gimped compared to GK110, you should pull up the DP compute output charts for these parts. Titan smashes Titan X. That is why there are no GM200 Teslas.

This is why there are customers waiting for GP100 Pascal Teslas: those won't be gimped at DP compute.
 

raghu78

Diamond Member
Aug 23, 2012
I think you will see GP100 this year.

So you are telling me TSMC is capable of decent yields on a ~500 mm² chip when the largest chip they have been producing in high volume is 147 mm² (A9X)? Come on. You have to walk before you run. I would say Nvidia is smart enough not to attempt another Fermi on a bleeding-edge node.

I am going with a Maxwell-type launch: GP107 -> GP104 -> GP106 -> GP100. I don't see Nvidia changing that. They have Maxwell Teslas for FP32 customers and Kepler Teslas for FP64 customers. I don't think Intel is going to overcome the CUDA ecosystem overnight. It's the software and hardware synergy that is keeping Nvidia at the top of the HPC market. Incidentally, that's the same synergy (software + hardware) keeping Apple at the top of the mobile market.
 

JDG1980

Golden Member
Jul 18, 2013
Nvidia desperately needs a new HPC chip to stand up to Knights Landing from Intel. They're going to rush through GP100 no matter how low yields are, since they will still be making a profit through custom supercomputer contracts and $4,000+ Tesla cards. But I don't think we will see GP100 this year. I expect the GP100 Titan to come in Q1 2017; this would fit in quite well with the first Titan in Q1 2013 and the Titan X in Q1 2015.

I expect that Nvidia's consumer flagship this year is going to be GP104, and it will probably come in October. My prediction is that Nvidia plans to basically double the resources on each chip compared to the prior generation, except for the memory bus, which can keep the same width but use GDDR5X to make up the bandwidth and then some. So the GP104 would have 4096 shaders, 128 ROPs, and a 256-bit GDDR5X bus (which would have about 20%-25% more memory bandwidth than the 384-bit bus on GM200). The GP106 would have 2048 shaders, 64 ROPs (so basically the same resources as GM204) and would use a 128-bit GDDR5X bus to meet the bandwidth demands with a manageable number of PCB traces. GP107 would have 1280 shaders, 32 ROPs, and either a 64-bit GDDR5X bus or a standard 128-bit GDDR5 bus if the former proves too expensive.
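The bandwidth claim above is easy to check: peak bandwidth is just bus width (in bytes) times the per-pin data rate. A quick sketch, where the GDDR5X data rates are illustrative assumptions rather than confirmed specs:

```python
# Peak memory bandwidth in GB/s: (bus width in bits / 8) * per-pin data rate in Gbps.
# The GDDR5X per-pin rates below are speculative assumptions.
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

gm200 = bandwidth_gbs(384, 7.0)          # GTX 980 Ti / Titan X: 336 GB/s
for rate in (10.0, 12.0, 13.0):          # candidate GDDR5X speeds
    gp104 = bandwidth_gbs(256, rate)
    print(f"256-bit @ {rate} Gbps: {gp104:.0f} GB/s "
          f"({gp104 / gm200 - 1:+.0%} vs GM200)")
```

Note that at the initial 10 Gbps GDDR5X grade, a 256-bit bus would actually trail GM200's 336 GB/s; the 20%-25% uplift suggested above implies roughly 13 Gbps per pin.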

Even if we assume that architecturally it won't be that different from Maxwell, the extra resources and higher clock speeds from the FinFET die shrink should make GP104 quite a beast. 4096 shaders and 128 ROPs would be 33% more raw power than GM200, and a clock speed increase of 30% on top of that seems quite realistic given what we have seen from the A8->A9 transition (planar->FinFET). We could easily be looking at more than 50% performance boosts above and beyond a modestly overclocked GTX 980 Ti. At that performance level, Nvidia could charge $699 for the card and many gamers would consider it a bargain. With a die size of ~350mm^2, that's some serious profits.
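The uplift estimate in the last paragraph multiplies out as follows (an idealized sketch assuming perfect scaling, which real games never reach; the unit counts and 30% clock gain are the post's own estimates):

```python
# Idealized performance scaling: extra execution units * clock uplift.
# Assumes perfect scaling; real-world gains are lower.
unit_gain = 4096 / 3072   # GP104 (speculated) vs GM200 shaders: +33%
clock_gain = 1.30         # assumed FinFET clock uplift from the post
print(f"ideal uplift vs GM200: {unit_gain * clock_gain - 1:+.0%}")
```

With an ideal ceiling around +73% over GM200, a real-world result north of +50% over a modestly overclocked 980 Ti is at least arithmetically plausible.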
 

Head1985

Golden Member
Jul 8, 2014
GP104 won't be 2x GM204. It will never be a 400mm^2 GPU; it will be around 300-320mm^2 max.
1080: ~3500 SP, 8GB HBM2, 40% above Titan X, $500-550, Q3 2016
1070: ~3000 SP, 8GB HBM2 or GDDR5X, 20-25% above Titan X, $400, Q3 2016

GP100: ~6000 SP, 16GB HBM2, ~100% above Titan X, $1000, Q1 2017 as a Titan, but it will be released in Tesla cards first, even before GP104.
 