NVIDIA GeForce GTX 1080 Ti to be nearly 2x faster than the GTX 980 Ti

MentalIlness

Platinum Member
Nov 22, 2009
2,383
11
76
http://www.tweaktown.com/news/51141/nvidia-geforce-gtx-1080-ti-nearly-2x-faster-980/index.html


NVIDIA GeForce GTX 1080 Ti to be nearly 2x faster than the GTX 980 Ti


As we get closer to NVIDIA's GPU Technology Conference in early April, we're finding out more details on the next-gen Pascal architecture and which cards will purportedly arrive on the new 16nm process. Now remember, these are just leaked specs on purported cards - the specs could change, and so could the naming scheme NVIDIA uses on its next-gen cards.

[Image: leaked Pascal spec chart - 51141_10_nvidia-geforce-gtx-1080-ti-nearly-2x-faster-980.jpg]

According to the latest rumors, NVIDIA will launch the new cards under names we'll use for now (I seriously don't think they'll be called this): the GeForce GTX 1080, the GTX 1080 Ti, and a new Titan X successor. Starting with the GTX 1080, which will feature the GP104 core, we'll see 4096 CUDA cores (a 100% increase over the 2048 CUDA cores on the GM204-based GTX 980).

We're to expect a near doubling of texture units, ROPs, and memory bandwidth, along with 6GB of GDDR5 (up from 4GB on the GTX 980). The GeForce GTX 1080 Ti is even more powerful, with 5120 CUDA cores, 320 texture units, and 160 ROPs - good for another 28% in TFLOPS performance. The GTX 1080 Ti will also reportedly rock 8GB of GDDR5 (I think we'll see GDDR5X) and a 512-bit memory bus.

Now, the Titan X successor... boy is this card going to be fun. We scale up to a huge 6144 CUDA cores (a 100% increase over the 3072 CUDA cores found in the Titan X), plus a big bump to 384 texture units and 192 ROPs. It will reportedly rock a huge 12.5 TFLOPS, up from the 6.1 TFLOPS on the Titan X - even bumping heads with AMD's dual-GPU card, the Radeon Pro Duo, and its 16 TFLOPS of performance.

The chart notes that the Titan X successor will have 16GB of HBM2 on a 4096-bit memory bus with 1024GB/sec (1TB/sec) of memory bandwidth - all contained in a more-than-impressive 225W TDP. Wow.
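For the curious, here's a rough sanity check of how those headline numbers would have to be derived. These are the standard peak-throughput formulas; the ~1.02 GHz clock is an assumed figure chosen to make the 12.5 TFLOPS claim line up, it is not part of the leak.

```python
# Rough sanity check of the chart's headline figures. The formulas are the standard
# peak-throughput ones; the ~1.02 GHz clock is an assumed number, not from the leak.

def fp32_tflops(cuda_cores, clock_ghz):
    # 2 ops per CUDA core per clock (fused multiply-add)
    return 2 * cuda_cores * clock_ghz / 1000.0

def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    # bus width in bits -> bytes, times the per-pin data rate
    return bus_width_bits / 8 * gbps_per_pin

print(fp32_tflops(6144, 1.02))   # ~12.5 TFLOPS for the rumored Titan X successor
print(bandwidth_gbs(4096, 2.0))  # 1024 GB/s for HBM2 at 2 Gbps/pin on a 4096-bit bus
```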

The interesting thing to note here is that the new cards have amazing power efficiency thanks to the 16nm process, with the GTX 1080 only requiring 175W and the GTX 1080 Ti 225W. Not too damn bad at all. This means we're going to see ridiculously fast cards that also run cool and quiet.

We will know more in a couple of weeks' time at NVIDIA's GTC event in early April in San Jose, and we will be there delivering the news to you in person.
 

Shehriazad

Senior member
Nov 3, 2014
555
2
46
Uh.... standard GDDR5 and HBM2?


Seems untrustworthy now... especially when even on AMD's side it looks like they will only use HBM gen1 for now.

Also, no GDDR5X? Or is this higher-clocked GDDR5 supposed to be X? Because if the card is supposedly twice as fast... that bandwidth isn't gonna cut it, now is it?
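Rough numbers, with assumed per-pin data rates (none of these rates are from the leak), just to show why doubling performance roughly wants doubled bandwidth:

```python
# Back-of-the-envelope bandwidth comparison. The per-pin data rates below are
# assumptions for illustration, not figures from the leak.

def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits / 8 * gbps_per_pin

print(bandwidth_gbs(256, 7))   # 224 GB/s - GTX 980 (GDDR5 at 7 Gbps)
print(bandwidth_gbs(256, 8))   # 256 GB/s - plain GDDR5 pushed to 8 Gbps
print(bandwidth_gbs(256, 10))  # 320 GB/s - GDDR5X at 10 Gbps
print(bandwidth_gbs(512, 8))   # 512 GB/s - the rumored 512-bit GDDR5 bus at 8 Gbps
```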
 

coolpurplefan

Golden Member
Mar 2, 2006
1,243
0
0
This may be premature but does anyone know if any of the new Pascal series will feature a standard PCIe connector?
 

MrTeal

Diamond Member
Dec 7, 2003
3,919
2,708
136
This may be premature but does anyone know if any of the new Pascal series will feature a standard PCIe connector?

It's almost inconceivable that it won't.


I have no idea why Tweaktown is taking this chart at anything resembling face value.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
Fake.
1. TFLOPS are never quoted to 3-4 decimal places by either AMD or NVIDIA.
2. That many ROPs would chew through memory bandwidth and L2 cache.
3. That many texture units would require tons of L1 cache and memory bandwidth.
4. 96 ROPs are already memory-bandwidth constrained on Maxwell, so why boost to 128 ROPs when you're pairing them with 384 GB/s of GDDR5?? (See the rough math below.)
5. Two versions of GP100??? No.
6. GP100 being double GM200 with a 225W TDP??? Where did the FP64 units go, then? Remember, GP100 has a 3:1 ratio of FP32 to FP64 units. Those will take a considerable amount of die space and push power usage up.
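A rough illustration of point 4, assuming a 1.1 GHz core clock and plain 32-bit color writes; this ignores delta color compression and cache hits, so it is worst-case math:

```python
# Worst-case ROP math: assumes a 1.1 GHz core clock and plain 32-bit color writes,
# and ignores delta color compression and cache hits entirely.

rops = 128
clock_hz = 1.1e9       # assumed core clock
bytes_per_pixel = 4    # RGBA8 color write only, no Z or blending traffic

peak_fill_gpix = rops * clock_hz / 1e9
raw_write_gbs = rops * clock_hz * bytes_per_pixel / 1e9

print(peak_fill_gpix)  # ~140.8 Gpixel/s peak fill rate
print(raw_write_gbs)   # ~563 GB/s of raw color writes vs the 384 GB/s quoted above
```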

Conclusion:
Absolutely fake.
 

iiiankiii

Senior member
Apr 4, 2008
759
47
91
Totally fake. If it had the word "CLASSIFIED" stamped in red bold letters, it would totally be legit. The word "Confidential", not so much.
 

Raising

Member
Mar 12, 2016
120
0
16
Why would NVIDIA release anything twice as fast when they can milk the new tech as much as possible? Expect at most 30-40% extra performance with slightly lower power consumption.
 

Innokentij

Senior member
Jan 14, 2014
237
7
81
Why would NVIDIA release anything twice as fast when they can milk the new tech as much as possible? Expect at most 30-40% extra performance with slightly lower power consumption.

Let's see: competition from AMD, 144Hz screens, and 4K resolution are what I can think of atm. Not to mention you can't even run most games maxed on a 980 Ti atm - well, you could if there was some codemanship left in the gaming world, but that's long gone.
 

poohbear

Platinum Member
Mar 11, 2003
2,284
5
81
So the newest Pascals (1st generation) won't be using HBM2 RAM? Just the same old GDDR5?
 

moonbogg

Lifer
Jan 8, 2011
10,734
3,454
136
So the newest Pascals (1st generation) won't be using HBM2 RAM? Just the same old GDDR5?

Those are actually the second-wave GPUs, you know, the mid-range chips that come after the flagship and cost half the price. Except this time they are tricking you by releasing them FIRST, calling them high-end and charging you full price. It's a trap. Don't do it.
 

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
Owning a GTX 980 Ti, I can't imagine a 1080 Ti being twice as fast as a 980 Ti!
I don't think so.

I read the article in the link, and the author took some liberties in his analysis: quite frankly, he claimed the 1080 Ti would be twice as fast as a 980 Ti when I believe he meant twice as fast as a 980.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Owning a GTX 980 Ti, I can't imagine a 1080 Ti being twice as fast as a 980 Ti!
I don't think so.


http://www.anandtech.com/bench/product/1444?vs=1496

That is how much faster the big 28nm chip ended up versus the big chip on 40nm. Can't the same happen with 16nm? Maybe not with the first iteration of chips, and maybe Volta will bring further improvements, but it (the double-plus improvement) will eventually happen.

The truth is that 16nm gives more transistors to play with, and the performance of those transistors is better. Apple gained ~40% in clock speed while moving from the 28nm A7 to the 16nm A9. 40% more clock, combined with more resources and unleashed from memory bottlenecks by HBM2/GDDR5X?
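As a purely illustrative compounding exercise (the ~40% clock gain is from the A7 -> A9 comparison above; the 50% unit-count bump is just an assumed number):

```python
# Purely illustrative compounding: the ~40% clock gain is the A7 -> A9 comparison
# above; the 50% increase in unit count is just an assumed number.

clock_gain = 1.40  # ~40% higher clocks from the 16nm FinFET move
unit_gain = 1.50   # hypothetical 50% more CUDA cores from the extra transistors

print(clock_gain * unit_gain)  # ~2.1x theoretical throughput, before memory limits
```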


It seems 28nm has been around for so long that people forgot how important node shrinks were in the past :)
 

hawtdawg

Golden Member
Jun 4, 2005
1,223
7
81
Owning a GTX 980 Ti, I can't imagine a 1080 Ti being twice as fast as a 980 Ti!
I don't think so.

I read the article in the link, and the author took some liberties in his analysis: quite frankly, he claimed the 1080 Ti would be twice as fast as a 980 Ti when I believe he meant twice as fast as a 980.


A 980 Ti is like 3x faster than a 580, with only a half-node shrink. This is a full node shrink. If a full GP100 isn't at least twice as fast as a 980 Ti, I'm going to be disappointed.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Full GP100 (which sort of launched) seemed like it wouldn't quite be double, but that has all sorts of non-gaming-related stuff in it :)

A full-scale gaming chip, if it happens, should manage it. You'd imagine it'd come with HBM2, which is a big jump vs the 980 Ti too.
 

MrTeal

Diamond Member
Dec 7, 2003
3,919
2,708
136
A 980 Ti is like 3x faster than a 580, with only a half-node shrink. This is a full node shrink. If a full GP100 isn't at least twice as fast as a 980 Ti, I'm going to be disappointed.

40nm to 28nm was a full node shrink.
 
Mar 10, 2006
11,715
2,012
126
40nm to 28nm was a full node shrink.

40nm -> 28nm was huge because not only did it bring a full shrink, but it also saw a move from polysilicon gates to HKMG, which helped enhance xtor performance. The move to FinFETs is similarly awesome.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
A 980 Ti is like 3x faster than a 580, with only a half-node shrink. This is a full node shrink. If a full GP100 isn't at least twice as fast as a 980 Ti, I'm going to be disappointed.

GM200 is a gaming-optimized chip with gutted FP64. P100 is an FP64 (double-precision) compute monster with 4 NVLinks. A lot of transistors are needed to get that 5.3 TFLOPS of FP64 performance. Remember, NVIDIA has dedicated FP64 hardware, and those units do nothing for gaming performance. Similarly for NVLink, which is useful for connecting multiple P100 GPUs to each other and to the CPU (in the case of POWER8). These NVLinks provide high-speed interconnects which improve performance in supercomputers and HPC workstations but do nothing for gaming performance. Kepler GK110 had 1/3-rate FP64 while GP100 has 1/2-rate FP64. P100 has more than 3x the FP64 performance of GK110.

The rumours are that GP104, with 2560 CUDA cores and probably 1.6 GHz clocks, is 30-35% faster than the GTX 980 Ti/Titan X. I would say a 3840-CUDA-core P100 with 1.6 GHz clocks could end up 75-80% faster than the GTX 980 Ti. What remains to be seen is the overclocking potential of the full P100.
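For reference, here's the rough FLOPS math behind those figures; the 980 Ti reference boost clock and the P100 unit counts are from public specs, and raw FLOPS don't translate 1:1 into game performance:

```python
# Rough FLOPS math behind those estimates. The 980 Ti reference boost clock and the
# P100 unit counts are public specs; raw FLOPS do not translate 1:1 into game fps.

def tflops(units, clock_ghz, ops_per_clock=2):
    return units * ops_per_clock * clock_ghz / 1000.0

# P100 FP64: half rate, i.e. 1792 FP64 units at the ~1.48 GHz boost clock
print(tflops(1792, 1.48))         # ~5.3 TFLOPS FP64

# Rumored GP104 vs GTX 980 Ti (reference boost ~1.075 GHz), FP32
gp104 = tflops(2560, 1.6)         # ~8.2 TFLOPS
gtx_980_ti = tflops(2816, 1.075)  # ~6.1 TFLOPS
print(gp104 / gtx_980_ti - 1)     # ~0.35, i.e. ~35% more raw FP32 throughput
```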
 
Mar 10, 2006
11,715
2,012
126
GM200 is a gaming-optimized chip with gutted FP64. P100 is an FP64 (double-precision) compute monster with 4 NVLinks. A lot of transistors are needed to get that 5.3 TFLOPS of FP64 performance. Remember, NVIDIA has dedicated FP64 hardware, and those units do nothing for gaming performance. Similarly for NVLink, which is useful for connecting multiple P100 GPUs to each other and to the CPU (in the case of POWER8). These NVLinks provide high-speed interconnects which improve performance in supercomputers and HPC workstations but do nothing for gaming performance. Kepler GK110 had 1/3-rate FP64 while GP100 has 1/2-rate FP64. P100 has more than 3x the FP64 performance of GK110.

The rumours are that GP104, with 2560 CUDA cores and probably 1.6 GHz clocks, is 30-35% faster than the GTX 980 Ti/Titan X. I would say a 3840-CUDA-core P100 with 1.6 GHz clocks could end up 75-80% faster than the GTX 980 Ti. What remains to be seen is the overclocking potential of the full P100.

Full P100 is unlikely to ever come to gamers. At NVIDIA's analyst day they were quite clear that this is a part focused on HPC and not targeted at gaming. They even spent a whole huge presentation talking about how their scale allows them to build the right products for the right segments.
 

alcoholbob

Diamond Member
May 24, 2005
6,390
469
126
True, but they can easily sell us a cut-down variant with, say, 3200 cores at 1.5 GHz with HBM2. People would still buy that even if it's only 30% faster.
 

moonbogg

Lifer
Jan 8, 2011
10,734
3,454
136
Full P100 is unlikely to ever come to gamers. At NVIDIA's analyst day they were quite clear that this is a part focused on HPC and not targeted at gaming. They even spent a whole huge presentation talking about how their scale allows them to build the right products for the right segments.

I remember hearing the same kind of talk before the GTX 680's release. People assumed the big chip was for compute and gamers wouldn't be able to buy it. That's the message that was supposed to go out so that people would be TRICKED into thinking the 680 was the real deal for gaming. They were wrong, clearly.
Maybe you are saying something different, perhaps that a big-die chip will come for gaming but it won't be the same as the full-blown chip. In any case, the cards that come out now are not the high-end cards. I'd bet my damn house on that.
 
Mar 10, 2006
11,715
2,012
126
I remember hearing the same kind of talk before the GTX 680's release. People assumed the big chip was for compute and gamers wouldn't be able to buy it. That's the message that was supposed to go out so that people would be TRICKED into thinking the 680 was the real deal for gaming. They were wrong, clearly.
Maybe you are saying something different, perhaps that a big-die chip will come for gaming but it won't be the same as the full-blown chip. In any case, the cards that come out now are not the high-end cards. I'd bet my damn house on that.

I am saying a big-die chip will come for gaming, but it won't be GP100. It will be a gaming-targeted part, IMO. GM200 was a huge die, targeted 100% at gaming. I think we'll see something similar on 16FF+.

The mythical GP102 sounds like a good candidate :)

GK110 only made it out to gamers in the form of the Titan/GTX 780 (Ti) because NV needed a response to Hawaii and Maxwell (GM204) wasn't ready yet.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
I am saying a big-die chip will come for gaming, but it won't be GP100. It will be a gaming-targeted part, IMO. GM200 was a huge die, targeted 100% at gaming. I think we'll see something similar on 16FF+.

The mythical GP102 sounds like a good candidate :)

GK110 only made it out to gamers in the form of the Titan/GTX 780 (Ti) because NV needed a response to Hawaii and Maxwell (GM204) wasn't ready yet.

Which means they'll release GP100 for gamers because NV will need a response to Vega and Volta won't be ready yet ... :)
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
GK110 only made it out to gamers in the form of the Titan/GTX 780 (Ti) because NV needed a response to Hawaii and Maxwell (GM204) wasn't ready yet.

I'm not sure; the Titan/Ti progression seems to be a really good way to drive sales of big chips in the consumer space. I think what it would take to keep GP100 out of consumers' hands is a parallel large chip for gaming that is as powerful or more so and is cheaper to make for the gaming performance.