
Poll: Pascal performance increase over Titan X

Pascal performance increase over Titan X

  • 10x

  • 5x

  • 3x

  • 2.5x

  • 2x

  • 1.75x

  • 1.5x

  • 1.25x

  • other


Results are only viewable after voting.

eRacer

Member
What do you think will be the typical PC gaming performance increase over Titan X of the best Pascal single-GPU gaming card?
 
Such a bloated question. It could be a staggered release like Kepler and Maxwell, which saw mid-range dies first and then the big dies some months later down the line.

Regardless, my expectations are all over the place. GDDR5X and HBM2 are going to alleviate some major bottlenecks for their respective products, but designing FinFET transistors is supposedly much, much harder and currently far more expensive. I think a safe bet would be to look at Kepler's launch of its various products vs. the chips they replaced. But given the big advancement in memory that is taking place, along with the full node jump + new transistors, we could be coming into the biggest leap in graphics performance between generations that we've ever seen.
 
Such a bloated question. It could be a staggered release like Kepler and Maxwell, which saw mid-range dies first and then the big dies some months later down the line.
I meant it as a big die vs. big die comparison even if a 500+mm^2 Pascal doesn't launch this year... the best Pascal GPU model that will ever become available.
 
I meant it as a big die vs. big die comparison even if a 500+mm^2 Pascal doesn't launch this year... the best Pascal GPU model that will ever become available.

If Volta is later in 2018, then there could be 3 "generations" of Pascal stretching into 2018. Way too many unknowns to make any sort of educated guess. 100% on average would probably be the best-case scenario: some games more, some less. So I'm guessing 2x in your poll.
 
Everyone knows the answer is 10x.

I mean, new u-arch, that's automatically 1.5x gain. HBM2 should add another 2x gains on top of that. Then a node shrink, bam 10x!
 
Everyone knows the answer is 10x.

I mean, new u-arch, that's automatically 1.5x gain. HBM2 should add another 2x gains on top of that. Then a node shrink, bam 10x!
Darn. I knew I should have added a 12x option for all of those who thought the 10x option was obvious AND believe the big Pascal reference design will feature CLC/water-cooling. :biggrin:
 
I meant it as a big die vs. big die comparison even if a 500+mm^2 Pascal doesn't launch this year... the best Pascal GPU model that will ever become available.

780 Ti essentially doubled 580 performance:

https://tpucdn.com/reviews/NVIDIA/GeForce_GTX_780_Ti/images/perfrel_2560.gif

I expect a 680 or a 980 rather than a 780 Ti or a Titan X this year. But big Pascal should be 100% faster, or more since it has the benefit of a new faster memory that Kepler did not.

I think those poll choices are silly. I'm sure even the worst pessimists expect way more than 25%, but who would expect more than 3x, let alone 10x?
 
780 Ti essentially doubled 580 performance:

https://tpucdn.com/reviews/NVIDIA/GeForce_GTX_780_Ti/images/perfrel_2560.gif

I expect a 680 or a 980 rather than a 780 Ti or a Titan X this year. But big Pascal should be 100% faster, or more since it has the benefit of a new faster memory that Kepler did not.

I think those poll choices are silly. I'm sure even the worst pessimists expect way more than 25%, but who would expect more than 3x, let alone 10x?

eRacer put those options there most likely to see how many people were "suckered in" by the "Pascal is 10x Maxwell" marketing hype from GTC '15, inspired by arguments he had with NVIDIA haters over on the S|A forums.

http://semiaccurate.com/forums/showthread.php?t=9032&page=11
 
Does anyone even have an idea of what densities and what die sizes are possible on the new node? I guess that is the $1,000,000 question right there.

I fully expect Nvidia (and AMD) to pretty much gain about 70-100%.
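For what it's worth, a rough back-of-envelope calculation is possible here. The figures below are assumptions on my part (GM200's published ~601 mm² / ~8B transistor numbers, and the commonly cited ~2x density gain from 28nm planar to TSMC 16FF, which keeps 20nm-class metal pitches), not confirmed specs for any Pascal part:

```python
# Hedged sketch: what a full-size die on the new node *might* hold,
# assuming ~2x density scaling from 28nm to 16FF (a ballpark, not a spec).
gm200_area_mm2 = 601.0        # GM200 (Titan X) die size
gm200_transistors_b = 8.0     # GM200 transistor count, in billions
density_gain = 2.0            # assumed 28nm -> 16FF density scaling

gm200_density = gm200_transistors_b / gm200_area_mm2  # B transistors per mm^2

# Assume NVIDIA again pushes near the reticle limit with a ~600 mm^2 die.
big_pascal_area_mm2 = 600.0
big_pascal_transistors_b = big_pascal_area_mm2 * gm200_density * density_gain
print(f"~{big_pascal_transistors_b:.0f}B transistors")  # -> ~16B
```

So even with unchanged die size, a ~2x transistor budget alone would line up with the 70-100% gains being guessed at here, before any memory or clock improvements.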
 
I fully expect the mainstream x60 part to show some sort of decent progress this time, because 560Ti / 660 / 660Ti / 760 / 960 has gone nowhere fast. Just lower and lower power.
 
780 Ti essentially doubled 580 performance:

https://tpucdn.com/reviews/NVIDIA/GeForce_GTX_780_Ti/images/perfrel_2560.gif

I expect a 680 or a 980 rather than a 780 Ti or a Titan X this year. But big Pascal should be 100% faster, or more since it has the benefit of a new faster memory that Kepler did not.

I think those poll choices are silly. I'm sure even the worst pessimists expect way more than 25%, but who would expect more than 3x, let alone 10x?

I'm hoping that the Pascal (Titan version) is 1.5x as fast as Titan X, using less power, running cooler, so under custom water it can be overclocked significantly. Maybe that's on the conservative side - and I don't consider myself pessimistic, but at this time information is just so scarce.
 
I voted 1.25 times merely because you are going from a matured architecture and drivers to a new one that will need to be tweaked.
 
And we are still waiting for someone to reply "10x because Jen-Hsun Huang told me so!" 😉

Except he never said or even implied that a single Pascal GPU would be 10x faster than a single Maxwell GPU in anything gaming related. What he actually said was that EIGHT Pascal GPUs could potentially be 10x faster than FOUR Maxwell GPUs in the very specific scenario of deep learning apps. Deep learning apps can make use of new features in Pascal (mixed-precision computing) which have no use in gaming. NVIDIA has made no specific claims about gaming performance that I have seen.
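Here's one hedged way that marketing math could decompose; the per-GPU factors below are illustrative guesses, not anything NVIDIA has published:

```python
# Sketch: how a "10x" system-level claim can decompose into parts.
# All factors are assumptions for illustration, not published numbers.
maxwell_gpus = 4
pascal_gpus = 8
claimed_speedup = 10.0

# Doubling the GPU count accounts for 2x by itself (assuming perfect scaling).
scaling_factor = pascal_gpus / maxwell_gpus               # -> 2.0

# What remains is the implied per-GPU gain on that deep-learning workload.
implied_per_gpu_gain = claimed_speedup / scaling_factor   # -> 5.0

# If FP16 mixed precision roughly doubles throughput vs FP32, the leftover
# gain attributable to architecture/memory/clocks is only about 2.5x.
fp16_factor = 2.0
remaining_gain = implied_per_gpu_gain / fp16_factor       # -> 2.5
print(scaling_factor, implied_per_gpu_gain, remaining_gain)
```

Which is why a headline multi-GPU compute number tells you almost nothing about single-GPU gaming gains.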
 
Big Pascal vs Big Maxwell:

Probably 1.5x early on and eventually 2x+ due to maturation of drivers & games or features better suited to the new Pascal uarch.

The reason I don't expect 2x immediately: Maxwell is a gaming-focused chip design, as JHH himself has said. They were stuck on 28nm and so had to go for peak efficiency on that node, with a design that best fit their constraints. Thus, Maxwell is very strong in gaming perf/W but poor at DP & mixed-mode compute, hence NV stuck with Kepler for Teslas.

Pascal is a compute-focused design first and foremost, given that the publicity from NV on this chip has all revolved around how strong it is for HPC. This means sacrificing die area & transistors to shift the focus to compute, so they won't benefit fully from the node shrink purely in terms of gaming performance.

In actuality, if they just shrunk Maxwell straight to 16nm FF, you could expect very strong gains in gaming perf immediately (at least 1.75x and eventually >2x).
 
[Image: Nobody-knows.jpg]
 
The reason I don't expect 2x immediately: Maxwell is a gaming-focused chip design, as JHH himself has said. They were stuck on 28nm and so had to go for peak efficiency on that node, with a design that best fit their constraints. Thus, Maxwell is very strong in gaming perf/W but poor at DP & mixed-mode compute, hence NV stuck with Kepler for Teslas.

nVidia is selling Maxwell as Tesla, too. Facebook is using them for their deep learning work.
The only difference between Maxwell and Kepler is the missing dedicated DP units. Otherwise Maxwell is the much better compute architecture.

Pascal is a compute-focused design first and foremost, given that the publicity from NV on this chip has all revolved around how strong it is for HPC. This means sacrificing die area & transistors to shift the focus to compute, so they won't benefit fully from the node shrink purely in terms of gaming performance.
Nonsense. Kepler is better than Fermi at compute and graphics. Maxwell is better than Kepler at FP32 workloads and graphics. Pascal will just be better than Maxwell at everything.

In actuality, if they just shrunk Maxwell straight to 16nm FF, you could expect very strong gains in gaming perf immediately (at least 1.75x and eventually >2x).
You know what is really funny? Neither Tonga nor Fiji has better DP performance than its predecessor. Yet we hear nothing from you about Polaris being just 1.5x better. :sneaky:
 
@sontin
Did you miss out on the Maxwell launch when JH explained it? They went for pure peak gaming performance at the expense of compute. Unless you are calling JH a liar... -_-

Thus the opposite of such a thing would be true. If you focus on peak compute performance, that will come at the expense of metrics such as gaming perf per mm², per W, etc.

It's not complex man. Chips are trade-offs.
 
So, why is GM204 better for compute than GK104? Or why is GM200 better than GK110?

Context is everything. nVidia launched GM204 at a gaming event. They launched GM200 as Titan X at a developer event for graphics and two weeks later for compute.

They didn't sacrifice anything related to compute. Maxwell is just the better compute architecture.

This comes directly from them:
Maxwell is NVIDIA's next-generation architecture for CUDA compute applications. Maxwell introduces an all-new design for the Streaming Multiprocessor (SM) that dramatically improves energy efficiency. Improvements to control logic partitioning, workload balancing, clock-gating granularity, compiler-based scheduling, number of instructions issued per clock cycle, and many other enhancements allow the Maxwell SM (also called SMM) to far exceed Kepler SMX efficiency.
Maxwell retains and extends the same CUDA programming model as in previous NVIDIA architectures such as Fermi and Kepler, and applications that follow the best practices for those architectures should typically see speedups on the Maxwell architecture without any code changes.
https://developer.nvidia.com/maxwell-compute-architecture
 
So, why is GM204 better for compute than GK104? Or why is GM200 better than GK110?

Better for compute should have an * for context.

It's gimped where the high end HPC is concerned. Where's GM200 Tesla?

It's no coincidence NV didn't produce a Maxwell SKU for GK110 Tesla replacement, because it can't actually replace it.

You have a strange penchant for revisionism. It's well known Maxwell is a compromised design to get peak gaming performance at the expense of mixed-mode and DP compute, and now you are calling it "better for compute".
 