> I meant it as a big die vs. big die comparison even if a 500+mm^2 Pascal doesn't launch this year...the best Pascal GPU model that will ever become available.

Such a bloated question. It could be a staggered release like Kepler and Maxwell, which saw the mid-range dies first and then the big dies some months later down the line.
> This poll won't make dreams come true so please keep it real. :biggrin:

Is this what I want, or what I think it might be? Because 10x, pretty please. ;-)
> I meant it as a big die vs. big die comparison even if a 500+mm^2 Pascal doesn't launch this year...the best Pascal GPU model that will ever become available.
> Darn. I knew I should have added a 12x option for all of those who thought the 10x option was obvious AND believe the big Pascal reference design will feature CLC/water-cooling. :biggrin:

Everyone knows the answer is 10x.
I mean, new u-arch, that's automatically 1.5x gain. HBM2 should add another 2x gains on top of that. Then a node shrink, bam 10x!
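For what it's worth, the joke's own numbers don't even compound to 10x; a throwaway sketch (the factors are the post's claims, not benchmarks):

```python
# Compounding the claimed per-factor gains from the post above.
# These are the joke's numbers, not real measurements.
uarch_gain = 1.5  # "new u-arch, that's automatically 1.5x"
hbm2_gain = 2.0   # "HBM2 should add another 2x"

combined = uarch_gain * hbm2_gain
print(f"{combined:.1f}x before the node shrink")  # 3.0x

# For the total to reach 10x, the shrink alone would have to contribute:
needed_from_shrink = 10.0 / combined
print(f"{needed_from_shrink:.2f}x from the shrink")  # 3.33x -- implausible
```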
> I meant it as a big die vs. big die comparison even if a 500+mm^2 Pascal doesn't launch this year...the best Pascal GPU model that will ever become available.
780 Ti essentially doubled 580 performance:
https://tpucdn.com/reviews/NVIDIA/GeForce_GTX_780_Ti/images/perfrel_2560.gif
I expect a 680 or a 980 rather than a 780 Ti or a Titan X this year. But big Pascal should be 100% faster, or more, since it has the benefit of new, faster memory that Kepler did not.
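To make the chart's normalization concrete, a quick sketch in the style of TPU's relative-performance charts (the percentages are illustrative round numbers matching the "essentially doubled" claim, not exact TPU data):

```python
# Relative-performance normalization: pick a baseline card and express
# others as a percentage of it. Illustrative numbers, not measured data.
perf_rel = {"GTX 580": 100, "GTX 780 Ti": 200}  # 580 as the 100% baseline

speedup = perf_rel["GTX 780 Ti"] / perf_rel["GTX 580"]
print(f"{speedup:.1f}x")  # 2.0x -- "essentially doubled"
```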
I think those poll choices are silly. I'm sure even the worst pessimists expect way more than 25%, but who would expect more than 3x, let alone 10x?
And we are still waiting for someone to reply "10x because Jen-Hsun Huang told me so!"

eRacer put those options there most likely to see how many people were "suckered in" by the "Pascal is 10x Maxwell" marketing hype from GTC '15, inspired by arguments he had with NVIDIA haters over on the S|A forums.
http://semiaccurate.com/forums/showthread.php?t=9032&page=11
Big Pascal vs Big Maxwell:
Probably 1.5x early on, and eventually 2x+ as drivers and games mature, or through features better suited to the new Pascal uarch.
The reason I don't expect 2x immediately: Maxwell is a gaming-focused chip design, as JHH himself has said. They were stuck on 28nm, so they had to go for peak efficiency on that node with a design that best fit their constraints. Thus Maxwell has very strong gaming perf/W but is poor at DP & mixed-mode compute, hence NV stuck with Kepler for Teslas.
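To illustrate the DP gap behind that decision, a rough back-of-envelope sketch from the widely published reference specs (`tflops` is a hypothetical helper; core counts and clocks are approximate):

```python
# Theoretical throughput from core count, clock, and each die's baked-in
# FP64:FP32 rate (Kepler big die 1/3, Maxwell big die 1/32).
def tflops(cores, clock_ghz, ratio=1.0):
    # 2 FLOPs per core per cycle (fused multiply-add), scaled by rate ratio
    return cores * 2 * clock_ghz * ratio / 1000.0

gk110_fp32 = tflops(2880, 0.745)           # Tesla K40 (Kepler)
gk110_fp64 = tflops(2880, 0.745, 1 / 3)
gm200_fp32 = tflops(3072, 1.000)           # GTX Titan X (Maxwell)
gm200_fp64 = tflops(3072, 1.000, 1 / 32)

print(f"GK110: {gk110_fp32:.2f} FP32 / {gk110_fp64:.2f} FP64 TFLOPS")
print(f"GM200: {gm200_fp32:.2f} FP32 / {gm200_fp64:.2f} FP64 TFLOPS")
```

Despite the much newer design, big Maxwell's theoretical FP64 rate comes out far below big Kepler's, which is the whole point of the Tesla remark above.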
> Nonsense. Kepler is better than Fermi at compute and graphics. Maxwell is better than Kepler at FP32 problems and graphics. Pascal will be just better than Maxwell in everything.

Pascal is a compute-focused design first and foremost, given that all of NV's publicity on this chip has revolved around how strong it is for HPC. That means sacrificing die area & transistors to shift the focus to compute, so in gaming performance alone it won't benefit fully from the node shrink.
> You know what is really funny? Neither Tonga nor Fiji has better DP performance than its predecessor. Yet we are hearing nothing from you that Polaris will be just 1.5x better. :sneaky:

In actuality, if they just shrunk Maxwell straight to 16nm FF, you could expect very strong gains in gaming perf immediately (at least 1.75x and eventually >2x).
https://developer.nvidia.com/maxwell-compute-architecture

> Maxwell is NVIDIA's next-generation architecture for CUDA compute applications. Maxwell introduces an all-new design for the Streaming Multiprocessor (SM) that dramatically improves energy efficiency. Improvements to control logic partitioning, workload balancing, clock-gating granularity, compiler-based scheduling, number of instructions issued per clock cycle, and many other enhancements allow the Maxwell SM (also called SMM) to far exceed Kepler SMX efficiency.

> Maxwell retains and extends the same CUDA programming model as in previous NVIDIA architectures such as Fermi and Kepler, and applications that follow the best practices for those architectures should typically see speedups on the Maxwell architecture without any code changes.
So, why is GM204 better for compute than GK104? Or why is GM200 better than GK110?
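One concrete, if simplified, answer hiding in NVIDIA's own text above is the scheduler arithmetic (a sketch based on the published SMX/SMM layouts, not an official breakdown):

```python
# Kepler's SMX pairs 192 cores with 4 warp schedulers, so each scheduler
# "owns" more lanes than one 32-wide warp and needs dual-issue / ILP to
# fill them. Maxwell's SMM pairs 128 cores with 4 schedulers, one 32-wide
# partition each, matching the warp size exactly.
WARP = 32

kepler_cores_per_sched = 192 // 4   # SMX: 48 lanes per scheduler
maxwell_cores_per_sched = 128 // 4  # SMM: 32 lanes per scheduler

print(kepler_cores_per_sched > WARP)    # True: ILP required to fill SMX
print(maxwell_cores_per_sched == WARP)  # True: full issue without ILP
```

That matching of issue width to warp size is a big part of why Maxwell sustains higher utilization on ordinary compute kernels than Kepler, despite having fewer cores per SM.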