
NVIDIA Pascal Thread

The irony is that the slang/jargon use of "literally" is done to create hyperbole, hyperbole being defined as 'exaggerated statements or claims not meant to be taken literally.' I think this qualifies as meta. It also probably creates a paradox every time it's done. You could be destroying worlds in other dimensions!
 
Well, that was all GM204 brought to the table. Better perf/W. nVidia made a killing with it.

You're right, they improved performance by about 35% in many games when you compare the GTX 970 to the GTX 770, keeping the GTX 770's TDP, on the same node! Now tell me, how is it unrealistic to expect the GTX 1070 to beat the GTX 980 Ti/Titan X by 30% with a new architecture and a jump from 28nm to 16nm, while keeping the GTX 970's TDP? 😵
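For what it's worth, you can put rough numbers on that expectation. A minimal back-of-the-envelope sketch, assuming ~145W for a 970-class TDP and ~250W for a 980 Ti (illustrative figures, not confirmed specs):

```python
# Back-of-the-envelope GPU scaling estimate.
# All figures are illustrative assumptions, not measurements.

def relative_perf(perf_per_watt_gain: float, tdp_ratio: float) -> float:
    """Performance ratio vs. the old card: perf/W gain times TDP ratio."""
    return perf_per_watt_gain * tdp_ratio

# Maxwell on the same 28nm node: GTX 970 vs GTX 770 at roughly equal TDP,
# so the ~35% speedup came almost entirely from perf/W.
maxwell = relative_perf(perf_per_watt_gain=1.35, tdp_ratio=1.0)
print(f"GTX 970 vs GTX 770: ~{(maxwell - 1) * 100:.0f}% faster")

# The Pascal expectation: 30% over a 980 Ti (~250W assumed) at a
# 970-class TDP (~145W assumed). Solve for the perf/W gain required.
required_gain = 1.30 / (145 / 250)
print(f"Implied perf/W gain over the 980 Ti: ~{required_gain:.1f}x")
```

Under those assumptions the expectation works out to roughly a 2.2x perf/W jump, which is aggressive but is essentially the argument: a new architecture plus a full node shrink.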
 
The irony is that the slang/jargon use of "literally" is done to create hyperbole, hyperbole being defined as 'exaggerated statements or claims not meant to be taken literally.' I think this qualifies as meta. It also probably creates a paradox every time it's done. You could be destroying worlds in other dimensions!

You're scaring the piss out of me. I just want a couple big Pascals and Skylake-E. That's all I want.
 
You're right, they improved performance by about 35% in many games when you compare the GTX 970 to the GTX 770, keeping the GTX 770's TDP, on the same node! Now tell me, how is it unrealistic to expect the GTX 1070 to beat the GTX 980 Ti/Titan X by 30% with a new architecture and a jump from 28nm to 16nm? 😵

Well, I was thinking more of a 780-to-980 jump.
 
Any report on how many CUDA cores these boards will have? I have a recently purchased 980 Ti Hydro and don't see any need to upgrade any time soon, but for video editing CUDA cores are important.


Brian
 
You're right, they improved performance by about 35% in many games when you compare the GTX 970 to the GTX 770, keeping the GTX 770's TDP, on the same node! Now tell me, how is it unrealistic to expect the GTX 1070 to beat the GTX 980 Ti/Titan X by 30% with a new architecture and a jump from 28nm to 16nm, while keeping the GTX 970's TDP? 😵

This.
 
The irony is that the slang/jargon use of "literally" is done to create hyperbole, hyperbole being defined as 'exaggerated statements or claims not meant to be taken literally.' I think this qualifies as meta. It also probably creates a paradox every time it's done. You could be destroying worlds in other dimensions!

Haha, this post wins. I'm also a non-native English speaker, but when I hear someone use "literally" in a conversation/post, I already know to roll my eyes because the following sentence will be an exaggeration. I find myself doing it. Oh you crazy Americans!
 
So they use the word "literally" to verbally italicize the sentence that follows it. So if I say, "I would literally sell my children to buy two big Pascal GPUs," what I really mean is, "I would neglect all responsibilities to my family and let my kids eat ramen for 6 months so I can buy two big Pascal GPUs." The latter is literally correct.
 
If the Pascal launch really happens in May, I assume we should already have tons of information 5 weeks out? There are no hints of mass production starting. Also, GP100 utilizes HBM2. How can it be released (even as a Quadro card) without a reasonable HBM2 production process?
 
If the Pascal launch really happens in May, I assume we should already have tons of information 5 weeks out? There are no hints of mass production starting. Also, GP100 utilizes HBM2. How can it be released (even as a Quadro card) without a reasonable HBM2 production process?

Yup, AIBs would have samples months before launch, and more concrete leaks should be here already. A May paper launch (architecture deep dive, anyone?) is more likely at this point.
 
If the Pascal launch really happens in May, I assume we should already have tons of information 5 weeks out? There are no hints of mass production starting. Also, GP100 utilizes HBM2. How can it be released (even as a Quadro card) without a reasonable HBM2 production process?

Most rumours mention a post-Computex launch; only Bench-life is saying late May (so >8 weeks out) for reference cards and July+ for custom designs.
 
I need to be sincere.

AMD and nVIDIA won't release any new card on the new process until July of this year. They will revamp EVERY card they sell, even the GT1010 from nVIDIA and the AMD R5 430. I doubt that we will see any rebrands this time. I mean, a whole new process will change everything.

I was really expecting GDDR5 to die this time, leaving the way clear for GDDR5X or HBM1 as the new minimum. Sadly, that is not the case.

I won't buy the first cards unless they have GDDR5X or HBM1 at minimum. Why? Because otherwise the improvements won't be dramatic.


Now to this topic.

I see this path in terms of price and performance.

Old -- New -- Price
--- -- Titan Pascal -- USD 1500
Titan X -- GTX 1080 Ti -- USD 900
GTX 980 Ti -- GTX 1080 -- USD 700
GTX 980 -- GTX 1070 -- USD 500


And so on...
 
But people don't buy cards for performance/watt. They buy for more performance. They care about the perf/watt ratio only if they can have more performance at the same wattage, not equal performance at lower wattage. Why would someone waste another 400 USD/EUR on a 1070 if they already paid 700 for a 980 Ti? To save 100 watts? 😀
I don't know... Ever since Nvidia became more efficient in the performance/watt category, energy usage has suddenly become THE most important metric to lots of people when buying a video card, despite the fact that it didn't seem to matter whatsoever during the years when AMD/ATi was more economical in the energy-sipping category.
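The "to save 100 watts?" jab above is easy to sanity-check. A minimal sketch, assuming $0.15/kWh and 4 hours of gaming a day (both made-up figures):

```python
# Payback time for a $400 upgrade that saves 100W under load.
# Electricity price and daily usage are illustrative assumptions.
watts_saved = 100
usd_per_kwh = 0.15     # assumed electricity price
hours_per_day = 4      # assumed daily gaming time
upgrade_cost_usd = 400

kwh_per_year = watts_saved / 1000 * hours_per_day * 365
savings_per_year = kwh_per_year * usd_per_kwh
print(f"~${savings_per_year:.0f}/year saved; "
      f"break-even in ~{upgrade_cost_usd / savings_per_year:.0f} years")
```

Under those assumptions the saving is around $22/year, so the upgrade would take roughly 18 years to pay for itself on electricity alone.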
 
I don't know... Ever since Nvidia became more efficient in the performance/watt category, energy usage has suddenly become THE most important metric to lots of people when buying a video card, despite the fact that it didn't seem to matter whatsoever during the years when AMD/ATi was more economical in the energy-sipping category.

It began with servers with the K8. Then it moved on to mobile with the Pentium M. After that it came to the desktop via Core 2. And now it's here with GPUs as well. You shouldn't be surprised.

http://hothardware.com/news/amd-pok...gpu-heat-output-in-the-misunderstanding-video

That's 2010...
 
I don't know... Ever since Nvidia became more efficient in the performance/watt category, energy usage has suddenly become THE most important metric to lots of people when buying a video card, despite the fact that it didn't seem to matter whatsoever during the years when AMD/ATi was more economical in the energy-sipping category.

This is really a simplistic answer to a more complex question.

Look back at previous flagships and try to find 2x 8-pin connectors 10 years ago. Oh, you won't find them! 😛 (even the 9800 Pro was <50W TDP) Efficiency didn't matter a whole lot over the next 10 years, as GPUs just kept adding die space and more power connectors while the TDP went up. Eventually, they couldn't add more without focusing on a more efficient architecture. A lot of 'new' GPUs 5-6 years ago were essentially the same as the old ones, just 50% bigger and on a new process node. Boom! More performance and more power usage, but slightly more efficient...

The reason efficiency became important is that it was required to (1) share architectures between mobile and desktop and (2) allow the MOST performance in the power envelope available.

I think efficiency is great as long as we get options. By that I mean we can get the best possible performance at all levels, low and high power.

You seem bitter that AMD picked the wrong time to shift the focus away from efficiency. That was a HUGE mistake and has cost them dearly. They are coming back around and have made efficiency front and center, and that will be critical to winning back designs and market share in mobile and desktop.

If you think 'efficiency' is just fanboyism, you are dead wrong. Like NV or not, they made a compelling decision to focus on efficiency and marketed it accordingly.
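The "MOST performance in the power envelope available" point boils down to a one-line identity: performance = perf/W × watts. Once the power budget is fixed, efficiency is the only lever left. A tiny sketch with made-up numbers:

```python
# With a fixed power budget, performance scales directly with perf/W:
#   performance = perf_per_watt * power_budget_watts
# All numbers below are made up purely to illustrate the point.
power_budgets = {"mobile": 75, "desktop": 250}        # watts, assumed
perf_per_watt = {"old arch": 1.0, "new arch": 1.5}    # relative, assumed

for segment, watts in power_budgets.items():
    for arch, eff in perf_per_watt.items():
        print(f"{segment:8s} {arch}: relative performance = {eff * watts:.0f}")
```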
 
This is really a simplistic answer to a more complex question.

Look back at previous flagships and try to find 2x 8-pin connectors 10 years ago. Oh, you won't find them! 😛 (even the 9800 Pro was <50W TDP) Efficiency didn't matter a whole lot over the next 10 years, as GPUs just kept adding die space and more power connectors while the TDP went up. Eventually, they couldn't add more without focusing on a more efficient architecture. A lot of 'new' GPUs 5-6 years ago were essentially the same as the old ones, just 50% bigger and on a new process node. Boom! More performance and more power usage, but slightly more efficient...

The reason efficiency became important is that it was required to (1) share architectures between mobile and desktop and (2) allow the MOST performance in the power envelope available.

I think efficiency is great as long as we get options. By that I mean we can get the best possible performance at all levels, low and high power.

You seem bitter that AMD picked the wrong time to shift the focus away from efficiency. That was a HUGE mistake and has cost them dearly. They are coming back around and have made efficiency front and center, and that will be critical to winning back designs and market share in mobile and desktop.

If you think 'efficiency' is just fanboyism, you are dead wrong. Like NV or not, they made a compelling decision to focus on efficiency and marketed it accordingly.

I think his point was that AMD/ATi used to have the more power-efficient architecture back in the 5870/Fermi days, and the same people who now can't praise Nvidia enough for this very reason didn't seem to care much about it back then.
Which may or may not be true; the only given is that there were fanboys back then and there are fanboys now. Fanboys of both camps.
 
I think his point was that AMD/ATi used to have the more power-efficient architecture back in the 5870/Fermi days, and the same people who now can't praise Nvidia enough for this very reason didn't seem to care much about it back then.
Which may or may not be true; the only given is that there were fanboys back then and there are fanboys now. Fanboys of both camps.

AMD used to have a lot more of the market then, too; perhaps the power-efficient architecture was part of the reason, whatever the Nvidia fanboys at the time claimed.
 
They will revamp EVERY card they sell, even the GT1010 from nVIDIA and the AMD R5 430. I doubt that we will see any rebrands this time. I mean, a whole new process will change everything.

They have sold older-node cards as rebrands before, and they will do it again, because 28nm is relatively cheaper. They will take 960 chips and fuse them off into 64-bit x05 cards if they want to.
 
Who cares what node a GPU that's really just a breakout box for a few monitors is built on? The only people who care are the ones trying to figure out the cheapest way to make the things. They'll keep reheating the same leftovers till not even the dog will eat them.
 
I need to be sincere.

AMD and nVIDIA won't release any new card on the new process until July of this year. They will revamp EVERY card they sell, even the GT1010 from nVIDIA and the AMD R5 430. I doubt that we will see any rebrands this time. I mean, a whole new process will change everything.

I was really expecting GDDR5 to die this time, leaving the way clear for GDDR5X or HBM1 as the new minimum. Sadly, that is not the case.

I won't buy the first cards unless they have GDDR5X or HBM1 at minimum. Why? Because otherwise the improvements won't be dramatic.


Now to this topic.

I see this path in terms of price and performance.

Old -- New -- Price
--- -- Titan Pascal -- USD 1500
Titan X -- GTX 1080 Ti -- USD 900
GTX 980 Ti -- GTX 1080 -- USD 700
GTX 980 -- GTX 1070 -- USD 500


And so on...


That makes no sense. You're telling me they're going to have the exact same performance and the exact same price for the newer cards? That won't happen. Even if it was a rebrand, they would at least drop the price by one tier.
 