
NVIDIA Pascal Thread

Page 116
I'll predict 1080 will have same performance as a 980 with a lot less power consumption.

(Too lazy to sell my 980tis and buy 1080s) 😀
 
I wonder what kind of goodies they give away at these events since it's held in Austin and open to the public. I live between San Antonio and Austin and would consider the drive up to watch it if it's typical to get coupons for the 1000-series cards or game codes or anything like that.


Not worth the drive unless you are super hyped for launch or something. Which given the location and "first come first serve" means you're out of luck if you show up on time.
 
I'm afraid you guys are all getting a bit too optimistic. There's no evidence, nor a precedent based on 480 -> 580 -> 680 -> 780 -> 980, for the speed gains you're talking about. The leaked benchmarks showed a ridiculously overclocked 1080 just beating a moderately overclocked 980 Ti.

So here's what I think it will look like:

GTX 1080 ($500) = GTX 980 Ti+10%
GTX 1070 ($370) = GTX 980+20% (i.e., GTX 980 Ti - 10%)
GTX 1060 Ti ($250) = GTX 970+10% (i.e., GTX 980 Ti - 40%)

There are pricepoints that Nvidia knows it has to hit to sell product. It's not operating in a vacuum, even without competition from AMD at the high end.

And anyone expecting 980Ti-class performance from the 1060 Ti is going to be seriously disappointed, and will be posting on this forum tonight how "Pascal=fail". Building up unrealistic expectations doesn't make Nvidia culpable here, folks.
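The equivalences in that prediction list are just baseline conversions. A minimal sketch of the arithmetic, using a hypothetical relative-performance index implied by the post (these numbers are assumptions for illustration, not measured benchmarks):

```python
# Hypothetical relative-performance index (GTX 980 = 1.00), implied by
# the predictions above -- illustrative assumptions, not measured data.
perf = {
    "GTX 970": 0.80,
    "GTX 980": 1.00,
    "GTX 980 Ti": 1.33,  # implied by "980 + 20% = 980 Ti - 10%"
}

def relative(card_a, card_b, scale_a=1.0):
    """Express scale_a * card_a as a fraction of card_b's performance."""
    return scale_a * perf[card_a] / perf[card_b]

# "GTX 1070 = GTX 980 + 20%" expressed against the 980 Ti:
print(round(relative("GTX 980", "GTX 980 Ti", 1.20), 2))  # -> 0.9, i.e. 980 Ti - 10%
```

The same helper converts any "card X + N%" claim into another card's baseline, which is where most of the disagreement in this thread actually comes from.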
 
They just put up an Order of 10 thing on GeForce.com. I guess that's what the countdown was for?

I know why they used those words for the event title and so do you. Subliminal marketing, deep-mind suggestive coercion technique to make you buy multiple cards. 10 is too much, but with 10 in your mind, 2, 3, or 4 cards sound like a good deal and low cost.

Nvidia whispers in your mind, "Psst! Hey Guys! Go ahead and order 10!"
 
Gtx1070 45% faster than a gtx970
Gtx1080 45% faster than a gtx 980
That's with directx 11 games.

55% faster for both with the latest directx 10 games.
1080 will do even better in 4k res.
 
I know why they used those words for the event title and so do you. Subliminal marketing, deep-mind suggestive coercion technique to make you buy multiple cards. 10 is too much, but with 10 in your mind, 2, 3, or 4 cards sound like a good deal and low cost.

Nvidia whispers in your mind, "Psst! Hey Guys! Go ahead and order 10!"
Maybe that's the amount of cards they will sell in the beginning, since Apple is taking nearly all the 16 nm wafers...
 
Gtx1070 45% faster than a gtx970
Gtx1080 45% faster than a gtx 980
That's with directx 11 games.

55% faster for both with the latest directx 10 games.
1080 will do even better in 4k res.

You mean DX12 instead of DX10, right? You're probably spot on. The jump between Maxwell and Pascal will be about the same as the jump between Kepler and Maxwell. It seems disappointing on the surface considering the new node, but in the context of timings both Kepler and Fermi were on the market longer than Maxwell so it's not unexpected. If 16nm FF is prevalent for as long as 28nm was, expect Volta to have a similar, or slightly smaller, jump over Pascal than Pascal over Maxwell.

Instead of 2x performance increases every 24 months, we're getting 50% increases every 18 months.
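That cadence claim can be annualized for comparison. A back-of-envelope check (my arithmetic, not from the post):

```python
# Convert "speedup x every M months" into an equivalent yearly growth factor.
def yearly_factor(speedup, months):
    return speedup ** (12.0 / months)

old = yearly_factor(2.0, 24)   # 2x every 24 months
new = yearly_factor(1.5, 18)   # 1.5x every 18 months
print(round(old, 2), round(new, 2))  # -> 1.41 1.31
```

So in yearly terms the slowdown is from roughly 41% growth per year to roughly 31%.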
 
And you think they're gonna give a 40+% jump on the first gen of a new node? You do realise that they will milk the new node for at least 3-4 years, eh?
 
You mean DX12 instead of DX10, right? You're probably spot on. The jump between Maxwell and Pascal will be about the same as the jump between Kepler and Maxwell. It seems disappointing on the surface considering the new node, but in the context of timings both Kepler and Fermi were on the market longer than Maxwell so it's not unexpected. If 16nm FF is prevalent for as long as 28nm was, expect Volta to have a similar, or slightly smaller, jump over Pascal than Pascal over Maxwell.

Instead of 2x performance increases every 24 months, we're getting 50% increases every 18 months.
You know 50% above a GTX 980 is only about 22% faster than a 980 Ti,
and 50% above a 970 is 980 Ti + 5%.
The GTX 980 Ti is 43% faster than the GTX 970 and 23% faster than the GTX 980:
[relative performance chart, 2560x1440]
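Using the relative figures quoted above (980 Ti = 970 + 43% = 980 + 23%), the conversion works out like this (a sketch of the arithmetic only; the percentages are the post's, at 2560x1440):

```python
# Relative performance with GTX 980 Ti = 1.00, from the figures quoted above.
perf_980ti = 1.00
perf_980 = perf_980ti / 1.23   # 980 Ti is 23% faster than the 980
perf_970 = perf_980ti / 1.43   # 980 Ti is 43% faster than the 970

# A card 50% above a 980, expressed against the 980 Ti:
print(round(1.5 * perf_980 / perf_980ti - 1, 2))  # -> 0.22 (about 22% faster)
# A card 50% above a 970, expressed against the 980 Ti:
print(round(1.5 * perf_970 / perf_980ti - 1, 2))  # -> 0.05 (about 5% faster)
```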
 
You know 50% above a GTX 980 is only about 22% faster than a 980 Ti,
and 50% above a 970 is 980 Ti + 5%.
The GTX 980 Ti is 43% faster than the GTX 970 and 23% faster than the GTX 980

I'm comparing chips that are the successors to the previous generation's chip. GM204 was about 50-60% faster than GK104 at the time of its release. About 18 months later (20 months to be more precise) we're getting a chip about 50% faster than GM204. GM200 was about 50% faster than GK110. If GP100 comes to market as Titan 3 this fall, it will have been about 18-20 months after GM200, and we'll get ~50-60% more performance than GM200. That is what I am referring to.
 
I'm comparing chips that are the successors to the previous generation's chip. GM204 was about 50-60% faster than GK104 at the time of its release. About 18 months later (20 months to be more precise) we're getting a chip about 50% faster than GM204. GM200 was about 50% faster than GK110. If GP100 comes to market as Titan 3 this fall, it will have been about 18-20 months after GM200, and we'll get ~50-60% more performance than GM200. That is what I am referring to.

The GM204 came around two and a half years after the GK104, increased die area by around 33%, and took transistor count from 3.5 billion up to 5.2 billion, nearly a 50% increase.

Transistor count is what you need to look at.

The GK104 had around 500 million transistors more than the GF110, or nearly a 20% increase, and TechPowerUp put the GTX 680 at around 20% faster than the GTX 580 at launch:

https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_680/27.html

That is the problem - at 320 mm² the GP104 is probably going to have a similar transistor count to a GM200, or around 10% to 20% more.

Whereas I can see 20% to 30% better performance due to design improvements, but any more at launch would seem quite big.
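The transistor-count ratios in that post can be checked directly (counts in billions, as cited above):

```python
# Transistor counts in billions, as cited in the post above.
gf110, gk104, gm204 = 3.0, 3.5, 5.2

# GK104 over GF110: ~500 million more, close to a 20% increase.
print(round((gk104 / gf110 - 1) * 100))  # -> 17
# GM204 over GK104: nearly a 50% increase.
print(round((gm204 / gk104 - 1) * 100))  # -> 49
```

Which roughly tracks the 20% and 50% generational performance gains cited in the thread.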
 
Wouldn't NV likely have a crazy margin on this chip, considering it's so small and 16nm is quite mature after 1-2 years in production (although in different and smaller kinds of chips, admittedly)?
 
High clocks totally destroy the power efficiency though.

Wouldn't NV likely have a crazy margin on this chip, considering it's so small and 16nm is quite mature after 1-2 years in production (although in different and smaller kinds of chips, admittedly)?

Not really, 16FF isn't much cheaper on a per-transistor basis. Basically GP104 costs nVidia about as much as GM200 did, and they are pricing the 1070 and 1080 accordingly.
 
Wouldn't NV likely have a crazy margin on this chip, considering it's so small and 16nm is quite mature after 1-2 years in production (although in different and smaller kinds of chips, admittedly)?

As noted, probably not especially. In the end they will also be the fastest GPUs available, and probably roughly the fastest that can be sanely made for the consumer market at the current time too.

The question is going to be more if the price comes down when the bigger HBM2 equipped stuff arrives in 6-12 months time. You'd think so a priori, but the 970/80 managed to keep their prices so....
 