Nvidia RTX 2080 Ti, 2080, 2070 information thread. Reviews and prices September 14.


PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
When are reviews coming?

https://videocardz.com/77696/exclusive-nvidia-geforce-rtx-2080-ti-editors-day-leaks
"NVIDIA has set the date for the full reviews to September 14th."

Which matches the embargo date on leaked slides like this one:
NVIDIA-Turing-vs-Pascal-Shader-Performance.jpg
 
  • Like
Reactions: RichUK and Malogeek

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Remember the days when "IPC" increase wasn't considered a factor for GPUs? Those were the days when process scaling brought relatively easy and large improvements, and TDP was only at 150W or so, which meant there was room to increase it further.

Nvidia said they put a lot of effort into optimizing the circuitry in Pascal to get the clock speeds that high. This means a straight shrink from 28nm to 14/16nm wasn't enough to get the desired clocks. In the old days, that time and those engineering resources could have been used to further improve efficiency and performance. Nowadays, even increasing clocks isn't trivial anymore.

Knowing that, the engineering feat achieved with Turing is impressive, as whatever advantage it has over Pascal is achieved on pretty much the same process.

What people aren't happy about is the price.
 

Malogeek

Golden Member
Mar 5, 2017
1,390
778
136
yaktribe.org
At least they're giving people the opportunity to cancel their pre-orders if they're not impressed. Hopefully reviewers will have more than a couple of days with the hardware, AND access to a game with RTX support.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
And there looks to be an IPC increase vs Pascal, which is the previous architecture.

Previous architecture was Volta. They probably skipped consumer Volta because it would have been even less impressive. Doing that isn't a bad thing. Just don't fool yourselves.
 

RichUK

Lifer
Feb 14, 2005
10,341
678
126
IntelUser2000 said:
Remember the days when "IPC" increase wasn't considered a factor for GPUs? Those were the days when process scaling brought relatively easy and large improvements, and TDP was only at 150W or so, which meant there was room to increase it further.

Nvidia said they put a lot of effort into optimizing the circuitry in Pascal to get the clock speeds that high. This means a straight shrink from 28nm to 14/16nm wasn't enough to get the desired clocks. In the old days, that time and those engineering resources could have been used to further improve efficiency and performance. Nowadays, even increasing clocks isn't trivial anymore.

Knowing that, the engineering feat achieved with Turing is impressive, as whatever advantage it has over Pascal is achieved on pretty much the same process.

What people aren't happy about is the price.

I may be wrong, but the main advantage of shrinking dies is to reduce power and increase transistor count, which is key given the ever more parallel workloads of GPUs, not so much to increase clocks.
 

Malogeek

Golden Member
Mar 5, 2017
1,390
778
136
yaktribe.org
IntelUser2000 said:
Previous architecture was Volta. They probably skipped consumer Volta because it would have been even less impressive. Doing that isn't a bad thing. Just don't fool yourselves.
It's only the previous architecture in your head because it has a different name. Turing has the same cache and concurrent shader layout as Volta. People are too stuck on codenames as markers of generations. It's the same shader core and the same process; there's no reason to consider Turing a "generation" leap above Volta, which didn't even have a consumer version.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
RichUK said:
I may be wrong, but the main advantage of shrinking dies is to reduce power and increase transistor count, which is key given the ever more parallel workloads of GPUs, not so much to increase clocks.

Ok, modern chips are so complex that you can't boil everything down to a single factor. For example, I've seen plenty of people claim that parallelism can be used to improve efficiency by reducing voltage. Sure it can, but only up to a point. GPUs run at ~1.05V at peak clocks. You can't drop that voltage by half and expect half the clocks; you'll get something in the 100-200MHz range, and lose efficiency. Somewhere below 900mV, clocks start to drop drastically. So there's an optimal point to everything.
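To put rough numbers on that shape (purely illustrative: the voltage/frequency points and power constants below are made up, not measured from any real GPU), here's a minimal Python sketch. It models power as a fixed leakage term plus a dynamic term proportional to f·V², which is why perf/W peaks near the ~0.9V knee instead of improving forever as voltage drops:

Code:
# Toy model only: made-up V/f points for a hypothetical GPU, loosely shaped
# like the curve described above (clocks collapse somewhere below ~0.9 V).
STATIC_W = 30.0   # assumed leakage + fixed overhead, watts
K = 0.1           # assumed dynamic-power constant, watts per (MHz * V^2)

volt_freq_mhz = [
    (1.050, 1800),   # peak boost voltage/clock
    (0.950, 1550),
    (0.900, 1350),
    (0.800,  600),   # below the knee, clocks fall off a cliff
    (0.700,  250),
    (0.525,  150),   # half the peak voltage buys nowhere near half the clocks
]

for v, f in volt_freq_mhz:
    power = STATIC_W + K * f * v * v             # watts: static + dynamic (~ f * V^2)
    print(f"{v:.3f} V  {f:4d} MHz  {power:6.1f} W  perf/W = {f / power:.2f}")

The specific numbers don't matter, only the shape: once the voltage floor forces clocks down, the fixed costs dominate and efficiency gets worse again, which is the "optimal point" above.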

Nvidia said that in Pascal significant work was done to optimize the circuitry so it could boost clocks higher. Clearly clocks are important; everyone would increase them if they could.

Malogeek said:
It's the same shader core and the same process; there's no reason to consider Turing a "generation" leap above Volta,

You don't know this. They haven't revealed details on the Turing SMs. If they did something as simple as increasing the caches inside the SMs, then it's no longer identical.
 

TheF34RChannel

Senior member
May 18, 2017
786
310
136
IntelUser2000 said:
Remember the days when "IPC" increase wasn't considered a factor for GPUs? Those were the days when process scaling brought relatively easy and large improvements, and TDP was only at 150W or so, which meant there was room to increase it further.

Nvidia said they put a lot of effort into optimizing the circuitry in Pascal to get the clock speeds that high. This means a straight shrink from 28nm to 14/16nm wasn't enough to get the desired clocks. In the old days, that time and those engineering resources could have been used to further improve efficiency and performance. Nowadays, even increasing clocks isn't trivial anymore.

Knowing that, the engineering feat achieved with Turing is impressive, as whatever advantage it has over Pascal is achieved on pretty much the same process.

What people aren't happy about is the price.

I suspect that from now on the focus will be more on (improving) features rather than mere frequency increases - you can only go so far with those under the current circumstances, I suppose.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
But when die sizes shoot up to the size of the next tier, you shouldn't be surprised to pay next tier pricing. If you expect a company to just eat into margins to give you the same prices you are living in a dreamland.

Economic theory says that, in a perfectly competitive market, prices should drop to the marginal cost of production plus "normal profits" (defined as the minimum profit necessary to cover the opportunity cost of the owner or stockholders). In other words, more competitive markets tend to have much lower profit margins than less competitive ones. The very high gross margins that we see on Nvidia products are the result of a lack of effective competition from AMD, plain and simple. If AMD were competitive, then Nvidia would indeed "eat into margins" to lower prices and maintain market share.
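For what it's worth, the textbook version of that argument is the Lerner markup rule, (P - MC)/P = 1/|elasticity of demand|. A quick sketch with a made-up marginal cost (the $400 figure and the elasticities are purely hypothetical) shows how gross margins collapse toward zero as the demand a single firm faces becomes more elastic, i.e. as competition gets stronger:

Code:
# Illustrative only: Lerner markup rule (P - MC) / P = 1 / |elasticity|,
# applied to a hypothetical $400 marginal cost per card.
marginal_cost = 400.0

for label, elasticity in [("weak competition", 2.0),
                          ("moderate competition", 5.0),
                          ("strong competition", 20.0)]:
    price = marginal_cost / (1 - 1 / elasticity)      # profit-maximizing price
    margin = (price - marginal_cost) / price          # gross margin
    print(f"{label:>20}: price ~ ${price:,.0f}, gross margin ~ {margin:.0%}")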
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
TheF34RChannel said:
I suspect that from now on the focus will be more on (improving) features rather than mere frequency increases - you can only go so far with those under the current circumstances, I suppose.

Modern CPUs can easily exceed 4 GHz. Pascal, the best-clocking current GPU architecture, can barely do half that on a good day. Surely there's some more room for improvement.
 

sze5003

Lifer
Aug 18, 2012
14,320
683
126
Is there a reason Nvidia has formed the list of approved reviewers mentioned here? Does this mean the reviews we should pay more attention to are the ones from sites that weren't provided hardware by Nvidia?

https://m.hardocp.com/article/2018/08/28/nvidia_controls_aib_launch_driver_distribution/

I find it odd that they are going out of their way to gather so much info on the people they want reviewing their product. Has it always been like this?
 

TheF34RChannel

Senior member
May 18, 2017
786
310
136
JDG1980 said:
Modern CPUs can easily exceed 4 GHz. Pascal, the best-clocking current GPU architecture, can barely do half that on a good day. Surely there's some more room for improvement.

Comparing two different types of hardware isn't really fair, but yes, there's room for improvement. My suspicion is there's maybe not as much as we're used to, and that they [Nvidia] may choose to go with features over clock speed. This is how it comes across to me.

sze5003 said:
Is there a reason Nvidia has formed the list of approved reviewers mentioned here? Does this mean the reviews we should pay more attention to are the ones from sites that weren't provided hardware by Nvidia?

https://m.hardocp.com/article/2018/08/28/nvidia_controls_aib_launch_driver_distribution/

I find it odd that they are going out of their way to gather so much info on the people they want reviewing their product. Has it always been like this?

I wouldn't take the article at face value (word for word), but this happens across the board - control is power, and this is exercising that power. What I read is pretty lame and unprofessional, alright.
 

sze5003

Lifer
Aug 18, 2012
14,320
683
126
TheF34RChannel said:
Comparing two different types of hardware isn't really fair, but yes, there's room for improvement. My suspicion is there's maybe not as much as we're used to, and that they [Nvidia] may choose to go with features over clock speed. This is how it comes across to me.

I wouldn't take the article at face value (word for word), but this happens across the board - control is power, and this is exercising that power. What I read is pretty lame and unprofessional, alright.
Well, I'm guessing an NDA is required, but five years is odd. I suppose you can't really complain if you are given free samples. In my opinion these tech reviewers should go out and buy the cards; then no NDA is required and they can review freely. From what I understand, Consumer Reports doesn't accept free samples, as it can skew reviews.
 

DeathReborn

Platinum Member
Oct 11, 2005
2,786
789
136
This timespy leak, if true, basically implies 0 IPC improvement, at least over Volta's async improvements.

A ~10,000 score for the GTX 2080 would mean around a 14,000-15,000 score for the 2080 Ti.

Titan V already scores >14,500 at around 1650MHz boost clocks. So a 2080 Ti with an open-air cooler hitting even higher clocks should be near the 15k mark, even with 0 IPC improvement, which is what this leak implies.

I assume you factored in the fact that Volta has ~800 more cores than the 2080 Ti. I am not expecting it to be much faster than the Titan V, maybe 5% or so.
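As a back-of-the-envelope check on that core-count point, here's a minimal sketch that simply scales the Titan V figure quoted above by shader count times clock, ignoring IPC, memory bandwidth and everything else. The 5120 vs 4352 shader counts are the published specs; the 2080 Ti clocks tried below are hypothetical:

Code:
# Crude scaling: assume the Time Spy graphics score moves with shaders * clock.
titan_v_cores, titan_v_mhz, titan_v_score = 5120, 1650, 14_500   # figures quoted above
rtx_2080_ti_cores = 4352                                          # published shader count

for assumed_mhz in (1650, 1800, 1950):                            # hypothetical sustained clocks
    scale = (rtx_2080_ti_cores * assumed_mhz) / (titan_v_cores * titan_v_mhz)
    print(f"2080 Ti at {assumed_mhz} MHz: ~{titan_v_score * scale:,.0f} ({scale:.2f}x Titan V)")

On those assumptions the 2080 Ti only catches the Titan V at roughly 1950MHz sustained, which fits the "about the same, give or take a few percent" expectation unless clocks or IPC move meaningfully.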
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
DeathReborn said:
I assume you factored in the fact that Volta has ~800 more cores than the 2080 Ti. I am not expecting it to be much faster than the Titan V, maybe 5% or so.

Yes, how terrible is it that it might only be the same speed as a $3000 card with 800 more cores. :D

Some people are really stretching to find new complaints.
 
  • Like
Reactions: TheF34RChannel

moonbogg

Lifer
Jan 8, 2011
10,734
3,454
136
I selected your post because it sums up the cost debate in its entirety, so it's not a direct reply to you alone.

It appears that by keeping the names the same, Nvidia created this illusion of price hikes. If they'd named the Ti ultra/extreme/whatever, no one would be having this idea; they'd know they were buying Titan-class performance.

The fact that they launched the Ti at launch, and that it's TU102, are dead giveaways that in the past this would have been named a Titan.

One could say "what's in a name", but this is a perfect example of how much a name can do to customers.

The internet seems divided between those who are aware of the above and those who only look at the name. No offence to anyone, just an observation.

The least Nvidia could have done is explain the current segments and naming. Better still would have been a new naming scheme.

This doesn't make X70 buyers feel any better about their upgrade option being priced at $600. Six. Hundred. Dollars. For an X70 card. Anyone trying to rationalize these prices can go right ahead and throw themselves a party, but I'm not coming. I'm grabbing my stuff and I'm walking home, and no, you can't give me a ride.
 

JasonLD

Senior member
Aug 22, 2017
488
447
136
I don't have to know the exact wafer price today to know that the wafer price two and a half years later on the same node is not the same. TSMC 16nm and 12nm are the same node (same rules, same equipment, etc.); the price of 16nm wafers today (H2 2018) is cheaper than in H1 2016. Also, yields today are higher than in H1 2016. Everybody knows that, I don't need to back up anything here. ;)

If you can't back up your claim with actual evidence from a credible source, it is not a fact. "I don't need to back up anything here" is a poor way to prove your point when you are assuming those prices based on previous trends. I would say TU104 is probably cheaper to produce than GP104, but no one really knows how much Nvidia is paying TSMC for it, which is the more important factor for pricing.
 
Last edited:

gdansk

Diamond Member
Feb 8, 2011
4,792
8,084
136
JDG1980 said:
Modern CPUs can easily exceed 4 GHz. Pascal, the best-clocking current GPU architecture, can barely do half that on a good day. Surely there's some more room for improvement.
It's possible. But most graphics problems are embarrassingly parallel, which means GPUs can scale with more execution units almost trivially. If yields allow, it's easier to design 80 cores running at 1.5 GHz than 40 cores running at 3 GHz. Plus, dynamic power consumption increases proportionally with frequency and with the square of voltage. On the same process, you'll need higher voltage to hit 3 GHz (even if it's only +0.1V). If you want performance per watt, you go wide. That's why Nvidia's full-size GPUs are nearly as big as current manufacturing techniques allow, but clocked modestly.

CPUs do not follow the same logic. Finding more instruction-level parallelism is extremely difficult: code has too many branches and data dependencies. They could go wider, but the extra ALUs would almost always be executing highly speculative instructions, wasting more energy and die space because those instructions will be thrown away more often than not.
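A minimal sketch of that "go wide" trade-off (the voltages are assumed; the only relationship encoded is the dynamic-power proportionality cores × f × V² from the paragraph above): two hypothetical designs with identical cores × clock throughput, where the faster-clocked one pays the V² penalty:

Code:
# Toy comparison: equal raw throughput (cores * GHz), dynamic power ~ cores * f * V^2.
def relative_power(cores: int, ghz: float, volts: float) -> float:
    return cores * ghz * volts ** 2              # arbitrary units

wide_and_slow   = relative_power(cores=80, ghz=1.5, volts=1.00)   # assumed 1.00 V at 1.5 GHz
narrow_and_fast = relative_power(cores=40, ghz=3.0, volts=1.10)   # assumed +0.1 V to reach 3 GHz

print(f"wide & slow   : {wide_and_slow:.1f}  (80 cores x 1.5 GHz)")
print(f"narrow & fast : {narrow_and_fast:.1f}  (40 cores x 3.0 GHz)")
print(f"same cores*GHz, ~{narrow_and_fast / wide_and_slow - 1:.0%} more power for the fast design")

Even with only a 0.1V bump, the narrow-and-fast design burns roughly a fifth more power for the same nominal throughput, which is the whole case for wide, modestly clocked GPUs.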
 
Last edited:

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
sze5003 said:
Is there a reason Nvidia has formed the list of approved reviewers mentioned here? Does this mean the reviews we should pay more attention to are the ones from sites that weren't provided hardware by Nvidia?

https://m.hardocp.com/article/2018/08/28/nvidia_controls_aib_launch_driver_distribution/

I find it odd that they are going out of their way to gather so much info on the people they want reviewing their product. Has it always been like this?
All this is about is [H] being able to release a day 1 review. They really want a day 1 review for the clicks, and because they are primarily a GPU website, this is what they do best. They wanted to say "oh, Nvidia won't give us a card because they hate us over GPP", but when they asked, Nvidia said sure, just sign the NDA like everyone else. However, they made such a big fuss about the NDA that they now can't sign it without looking stupid (although they really want to, or why did they put a poll on the forum?).

So they go to plan B: use their contacts to get a card from an AIB early. Only Nvidia is locking down the pre-release drivers, so even if they got the card they couldn't get a driver to test it with. So they release an anti-Nvidia article. Tbh this should all be beneath them - it's gone beyond just informing their readership and is approaching Fudzilla-like levels of reporting anything that stirs up a GPU fanboy argument and hence drives more clicks.

In the end it doesn't matter to us - everyone will have the cards soon enough. We can read the day 1 reviews or wait a week for the reviews Nvidia has no control over at all; not a big deal. It does matter to [H], as being able to do a day 1 review of the biggest GPU release in years is something that is important to them.
 
Last edited:

SirDinadan

Member
Jul 11, 2016
108
64
71
boostclock.com
One has to consider not only die size but ray tracing ecosystem costs as well with RTX pricing. All the ray tracing gurus and experts are on NVIDIA's payroll - not just a few of them, 99% of them.
Plus, game devs don't add these last-minute RTX effects to their games for free either. And check out how much professional software will get updated to utilize RTX. You pay for all of that as well when you buy an NVIDIA GPU.
 

sze5003

Lifer
Aug 18, 2012
14,320
683
126
Dribble said:
All this is about is [H] being able to release a day 1 review. They really want a day 1 review for the clicks, and because they are primarily a GPU website, this is what they do best. They wanted to say "oh, Nvidia won't give us a card because they hate us over GPP", but when they asked, Nvidia said sure, just sign the NDA like everyone else. However, they made such a big fuss about the NDA that they now can't sign it without looking stupid (although they really want to, or why did they put a poll on the forum?).

So they go to plan B: use their contacts to get a card from an AIB early. Only Nvidia is locking down the pre-release drivers, so even if they got the card they couldn't get a driver to test it with. So they release an anti-Nvidia article. Tbh this should all be beneath them - it's gone beyond just informing their readership and is approaching Fudzilla-like levels of reporting anything that stirs up a GPU fanboy argument and hence drives more clicks.

In the end it doesn't matter to us - everyone will have the cards soon enough. We can read the day 1 reviews or wait a week for the reviews Nvidia has no control over at all; not a big deal. It does matter to [H], as being able to do a day 1 review of the biggest GPU release in years is something that is important to them.
This makes sense, although I would prefer that review sites buy their own cards and review them rather than being given samples. I want an objective review, as I think most people do. The phone review industry is similar. I guess if you think day 1 reviews may be skewed, it isn't an issue to wait another week for other reviewers to get theirs in.
 

arandomguy

Senior member
Sep 3, 2013
556
183
116
sze5003 said:
Is there a reason Nvidia has formed the list of approved reviewers mentioned here? Does this mean the reviews we should pay more attention to are the ones from sites that weren't provided hardware by Nvidia?

https://m.hardocp.com/article/2018/08/28/nvidia_controls_aib_launch_driver_distribution/

I find it odd that they are going out of their way to gather so much info on the people they want reviewing their product. Has it always been like this?

If you want another viewpoint -

https://forums.guru3d.com/threads/n...stom-rtx-2080-ti-reviews.422723/#post-5579179

Hilbert Hagedoorn (Guru3d) said:
That's just a big can of nonsense (and I initially wrote another word there). NVIDIA has always tracked which media get which AIB samples, period. You know who does that as well? AMD - they even regulate what brand and sample end up at which reviewer. How conveniently he forgets to mention that.

I think Kyle is letting his feelings get the better of his judgment.
 
Status
Not open for further replies.