Nvidia RTX 2080 Ti, 2080 (2070 review is now live!) information thread. Reviews and prices


realibrad

Lifer
Oct 18, 2013
12,337
898
126
What "This" are you referring to?

Pricing? Pricing is up because of the massive dies and more expensive GDDR6 memory, both of which cost more money, and, surprise, they pass these increases on to the consumer.

If we were living in an alternate universe with a more competitive AMD, you still wouldn't be getting these cards cheaper; you would be getting different cards, without RT and with smaller dies, for less.

This is the situation: the combination of what was released and the price it was released at. These cards seem to have been positioned to not really compete with existing products.

It could be that these have much lower performance than Nvidia had hoped, but I think it's more likely that they have increased the price to help segment these cards so they do not eat away at the existing stock of old chips.
 
  • Like
Reactions: ub4ty

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
It could be that these have much lower performance than Nvidia had hoped, but I think it's more likely that they have increased the price to help segment these cards so they do not eat away at the existing stock of old chips.

No, they cost more to produce. This isn't even something to speculate on. This is a fact. More production costs = higher selling price. That is just reality.

You don't have to come up with odd theories about keeping prices high to clear old stock, when there is a clear cut reason for the high prices.

If they ran out of 1080Tis tomorrow, do you think the prices would drop for RTX cards? Dream on...
 

realibrad

Lifer
Oct 18, 2013
12,337
898
126
No, they cost more to produce. This isn't even something to speculate on. This is a fact. More production costs = higher selling price. That is just reality.

You don't have to come up with odd theories about keeping prices high to clear old stock, when there is a clear cut reason for the high prices.

If they ran out of 1080Tis tomorrow, do you think the prices would drop for RTX cards? Dream on...

Are you saying that the cost of the new chips is perfectly in line with the price of the new chips, so that, compared to the previous generation, the margins are the same?

That would be interesting.
 
  • Like
Reactions: Headfoot

Ottonomous

Senior member
May 15, 2014
559
292
136
No, they cost more to produce. This isn't even something to speculate on. This is a fact. More production costs = higher selling price. That is just reality.

You don't have to come up with odd theories about keeping prices high to clear old stock, when there is a clear cut reason for the high prices.

If they ran out of 1080Tis tomorrow, do you think the prices would drop for RTX cards? Dream on...
So we're to assume that Nvidia is in no way responsible for the executive decision to incorporate RTX on a node that wasn't ready for it? Pricing is what matters to consumers, not the need to shoulder production costs to help Nvidia.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
So we're to assume that Nvidia is in no way responsible for the executive decision to incorporate RTX on a node that wasn't ready for it? Pricing is what matters to consumers, not the need to shoulder production costs to help Nvidia.

So we should never get new forward looking features that use die space?

Because whenever anyone tries to add significant forward-looking features that use a lot of die space, you are going to see a negative impact on perf/$ in older games. If you think manufacturers are just going to eat increased production costs, you aren't being anywhere near realistic.

Serious feature progress has a cost.

I'd rather that we have that progress, than conservatively keep doing the same thing over and over.
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
What "This" are you referring to?

Pricing? Pricing is up because of the massive dies and more expensive GDDR6 memory, both of which cost more money, and, surprise, they pass these increases on to the consumer.

If we were living in an alternate universe with a more competitive AMD, you still wouldn't be getting these cards cheaper; you would be getting different cards, without RT and with smaller dies, for less.

If AMD had a secret ace up its sleeve and released better-performing and better-priced cards that started taking market share, then Nvidia would lower prices.

Nvidia's margins increase every year and have for a while. You're right that the bigger die sizes are the reason for higher prices, because without competition Nvidia wants to maintain/increase margins with each new gen. However, you act as if the 1000 series was barely scraping by with low margins and there's no way Nvidia could afford to lower prices. They could afford to; there's just no reason.

The 2080 probably has comparable margins to the 1080 Ti: +$100 for a larger chip. I'd also guess 8GB of GDDR6 is cheaper than 11GB of GDDR5X; G5X was never high-volume production compared to mainstream GDDR. Considering 2017 was a year of record margins for Nvidia, the 2080 could go down in price by quite a bit and still sell for a profit, just less of it.

The 2080 Ti at $1200 likely improved margins. In a cutthroat competitive environment, prices could go down by hundreds of dollars and they'd still make margins, just much smaller.

TL;DR: you're right that die sizes are the primary reason for the price hikes. But stop acting like they are barely selling this at a profit and that there is no way the price could possibly be lower if we had competition taking away Nvidia's market share.
 

Ottonomous

Senior member
May 15, 2014
559
292
136
So we should never get new forward looking features that use die space?

Because whenever anyone tries to add significant forward-looking features that use a lot of die space, you are going to see a negative impact on perf/$ in older games. If you think manufacturers are just going to eat increased production costs, you aren't being anywhere near realistic.

Serious feature progress has a cost.

I'd rather that we have that progress, than conservatively keep doing the same thing over and over.
A cost that doesn't remove accountability from the company implementing it, something you've made abundantly clear since day 1 that you think it should. Innovation also includes the cost-effectiveness and sophistication of the product.

It's also very suspicious timing, right after mining redefined the pricing landscape. And you assume ray tracing is already set in stone as a viable product worth investing $1,200/$800 in, alongside a small perf bump.

At least present a semblance of impartiality or consumer awareness.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
A cost that doesn't remove accountability from the company implementing it, something you've made abundantly clear since day 1 that you think it should. Innovation also includes the cost-effectiveness and sophistication of the product.

What accountability? How is Nvidia accountable? These are luxury products, not government services.

If AMD had a secret ace up its sleeve and released better-performing and better-priced cards that started taking market share, then Nvidia would lower prices.

Nvidia's margins increase every year and have for a while. You're right that the bigger die sizes are the reason for higher prices, because without competition Nvidia wants to maintain/increase margins with each new gen. However, you act as if the 1000 series was barely scraping by with low margins and there's no way Nvidia could afford to lower prices. They could afford to; there's just no reason.

No, I don't. I act like you shouldn't expect companies to lower margins to make you happy.

They have very healthy gross profit margins, though their operating/net margins aren't that large (until mining blew them up). Their strong GPU margins fund extensive R&D.


The 2080 probably has comparable margins to the 1080 Ti: +$100 for a larger chip. I'd also guess 8GB of GDDR6 is cheaper than 11GB of GDDR5X; G5X was never high-volume production compared to mainstream GDDR. Considering 2017 was a year of record margins for Nvidia, the 2080 could go down in price by quite a bit and still sell for a profit, just less of it.

The 2080 Ti at $1200 likely improved margins. In a cutthroat competitive environment, prices could go down by hundreds of dollars and they'd still make margins, just much smaller.

I agree that margins are likely similar between the 2080 and 1080 Ti, though margins on the 2080 Ti are certainly much LESS than they were for the Titan X/Xp, which it replaces in the lineup. 2017 margins were a record largely because of mining. I expect lower margins in 2018 despite the higher prices.

In a cutthroat competitive environment, they never would have built a consumer product with a 754mm2 die.
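To put rough numbers on why a 754 mm² die is such a big deal cost-wise, here's a back-of-envelope sketch comparing a roughly 471 mm² GP102-class die with a roughly 754 mm² TU102-class die on a 300 mm wafer. The textbook dies-per-wafer formula, the simple Poisson yield model, and the assumed defect density are my own illustration, not Nvidia's or TSMC's actual figures:

```python
import math

WAFER_DIAMETER_MM = 300.0
DEFECT_DENSITY_PER_CM2 = 0.1  # assumed defect density, purely illustrative

def gross_dies(die_area_mm2: float) -> float:
    """Rough gross-die estimate: wafer area / die area, minus an edge-loss term."""
    d = WAFER_DIAMETER_MM
    return (math.pi * (d / 2) ** 2) / die_area_mm2 - (math.pi * d) / math.sqrt(2 * die_area_mm2)

def poisson_yield(die_area_mm2: float, d0_per_cm2: float) -> float:
    """Simple Poisson yield model: larger dies are more likely to catch a defect."""
    return math.exp(-d0_per_cm2 * die_area_mm2 / 100.0)

for name, area in [("GP102-class (~471 mm^2)", 471.0), ("TU102-class (~754 mm^2)", 754.0)]:
    g = gross_dies(area)
    y = poisson_yield(area, DEFECT_DENSITY_PER_CM2)
    print(f"{name}: ~{g:.0f} gross dies/wafer, ~{g * y:.0f} good dies at {y:.0%} yield")
```

Even before yield, the bigger die gives you roughly 40% fewer candidates per wafer, and yield losses compound on top of that.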
 
Last edited:

Malogeek

Golden Member
Mar 5, 2017
1,390
778
136
yaktribe.org
If I was looking at spending $700 for a 1080Ti I would certainly consider spending the extra $100 for the same performance plus a bunch of forward looking features.
One of my issues with the 2080 and the extra features is that no one knows how well they will perform. Since Nvidia decided to rush everything out as soon as these chips were made, there are zero RTX implementations out there for anyone to compare. The Star Wars demo shown on a 2080 isn't useful for anything to do with games, and the only other demos were at Gamescom with a couple of titles that some people could play, which barely performed well on a 2080 Ti.

The used market is a huge consideration as well. You can pick up an AIB 1080 Ti for $300 less that has a couple of years of warranty left on it.
 
Last edited:
  • Like
Reactions: ub4ty and psolord

Ottonomous

Senior member
May 15, 2014
559
292
136
What accountability? How is Nvidia accountable? These are luxury products, not government services.
Accountable as in they deserve healthy skepticism from consumers, and the financial consequences, for the decision to force RTX and new prices onto the newer generation of cards, taking advantage of poor competition and the mining crisis.

Questions arise: is this a simple transfer of compute capabilities used to boost consumer prices? Why couldn't they have put these features on dedicated cards? Why not partition GTX and RTX? Why not wait until a significant node transition (7nm) to produce far more affordable options? Why assume that the consumer has a vested interest, not in a generational improvement, but in another gimmicky set of technologies from a company known to wield them against competitors?

At the end of the day, executives at Nvidia approved this; ray tracing isn't divine providence that we should all sing and dance to, and self-flagellate over the manufacturing costs for.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
27,059
15,994
136
But the 2080 has more efficient CUDA cores, so it is really hard to tell which way untested applications will swing:


Then there is the new frontier of Tensor AI applications that will open up...

I bet you won't make it much more than 6 months before you get an RTX card to play with.
The problem is money: 20% more power for roughly 200% of the cost. No way. I just got two 1080 Ti FTW3s (6696) for $719 each. The same EVGA chip in a 2080 Ti is $1349.

Not going to be in my future anytime soon.
 
Last edited:

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
The problem is money: 20% more power for roughly 200% of the cost. No way. I just got two 1080 Ti FTW3s (6696) for $719 each. The same EVGA chip in a 2080 Ti is $1349.

Not going to be in my future anytime soon.

Or you could have purchased a 2080 for about $800: 11% more money, with about 7% more F@H performance, nearly a wash today, with many forward-looking enhancements as a bonus.
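For what it's worth, the perf-per-dollar math in these two posts is easy to sanity-check. The sketch below just plugs in the dollar figures and the rough performance deltas quoted above (the 7% and 20% numbers are the posters' estimates, not measurements of mine):

```python
def value_ratio(perf_gain: float, price_new: float, price_old: float) -> float:
    """Perf-per-dollar of the new card relative to the old one (1.0 = equal value)."""
    return (1.0 + perf_gain) / (price_new / price_old)

# Figures quoted in the two posts above (posters' estimates, not measurements).
print(f"2080 Ti vs 1080 Ti: {value_ratio(0.20, 1349, 719):.2f}x the perf/$")
print(f"2080    vs 1080 Ti: {value_ratio(0.07, 800, 719):.2f}x the perf/$")
```

That works out to roughly 0.96x the perf/$ for the 2080 (nearly a wash) and roughly 0.64x for the 2080 Ti, which lines up with the two conclusions above.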
 

Despoiler

Golden Member
Nov 10, 2007
1,968
773
136
So we should never get new forward looking features that use die space?

Because whenever anyone tries to add significant forward-looking features that use a lot of die space, you are going to see a negative impact on perf/$ in older games. If you think manufacturers are just going to eat increased production costs, you aren't being anywhere near realistic.

Serious feature progress has a cost.

I'd rather that we have that progress, than conservatively keep doing the same thing over and over.

The forward-looking features are designed for a Quadro card. They want professional users to have access to them and gamers to pay for them. It's the same thing AMD did with Vega, price of the cards notwithstanding, of course. The other issue is that ray tracing DOES NOT require any special hardware: DXR and Vulkan use a compute-based RT framework. We are going to see how necessary RT cores are when the APIs and benchmarks are officially released; DXR in October-ish, last I read. AMD will fare quite well given their architecture is compute-focused. DLSS is a feature Nvidia had to invent a gaming use case for. I'm sure it's going to prove to be less forward-looking as soon as people can play an actual game with it. The initial mode's flaw is that it undersamples high-frequency data. Nyquist is unhappy.
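To illustrate the "no special hardware required" point: at its core, ray tracing is just intersection math that any compute device can run; the only question is how fast. Below is a deliberately tiny toy in Python, one primary ray per pixel against a single sphere. It is not DXR or Vulkan code and says nothing about real-time performance; it just shows the kind of arithmetic involved:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance to the nearest hit, or None if the ray misses the sphere."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c          # discriminant of the ray/sphere quadratic
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

# One primary ray per pixel of a tiny 8x8 "image", all rays pointing down +Z.
sphere_center, sphere_radius = (0.0, 0.0, 5.0), 1.0
for y in range(8):
    row = ""
    for x in range(8):
        px, py = (x - 3.5) / 3.5, (y - 3.5) / 3.5   # map pixel to the viewing plane
        hit = ray_sphere_hit((px, py, 0.0), (0.0, 0.0, 1.0), sphere_center, sphere_radius)
        row += "#" if hit else "."
    print(row)
```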
 

jpiniero

Lifer
Oct 1, 2010
16,392
6,866
136
^^

Somebody benched the Star Wars RT demo and was getting 4 fps at 1440p on the 1080 Ti whereas the 2080 was getting like 30ish. I'm sure nVidia didn't do much optimizing for the fallback but I think it's reasonable at this point to assume that you won't really bother using RT without dedicated hardware.
 
  • Like
Reactions: PeterScott

Malogeek

Golden Member
Mar 5, 2017
1,390
778
136
yaktribe.org
The other issue is that ray tracing DOES NOT require any special hardware. DXR and Vulkan use a compute based RT framework. We are going to see how necessary RT cores are when the APIs and benchmarks are officially released.
Yeah sure, ray tracing can be done on an 80286 if you want; that doesn't mean it's useful for real time. The RT cores in Turing perform very fast BVH lookups, which is where the main acceleration for ray tracing is gained. Combine that with limiting the ray count to a small number of rays per pixel and using the Tensor cores to assist with denoising (cleanup that would otherwise take a vastly larger number of rays per pixel), and that's two very specific hardware-accelerated purposes that general-purpose compute cores won't be able to match. Will AMD compute cores be faster than Pascal using DXR? Very likely, and probably by a significant margin. Faster than dedicated hardware solving two important issues with ray tracing? Highly doubtful it will be close.
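For anyone wondering what a "BVH lookup" actually involves: traversing a bounding volume hierarchy boils down to repeating a small ray-vs-box test (the slab test) at every node visited, for every ray. Here's a generic textbook version in plain Python, my own illustration rather than anything resembling Nvidia's hardware; the point is that it's a tiny fixed operation executed millions of times per frame, which is exactly the kind of thing fixed-function hardware is good at:

```python
def ray_aabb_hit(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray hit the axis-aligned box? inv_dir holds 1/direction per axis."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        if t1 > t2:
            t1, t2 = t2, t1          # order the two slab intersections
        t_near = max(t_near, t1)     # latest entry point across all slabs
        t_far = min(t_far, t2)       # earliest exit point across all slabs
        if t_near > t_far:
            return False             # intervals no longer overlap: miss
    return True

# A ray shot along +X from the origin, tested against two BVH node boxes.
origin = (0.0, 0.0, 0.0)
inv_dir = (1.0, float("inf"), float("inf"))  # direction (1, 0, 0); zero components get infinite reciprocals
print(ray_aabb_hit(origin, inv_dir, (2.0, -1.0, -1.0), (4.0, 1.0, 1.0)))  # True: box straddles the ray
print(ray_aabb_hit(origin, inv_dir, (2.0, 2.0, 2.0), (4.0, 4.0, 4.0)))    # False: box is off to the side
```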
 

Despoiler

Golden Member
Nov 10, 2007
1,968
773
136
Yeah sure, ray tracing can be done on an 80286 if you want; that doesn't mean it's useful for real time. The RT cores in Turing perform very fast BVH lookups, which is where the main acceleration for ray tracing is gained. Combine that with limiting the ray count to a small number of rays per pixel and using the Tensor cores to assist with denoising (cleanup that would otherwise take a vastly larger number of rays per pixel), and that's two very specific hardware-accelerated purposes that general-purpose compute cores won't be able to match. Will AMD compute cores be faster than Pascal using DXR? Very likely, and probably by a significant margin. Faster than dedicated hardware solving two important issues with ray tracing? Highly doubtful it will be close.

Anyone buying these cards thinking they are the best thing since sliced bread had better understand that Nvidia is doing what they always do: attempting to shape the perception that their way is the best and only way. Silly implementations and lock-in are what they do, though. This entire launch has been about obfuscating reality with "the new stuff around the corner." No one has a complete understanding of what the reality of RT performance actually is; we don't even have v1.0 APIs yet. There are some clues out there, and they don't make me think previous generations from either company are simply untenable like Nvidia makes it seem.

Check the tweets below. A real-deal dev with real numbers.

https://twitter.com/sebaaltonen/status/1032283494670577664?lang=en
Claybook ray-traces at 4.88 Gigarays/s on AMD Vega 64. Primary RT pass. Shadow rays are slightly slower. 1 GB volumetric scene. 4K runs at 60 fps. Runs even faster on my Titan X. And with temporal upsampling even mid tier cards render 4K at almost native quality.

https://twitter.com/sebaaltonen/status/976132464337764352?lang=en
Claybook is ray-tracing at 60 fps on 1.3 Tflop Xbox One :)
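As a quick sanity check on how the quoted 4.88 Gigarays/s squares with "4K at 60 fps" (assuming one primary ray per pixel, which is my assumption, not something the tweet states):

```python
width, height, fps = 3840, 2160, 60
quoted_rays_per_second = 4.88e9                        # figure from the tweet above

primary_rays_per_second = width * height * fps          # rays needed for 1 ray per pixel
print(f"Rays needed for 1 ray/pixel at 4K60: {primary_rays_per_second / 1e9:.2f} Gigarays/s")
print(f"Implied ray budget per pixel: {quoted_rays_per_second / primary_rays_per_second:.1f} rays")
```

That works out to roughly 0.5 Gigarays/s needed for one primary ray per pixel, i.e. a budget of roughly 10 rays per pixel at the quoted rate.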
 

Hitman928

Diamond Member
Apr 15, 2012
6,640
12,238
136
^^

Somebody benched the Star Wars RT demo and was getting 4 fps at 1440p on the 1080 Ti whereas the 2080 was getting like 30ish. I'm sure nVidia didn't do much optimizing for the fallback but I think it's reasonable at this point to assume that you won't really bother using RT without dedicated hardware.

It was oc3d.net. It should be noted that the framerates in the graph are MAX frames, not average. They list minimums on their page as well, but no averages. I don't see any mention of a 1080 Ti. Also interesting to note that overclocking did nothing for performance on the 2080 Ti and is possibly within normal variance on the 2080 (possible power limit?).

https://www.overclock3d.net/reviews/gpu_displays/nvidia_rtx_2080_and_rtx_2080_ti_review/31
 

Hitman928

Diamond Member
Apr 15, 2012
6,640
12,238
136
https://www.youtube.com/watch?v=iSGpDMK6xxQ&feature=youtu.be

This was what I was referring to. Apparently it was 4K DLSS, although you can't tell what AA the 1080 Ti used in lieu of DLSS. So perhaps the comparison isn't quite valid, but it's still reasonable to assume that dedicated RT hardware is going to be necessary if you use RT.

Thanks, hadn't seen that one. Like you said, without knowing what AA the 1080 Ti used, it's hard to tell how much faster the 2080 really is, though I would expect it to be a large gap either way.

I'm not convinced a dedicated RT block is necessary yet; there's still lots of ongoing research into real-time RT techniques and the possibility of incorporating new functions into the standard graphics pipeline. It will be interesting to see how it unfolds.
 

Malogeek

Golden Member
Mar 5, 2017
1,390
778
136
yaktribe.org
Anyone buying these cards thinking they are the best thing since sliced bread had better understand that Nvidia is doing what they always do: attempting to shape the perception that their way is the best and only way. Silly implementations and lock-in are what they do, though. This entire launch has been about obfuscating reality with "the new stuff around the corner." No one has a complete understanding of what the reality of RT performance actually is; we don't even have v1.0 APIs yet. There are some clues out there, and they don't make me think previous generations from either company are simply untenable like Nvidia makes it seem.
No one in their right mind should be buying these GPUs for their RTX features. As you said, we have zero performance comparisons. It's what makes the 2080 a completely worthless GPU at the moment. At least the 2080 Ti is a significant performance increase over the 1080 Ti for gaming, so if you want to blow the money, great. They are gods at marketing in the GPU market (consumer and commercial) and will try to sell you anything.

That doesn't mean what they're doing isn't real, or that their supposed 10 years of research culminating in Turing isn't actually a smart and efficient approach to hybrid rendering. The alternative means that developers haven't even tried hybrid rendering and will somehow discover, once DXR is available, that Vega and Pascal can magically ray trace much, much faster than they thought using compute. That line of thinking is definitely not reality.

What sebbi is doing in Claybook is quite different from conventional ray tracing; I wouldn't take his numbers as directly comparable to Nvidia's own metrics. There's more info on that at B3D if you're interested.

Note that I'm personally very skeptical of anything Nvidia does. I just don't discount their tech as nonsense until we have a lot more info to go on (ie. actual games and DXR available).
 
  • Like
Reactions: Muhammed

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
It was oc3d.net. It should be noted that the framerates in the graph are MAX frames, not average. They list minimums on their page as well, but no averages. I don't see any mention of a 1080 Ti. Also interesting to note that overclocking did nothing for performance on the 2080 Ti and is possibly within normal variance on the 2080 (possible power limit?).

https://www.overclock3d.net/reviews/gpu_displays/nvidia_rtx_2080_and_rtx_2080_ti_review/31
Why is the Ti so much faster than the 2080 here? It doesn't have double the RT cores.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
^^

Somebody benched the Star Wars RT demo and was getting 4 fps at 1440p on the 1080 Ti whereas the 2080 was getting like 30ish. I'm sure nVidia didn't do much optimizing for the fallback but I think it's reasonable at this point to assume that you won't really bother using RT without dedicated hardware.
They didn't, and it's likely that they purposely gimped it.
Pre-launch, I had discussions on other forums to try to get to the bottom of the claimed Gigarays/sec metrics.
Throughout those discussions, aspects of the hardware and software were probed in detail.

In particular, a user grabbed DX12 and probed the DXR fallback code.
In it, he found a handful of one-line code changes that resulted in performance doubling, and it had to do with the type of memory primitive used. BVH construction has been stated to run on the CUDA cores; RT cores only accelerate intersection tests.

Performance drops through the floor when you have reflections and ray divergence, which is why these are toned down even on GeForce 20 cards, as discussed around the Battlefield gaming demo and others. GeForce 20, as stated, is not ray tracing... it is hybrid ray tracing. They still use CUDA cores for BVH construction, they still use the rasterizer pipeline heavily... RT cores are only used for intersection testing, and the raw result looks like a grainy mess. From there, they use Tensor cores heavily to apply various forms of filtering/interpolation to produce a sensible real-world image. Like the prior implementation, this one is just a series of hacks. Better hacks, but hacks nonetheless, and hacks that still fall short of true ray tracing. So it's literally a prototype platform: real-time ray tracing version 1.0.

Something undeserving of an initial buy outside of developers, especially at a premium.


Yeah sure, ray tracing can be done on an 80286 if you want; that doesn't mean it's useful for real time. The RT cores in Turing perform very fast BVH lookups, which is where the main acceleration for ray tracing is gained. Combine that with limiting the ray count to a small number of rays per pixel and using the Tensor cores to assist with denoising (cleanup that would otherwise take a vastly larger number of rays per pixel), and that's two very specific hardware-accelerated purposes that general-purpose compute cores won't be able to match. Will AMD compute cores be faster than Pascal using DXR? Very likely, and probably by a significant margin. Faster than dedicated hardware solving two important issues with ray tracing? Highly doubtful it will be close.
Appeals to extremes are unnecessary.
30 FPS is as useless as 10 FPS.
RT cores perform intersection tests.
BVH is constructed on the CUDA cores, just like it is today.
Tensor cores help mask how crappy the end product is.

Essentially, these are hacks atop core functionality that is still performed on the CUDA cores.
Of the two new hardware-accelerated portions of the cards, one accelerates intersection testing; the other masks how limited the ray-tracing methodology is. Together this is known as hybrid ray tracing: a series of hacks, just like the prior version.
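As a rough picture of the division of labor being described, here's a conceptual sketch of one hybrid-rendered frame. The stage names and stub functions are mine, chosen purely for illustration; this is not Nvidia's pipeline or any real API:

```python
# Conceptual sketch of one hybrid-rendered frame, following the split described above.
# All stage functions are stand-in stubs (my naming), not any real API.

def rasterize(scene, camera):
    return {"g_buffer": []}          # conventional raster pass on the shader/CUDA cores

def build_or_refit_bvh(scene):
    return {"bvh_nodes": []}         # acceleration structure built on general compute

def trace_rays(bvh, g_buffer, rays_per_pixel):
    return {"noisy_rt": [], "rpp": rays_per_pixel}   # ray/box and ray/triangle intersection tests
                                                     #   (the part the RT cores accelerate)
def denoise(rt_result, g_buffer):
    return {"clean_rt": []}          # filtering/ML cleanup of the sparse, noisy result
                                     #   (the part the Tensor cores are used for)
def composite(g_buffer, rt_layer):
    return {"frame": (g_buffer, rt_layer)}           # merge RT effects back into the raster image

def hybrid_rt_frame(scene, camera):
    g_buffer = rasterize(scene, camera)
    bvh = build_or_refit_bvh(scene)
    rt = trace_rays(bvh, g_buffer, rays_per_pixel=1)  # ray budget kept very low, hence the noise
    return composite(g_buffer, denoise(rt, g_buffer))

print(hybrid_rt_frame(scene={}, camera={}))
```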

This is interesting because it is one part software and one part hardware.
Since the hardware is built on a hacky implementation, it is anyone's game as to how to exploit it in software.
And on that front, it seems that when you focus more on the approach/software, you can get incredible results:


It was oc3d.net. It should be noted that the framerates in the graph are MAX frames, not average. They list minimums on their page as well, but no averages. I don't see any mention of a 1080 Ti. Also interesting to note that overclocking did nothing for performance on the 2080 Ti and is possibly within normal variance on the 2080 (possible power limit?).
https://www.overclock3d.net/reviews/gpu_displays/nvidia_rtx_2080_and_rtx_2080_ti_review/31
Meanwhile, the 2080 is still an absurd $800. It would be considered the absolute max for most, and it is a turd for real-time ray tracing.


- Hybrid real-time ray tracing: a new series of hacks to try to produce results for the end user that mimic ray tracing. It is barely able to achieve the FPS people equate with performance, so it will be tuned down significantly, to the point that it won't be much different from existing cards.
- DLSS: another series of hacks to try to feign performance that doesn't exist.
- Existing HW: BVH construction is a significant part of real-time ray tracing and is done on CUDA cores on both RTX and non-RTX cards. Existing hardware doesn't have RT cores to accelerate intersection tests. However, it is already clear that ray performance per pixel is limited even on this new hardware, such that the bulk of the visual magic is done by Tensor core approximations and hacks.

A more mature architecture is one in which BVH construction is accelerated much more, RT cores are broken out into a separate MCM, and there is far less "meme learning" approximate image generation via Tensor cores.

Beta prototype hardware at a premium.
It's as another user said: this is really meant for developers/Quadro cards, but to make the sale there they had to create an end-user install base, which is why this landed in a new line of GeForce cards. Also, with mining setting a new price precedent and a clearly vocal group of people who will buy their wares no matter the cost, Nvidia jacked up the price. Lastly, they didn't want to compete against the huge inventory of Pascal they still had to sell. So, win/win across the board.

This is really a dead conversation from this point.
The compute performance is interesting, but not at these prices, and not if it means wading into a whole new paradigm of beta hardware to find potential tweaks, especially with 7nm right around the corner and the power utilization off the charts. For gaming, the 1% or less will buy this no matter what.
 
Last edited:

moonbogg

Lifer
Jan 8, 2011
10,731
3,440
136
These cards are a huge pile of Bantha crap: overpriced, underperforming, and missing key features. They want everyone to say, "Screw you, Nvidia! I'm buying a 1080 Ti instead!" so we absorb their bad bet on mining demand. Hot, steaming and fresh. Bantha. Bantha.
 

arandomguy

Senior member
Sep 3, 2013
556
183
116
What "This" are you referring to?

Pricing? Pricing is up because of the massive dies and more expensive GDDR6 memory, both of which cost more money, and, surprise, they pass these increases on to the consumer.

If we were living in an alternate universe with a more competitive AMD, you still wouldn't be getting these cards cheaper; you would be getting different cards, without RT and with smaller dies, for less.

I feel you greatly understate how much flexibility they have with these products relative to the cost of production.

Nvidia cut the price of the GTX 780 by $150 five months after launch. It was just a coincidence that costs came down by the exact amount needed, in exactly the right time frame, to price-match the R9 290X. Or maybe it was the miracle Nvidia worked by managing to cut enough costs in a few weeks to enable a $150 price drop on the GTX 280 and a $100 price drop on the GTX 260.

Actually, a better example of why these things aren't priced to cost comes from AMD, the "poor" company supposedly with razor-thin margins and no wiggle room. AMD's Fury Nano launched at $650 and was price-cut to $500 a mere four months post-launch. Meanwhile, the Fury X launched at $650 and remained at $650 MSRP. The difference between the two cards is the PCB and heatsink. I guess AMD worked some selective magic here and reduced the production costs of the Fury Nano's PCB and heatsink so the price could go down.

These are all luxury goods and not commodities. They are priced based on what they feel will generate the most income business wise with considerations of overall business strategy. Cost does not dictate the pricing for these technology products.
 
  • Like
Reactions: krumme and ub4ty