Nvidia RTX 2080 Ti, 2080 (2070 review is now live!) information thread. Reviews and prices


ub4ty

Senior member
Jun 21, 2017
749
898
96
You act like you're some great representative of the masses of V&G, yet you are pretty new here. You put words in my mouth every post. Talking to you is pointless. We're done here.
Nope. You're simply over-pronouncing your presence, and I called you out for it:
https://store.steampowered.com/hwsurvey/videocard

In the grand scheme of things, nothing more than an over-pronounced vocal minority.
The lion's share of gaming/hardware enthusiasts are nowhere near the high end, mainly because you don't need it to have a wonderful experience and the price/value ratios are absurd. There's nothing to argue about here because it's simply a fact.

You put words in my mouth every post. Talking to you is pointless. We're done here.
This is you, right?
You are in the wrong forum, pal, with your love of pleb resolution, framerate and experience.
Indeed, we are done.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,996
126
Where are you getting that this is "fully reflected"? It's just a projection, not even real-time. Easy to do when you don't have any actual dynamic lighting to work with, just procedural textures. Even 2D games had this kind of "reflection" back then.
It's reflecting everything the game produces, which is what I mean by "fully". There's dynamic light from the forcefield and when you fire guns. That's reflected too.

Did you even stop to think about how today's Unreal Engine 4 handles "reflections"? "The Reflection Environment works by capturing the static scene at many points and reprojecting it onto simple shapes like spheres in reflections. Artists choose the capture points by placing ReflectionCapture Actors. Reflections update in realtime during editing to aid in placement but are static at runtime." Even the modern Unreal engine isn't "fully reflected".
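
To make the static-capture point concrete, here's a tiny Python sketch (purely illustrative, my own toy example, not UE4 or any engine's actual code): a baked probe can only return whatever it saw at bake time, while a traced reflection ray queries the scene as it is this frame.

Python:
import math

def baked_probe_lookup(direction, baked_scene):
    # The probe is just a table keyed by coarse direction, filled at bake time.
    # It can never show anything that moved after the capture was taken.
    key = round(math.atan2(direction[1], direction[0]), 1)
    return baked_scene.get(key, "sky")

def traced_reflection(origin, direction, live_objects):
    # Crude ray-march against the *current* object positions (2D circles).
    for name, (cx, cy, radius) in live_objects.items():
        for step in range(1, 200):
            t = step * 0.1
            px = origin[0] + direction[0] * t
            py = origin[1] + direction[1] * t
            if (px - cx) ** 2 + (py - cy) ** 2 <= radius ** 2:
                return name
    return "sky"

# At bake time the sphere sat along direction ~0.9 rad; it has since moved up.
baked = {0.9: "sphere"}
live = {"sphere": (0.0, 6.0, 1.0)}

ray = (0.6, 0.8)  # reflection ray towards the sphere's *old* position
print(baked_probe_lookup(ray, baked))                   # "sphere" (stale capture)
print(traced_reflection((0.0, 0.0), ray, live))         # "sky"    (it moved)
print(traced_reflection((0.0, 0.0), (0.0, 1.0), live))  # "sphere" (new position)
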
We already had good approximate reflections two decades ago that ran fine on common hardware. Nobody asked for the overpriced garbage failure that is RTX.
 
  • Like
Reactions: UsandThem

alcoholbob

Diamond Member
May 24, 2005
6,338
404
126
That BF5 RTX performance, even at the lowest setting, is an unmitigated disaster.

Little to no visual benefit.
Performance is so pathetic it's not usable.
The DX12 version that RTX requires is still a stuttering mess in BF.

Worst showcase ever. Everything went wrong.

Even if they invented an ultra-low setting (who cares anyway, one could say), it wouldn't run fast enough, and DX12 would still stutter.

Good thing I cancelled my 2080 and 2080 Ti a month or so ago. These results would have made me explode. Lol.

Nvidia never would have put RT cores on the die just for the gaming market. This is purely a play for the render farm market. The fact that the hardware can be used for gaming as an afterthought is "cool" and can be used to check off a check box. It'll be years before this will have any use for games, it's like first generation tessellation hardware. You aren't buying it for gaming features, you are paying to help Nvidia amortize their research costs for a higher margin market. A die size this big isn't even a gaming chip, it's just for Tesla/Quadro rejects to generate some revenue.
 

Pandamonia

Senior member
Jun 13, 2013
433
49
91
Fake ray tracing is likely easy to render, looks almost as good, and probably doesn't cost much performance.

I can't see AMD ever adopting this tech, and if AMD doesn't adopt it, then it won't take off. And I don't mean some half-arsed attempt to support it in drivers and stick a sticker on the box.

It's clear you need a massive GPU to do it even badly, and that rules AMD out since they don't even compete at the high end now on normal GPUs.
 
  • Like
Reactions: Krteq

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
Fair enough. But I expect that 15-20% (closer to 30%, to be fair) will turn into 40-50% as CAS and DLSS get adopted and more games use HDR, which hurts Pascal. I'll take any increase I can get at 4K, lol.
I am highly sceptical, and BF5 will not use DLSS. But sure, it can't get worse. Running 4K here too, so what alternative is there anyway? Man. Perhaps Black Friday can help with the value.
 
  • Like
Reactions: n0x1ous

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
Nvidia never would have put RT cores on the die just for the gaming market. This is purely a play for the render farm market. The fact that the hardware can be used for gaming as an afterthought is "cool" and can be used to check off a check box. It'll be years before this will have any use for games, it's like first generation tessellation hardware. You aren't buying it for gaming features, you are paying to help Nvidia amortize their research costs for a higher margin market. A die size this big isn't even a gaming chip, it's just for Tesla/Quadro rejects to generate some revenue.
Yeah, it certainly looks like that. But they are crazy expensive for what they deliver to consumers.
 

maddie

Diamond Member
Jul 18, 2010
5,002
5,194
136
Isn't this how it works?

Marketing, to make you want it soooo... bad.
Expensive, so more of your money becomes theirs.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Tensor cores should be counted together with the RT cores; neither is worth much without the other.
Tensor cores give you DLSS, which isn't ray tracing and would be useful on lower end cards that are too slow to use for ray tracing.
 

coercitiv

Diamond Member
Jan 24, 2014
6,792
14,831
136
Tensor cores give you DLSS, which isn't ray tracing and would be useful on lower end cards that are too slow to use for ray tracing.
How exactly do you imagine this would work on lower-end cards? As of now, the tensor cores use 25% of Turing's total silicon budget. Unless DLSS requires only a fraction of the current tensor core budget, a lower-end card would have to give up 1/3 of its shader area to make room for DLSS-enabling hardware. That loss alone takes you one resolution notch down, essentially negating whatever DLSS brings to the table.

Even assuming DLSS can run properly with far fewer tensor cores (which would be great and a true win), this would only reinforce the argument that most of the tensor silicon area should be counted towards the budget required to enable RT.
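
As a back-of-the-envelope check (my numbers and assumptions, not a measurement): if tensor cores really are ~25% of the die, keeping them on a cut-down part means sacrificing roughly a third of the shader area, which you can compare against the pixel-count drop of stepping down one resolution tier.

Python:
# Rough arithmetic only: assumes throughput scales with shader area and
# required throughput scales with pixel count. Both are simplifications.

pixels = {
    "1080p": 1920 * 1080,
    "1440p": 2560 * 1440,
    "4K":    3840 * 2160,
}

tensor_share = 0.25                              # claimed share of total die area
shader_cut = tensor_share / (1 - tensor_share)   # fraction of *shader* area given up

print(f"Shader area given up to keep tensor cores: {shader_cut:.0%}")
print(f"Shader throughput left:                    {1 - shader_cut:.0%}")
print(f"1440p -> 1080p pixel load:                 {pixels['1080p'] / pixels['1440p']:.0%}")
print(f"4K -> 1440p pixel load:                    {pixels['1440p'] / pixels['4K']:.0%}")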
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Good to hear, but I don't believe it.

Mantle worked for about 3 or 4 months in BF4, and only for 3GB cards. The rest of the time it stuttered. The BF1 and BF5 DX12 implementations are a mess and have never worked at any point. They have had about 4 years to make the engine work.

I think we need a new generation of consoles for them to renew it properly.

Until then it doesn't matter if DICE can fix RTX.

They said two months ago they would implement lower settings.
Now we have them.
It works on Low, much better than on Ultra, but performance is still useless.
We need a 100%-plus increase even for the low setting!
It's not going to happen.

Now you can say you can just play without it. Yes. No problem. Fine.

But then the performance gain for aftermarket stock-OC cards, 1080 Ti vs 2080 Ti, is what, like 15-20% in a game like BF5? For 1.5 years of progress.

I hope AMD gets back into, and Intel gets into, this high-end dGPU business. We need it.

I played BF4 with Mantle almost exclusively once it released. I never really had any issues after they put out the first update for it (before the first post-release Mantle update, Mantle had some annoying glitches). As for BF1, I never had stuttering, ever. And that was on an RX 480. That game was incredibly well optimized and ran great. The same went for the Battlefront games.

But I do agree that RTX is a pointless tech demo right now. It will be 5 years or so before it's really usable, IMO.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Tensor cores give you DLSS, which isn't ray tracing and would be useful on lower end cards that are too slow to use for ray tracing.

It's already been proven that DLSS is a sham. You literally get better visuals and nearly the same performance by just running the lower resolution natively, since DLSS at 4K drops your render resolution down to 1440p, and DLSS at 1440p drops it down to 1080p. It's a total sales gimmick with some bad visual side effects.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Quite a bit of info here on the background of the BF5 ray tracing numbers. There may be a lot of room for improvement.
Be sure to read the interview Q&A about the performance problems and how they are working to fix them.

https://www.eurogamer.net/articles/digitalfoundry-2018-battlefield-5-rtx-ray-tracing-analysis

There are plenty of Battlefield 5 DXR performance benchmarks out there right now, and some of the numbers look low - but revised code is forthcoming that addresses a number of issues that should address the most egregious frame-rate drops. For example, all levels right now are affected by a bounding box bug making ray tracing more expensive than it should be due to the existence of destructible terrain. Certain 'fake' god ray effects or a certain type of foliage can also impact performance negatively, sending out far more rays than they should. It's difficult to get a lock on how much performance is hit by using DXR, as the computational load changes according to content - there is no flat cost here.
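
To illustrate the bounding-box point with a toy example (my own sketch of the general idea, not Frostbite's BVH or actual DXR code): every ray whose path crosses a bounding box has to go on and test the geometry inside it, so a box inflated to cover terrain that *could* be destroyed drags far more rays into expensive tests.

Python:
import random

def ray_hits_aabb(origin, direction, box_min, box_max):
    # Standard slab test: does the ray enter the axis-aligned box?
    tmin, tmax = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-9:
            if o < lo or o > hi:
                return False
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
            if tmin > tmax:
                return False
    return True

random.seed(1)
rays = [((0.0, 5.0, 0.0),
         (random.uniform(-1, 1), -1.0, random.uniform(-1, 1)))
        for _ in range(10_000)]

tight_box = ((-1, 0, -1), (1, 1, 1))   # fits the geometry as it exists right now
loose_box = ((-8, 0, -8), (8, 4, 8))   # padded for terrain that might get destroyed

tight = sum(ray_hits_aabb(o, d, *tight_box) for o, d in rays)
loose = sum(ray_hits_aabb(o, d, *loose_box) for o, d in rays)
print(f"Rays forced into further tests: tight box {tight}, loose box {loose}")

With these made-up numbers, nearly every ray touches the loose box while only a small fraction touch the tight one, and that extra traversal cost is paid whether or not anything was actually destroyed.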
 

TestKing123

Senior member
Sep 9, 2007
204
15
81
It's reflecting everything the game produces, which is what I mean by "fully". There's dynamic light from the forcefield and when you fire guns. That's reflected too.

There is ZERO dynamic lighting going on. It's all faked effects. Reflections are fake. There is not a single bit of calculation in the code to reflect any light, because there is NO LIGHT TO REFLECT. Please don't let your ignorance of this game engine (which is open source by the way) get in the way of your arguments against Turing.

We already had good approximate reflections two decades ago that ran fine on common hardware. Nobody asked for the overpriced garbage failure that is RTX.

Again, your ignorance of this game engine (which is open source by the way) is getting in the way of your arguments against Turing.

Ignore Turing for a second and just consider ray tracing itself. You're saying that fake projection from 20 years ago is just as good as real time tracing today. Absurd.
 

PrincessFrosty

Platinum Member
Feb 13, 2008
2,300
68
91
www.frostyhacks.blogspot.com
Maybe any ray tracing should be done on the CPU, leaving GPUs to do the 3D work and putting those unused CPU cores to use. CPUs are actually not that terrible at it.

This may be an example where competition doesn't result in anything good. Nvidia wants a piece of Intel's pie, so they are pushing ray tracing. Less work for the CPU means more GPU demand.

Again, Moore's Law's slow death is going to bring about a change in this line of thinking, or no one will benefit.



We had nearly free, enormous gains for decades. Now that's gone, so everyone is frantically trying to find a solution to keep up the pace of advancement. It will not come without sacrifices, unfortunately.

The difference between RTX and older changes like hardware T&L and programmable vertex shaders is that in the old days there were enormous gains to be had from increasing TDPs and moving to a new process, both of which are becoming a precious resource.

Delay over the PCIe bus kills ideas like this, sadly. Gabe was making the same criticism of PhysX back in the day, on the external accelerator card it was originally designed for, before Nvidia bought the technology. If you want the physics to interact with game logic, then the game logic has to fetch the state of the PhysX simulation, which is very slow. That's why most of the advanced PhysX features, the stuff that was too slow for the CPU and required hardware acceleration, never interacted with the game logic. You could have a nice cloth flag that could blow in the wind and tear when shot, but those cloth fragments couldn't interact with game logic in real time; you couldn't, for example, do AI line-of-sight calculations with it, they'd just look straight through it.

Besides, games are still starving for more CPU cores. Even with DX12 and better CPU usage, modern games can push an 8-core processor to its limits, especially if what you demand is a high frame rate. I suspect that with CPUs being so general-purpose, they'd be fairly slow at RT calculations anyway.

A secondary card dedicated to RTX would make more sense. Buyers that want it can purchase one, and the main GPU doesn't need silicon wasted on it. It's pretty, but it's getting turned off for performance on my system. If DICE were to implement multi-GPU in DX12, I would give it another go. No way am I dropping resolution for it.

The only reason RTX is possible at all is because it has hardware dedicated to doing RTX ops, and that's only some small portion of the chip, like a third or so. This idea might work in a world where they create a second chip with most of its transistors used for RTX ops, and you combine two different cards, a regular rendering card plus an RT card, but not in any kind of regular two-identical-cards SLI capacity. Funny, really, because if they did that, graphics would have come full circle: in the good old days people had 2D video cards for rendering the desktop and 3D accelerators as secondary cards, with a cute little 5" VGA cable out the back connecting the output of one to the input of the other. It'd be funny to see that come back.

Over-exaggeration. If it takes you 250ms to respond to a visual stimulus, 8ms (60Hz to 120Hz) isn't going to make that much of a difference in the grand scheme of things.

What I'm saying is that's not an accurate representation of what is happening. It's too simplified to say that you see someone and make one discrete aiming movement based on that, which happens some number of milliseconds later. As I said, it's a constant loop: perception of what's on the screen, your own mental processing of that perception, movement from your hands, input back into the game space, and then an update on the screen, completing the loop. That doesn't happen a single time when you aim; it happens continuously at a rapid rate, and so in any 250ms window the increased latency of that loop will be a detriment.

It's less important for the kind of CS:GO AWPers who just make fast, predictive snap aims; it's way more important for people with slower, more deliberate aim.
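
To put some illustrative numbers on that loop (my assumptions, not measurements): even with a ~250ms reaction time, the perceive-adjust-render loop cycles many times during a fight, and every cycle pays at least one frame of display latency.

Python:
# Illustrative only: one "correction" = see the frame, adjust aim, wait for the
# next frame to show the result. Each correction pays one frame of display latency.

CORRECTIONS = 10   # assumed number of micro-adjustments during an engagement

for hz in (60, 120, 144):
    frame_ms = 1000.0 / hz
    total_ms = CORRECTIONS * frame_ms
    print(f"{hz:>3} Hz: {frame_ms:5.1f} ms per frame, "
          f"~{total_ms:5.1f} ms of display latency over {CORRECTIONS} corrections")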
 
  • Like
Reactions: ozzy702

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
I played BF4 with Mantle almost exclusively once it released. I never really had any issues after they put out the first update for it (before the first post-release Mantle update, Mantle had some annoying glitches). As for BF1, I never had stuttering, ever. And that was on an RX 480. That game was incredibly well optimized and ran great. The same went for the Battlefront games.

But I do agree that RTX is a pointless tech demo right now. It will be 5 years or so before it's really usable, IMO.
So you played BF1 on DX12 with an RX 480 without stutter?
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
How exactly do you imagine this would work on lower-end cards? As of now, the tensor cores use 25% of Turing's total silicon budget. Unless DLSS requires only a fraction of the current tensor core budget, a lower-end card would have to give up 1/3 of its shader area to make room for DLSS-enabling hardware. That loss alone takes you one resolution notch down, essentially negating whatever DLSS brings to the table.

Even assuming DLSS can run properly with far fewer tensor cores (which would be great and a true win), this would only reinforce the argument that most of the tensor silicon area should be counted towards the budget required to enable RT.
The problem with ray tracing is that it halves your fps right now, so it's no good for low-end cards, whereas DLSS is meant to give a free-ish resolution jump, which is very good for lower-end cards. If we assume that the lower-end card user has a 1080p screen (>90% of monitors in the Steam survey are that or lower), then we don't need the same tensor core performance as for a 4K screen, which is four times the size (DLSS is an image comparison, so image size will directly impact performance). Hence it may well be that the silicon budget is not the same.
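
For reference, the pixel counts behind that "four times the size" figure, assuming (as a simplification) that DLSS's tensor work scales with output pixel count:

Python:
# Output resolutions and how their pixel counts compare to 4K.
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
four_k = resolutions["4K"][0] * resolutions["4K"][1]

for name, (w, h) in resolutions.items():
    px = w * h
    print(f"{name:>5}: {px:>9,} pixels  ({px / four_k:.0%} of the 4K workload)")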
 

tajoh111

Senior member
Mar 28, 2005
320
344
136
The 1080 Ti was a horrible value and so is the 2080 Ti (even more so), which is why only 1-3% own it. It's not hard for me to understand, as I work in the industry and sit in product meetings where price segments are decided. The high end is almost always garbage value... and the consumers who buy it are sold on marketing more than anything else. You'll argue differently, and the marketing/business groups will laugh at you.


97% of the market disagrees with you, and that doesn't even include the massive number of people playing on consoles with far less graphics capability. You spent lots of money on hardware. Congrats, the broader market doesn't care.

It's amazing what type of reality the 1-3% subscribe to...

The GTX 1080 Ti was the value at the high end, and if people voted, most would agree it is the best card of this generation. It's the only card that at least matched, and likely exceeded, most people's expectations in terms of performance and price.

The reason only 1 to 3% of the population owns it is simply that people don't want to spend 700 dollars on a graphics card, but the performance increase relative to lower-end cards was close to linear at 4K.

Most people were surprised the pricing was $699, considering the lack of competition and the performance jump over a GTX 1080.

Price-to-performance has to decrease when comparing the highest-end card to the lower-end cards; otherwise you get product cannibalization of the lower-end cards. The GTX 1080 Ti definitely ate into GTX 1080 sales, and into RX Vega sales as well (outside the mining crunch). The fact that it is the only card out of stock in the marketplace right now shows that at $699, for the performance it offered, it was pretty good value for a flagship.

If the GTX 1080 Ti were, say, $500, it would drive down the price of everything below it, from the GTX 1050 to the RX 460. A GTX 1080 would need to be $330, a GTX 1070 would have to be $229, a GTX 1060 would have to be $140, and so on. Not only would this kill almost the entirety of Nvidia's profit in the consumer graphics space, with basically all profit coming from the professional side, it would also drive AMD out of the marketplace entirely.

The RX Vega 64 would have to be priced at $330, the Vega 56 at $250, and the RX 480/580 at $120. All of these are loss-making prices.

You know that it is illegal to price your products at unsustainable levels to manipulate the market and drive your competition out of business?

It's called predatory pricing.

The GTX 1080 Ti was priced perfectly relative to the competition and was a fair price for consumers. It definitely deflated/delayed the Vega launch.
 

coercitiv

Diamond Member
Jan 24, 2014
6,792
14,831
136
DLSS is meant to give a free-ish resolution jump, which is very good for lower-end cards. If we assume that the lower-end card user has a 1080p screen (>90% of monitors in the Steam survey are that or lower), then we don't need the same tensor core performance as for a 4K screen, which is four times the size (DLSS is an image comparison, so image size will directly impact performance). Hence it may well be that the silicon budget is not the same.
This hypothesis still supports the idea that most of the tensor cores in current Turing chips are used mainly for RT noise filtering, which was my original point.

I look forward to seeing the first real-world implementation of DLSS, with the ability to properly compare its gains versus various other techniques and resolutions.
 

beginner99

Diamond Member
Jun 2, 2009
5,239
1,617
136
The majority of your performance comes down to you.

Nobody claimed otherwise. What I claim is that all of this equals lower latency and a more fluid display to aim with, and that matters.

A player has 20ms latency, another has 80ms. What happens to all of your thousand-dollar gimmicks in such a case? It's a game, after all... Jesus.

I already explained that in my previous answer, which you simply ignored. Latency is handled by the "net code" of the game. It differs from game to game and is better in some, worse in others. So the code isn't perfect, and hence lower latency is of course better, but not the full 60ms better.

There's 10-30ms of latency on keyboards and mice. Add even more if you are hooked into a cheap USB hub. Add more if wireless. There's latency in the teens of milliseconds all over the place, and it's insignificant in the grand scheme of things. You can probe this with a USB protocol analyzer.

And? Every player has this latency, and you contradict yourself. According to the statement above, it's worth it to buy a better mouse with lower latency, which contradicts your point that spending money on "gaming gear" is wasted. And yeah, I agree that the mouse matters more than the display.

No one is making money off of winning except pros. You can buy a 9900K, SLI two 2080 Tis, and a 144Hz monitor, and I guarantee you I can find a slew of people with a 1050 Ti and an ancient quad-core on DDR3 who can kick your butt. If you wanna blow tons of money on hardware, by all means do it. It's your money.

And? Tuning your car so it has 50hp more doesn't make you any money either, and it costs an arm and a leg. Going on a diving vacation costs an arm and a leg, and so forth. People have hobbies they spend stupid money on for no benefit other than enjoyment. (I say this owning an "ancient" quad-core with DDR3.)

But don't pretend like it makes you an elite gaming enthusiast

Nobody pretends that. I'm only offering anecdotal evidence that the new display improved my KDR (as did the new mouse). Whether you believe it, or whether it matters to you, is up to you.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
So you played BF1 on DX12 with an RX 480 without stutter?

I did. But I do know the forums are full of people with stuttering issues, and others say it is fine. I am unsure what causes it for some and not others. I know BF5 had a bunch of DX12 stuttering, which seems to have gone away after the RTX patches came out.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
Nobody claimed otherwise. What I claim is that all of this equals lower latency and a more fluid display to aim with, and that matters.



I already explained that in my previous answer, which you simply ignored. Latency is handled by the "net code" of the game. It differs from game to game and is better in some, worse in others. So the code isn't perfect, and hence lower latency is of course better, but not the full 60ms better.



And? Every player has this latency, and you contradict yourself. According to the statement above, it's worth it to buy a better mouse with lower latency, which contradicts your point that spending money on "gaming gear" is wasted. And yeah, I agree that the mouse matters more than the display.



And? Tuning your car so it has 50hp more doesn't make you any money either, and it costs an arm and a leg. Going on a diving vacation costs an arm and a leg, and so forth. People have hobbies they spend stupid money on for no benefit other than enjoyment. (I say this owning an "ancient" quad-core with DDR3.)



Nobody pretends that. I'm only offering anecdotal evidence that the new display improved my KDR (as did the new mouse). Whether you believe it, or whether it matters to you, is up to you.
You're forming arguments without data, which is why you have no argument at all. I highly doubt you even knew it takes a whopping 250ms to respond to visual stimuli. If you did, there would be nothing to argue about.

You agree that performance comes down to (you), and that factor is orders of magnitude more significant than hardware capability. This was my argument, and it was justified with data. As you agree, there is nothing further to argue. Netcode doesn't negate the impact of network latency; it simply hides a portion of it via interpolation. Whoever's shot gets to the server first is what gets rendered outward. So if someone has 20ms ping and you have 80ms ping, netcode is not going to mask 60ms worth of latency (you get one game tick of latency hiding, max). This is why they hold tournaments on the same network, using the same machines and the same latency.
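
A toy model of that "one game tick of hiding" point (the tick rate and the idea that compensation masks roughly one tick are my simplifying assumptions; real netcode is messier):

Python:
# Toy model: the server advances in fixed ticks, and interpolation / lag
# compensation can mask roughly one tick's worth of latency difference.
# Anything beyond that remains an advantage for the lower-ping player.

TICK_RATE_HZ = 60                    # assumed server tick rate
TICK_MS = 1000.0 / TICK_RATE_HZ      # ~16.7 ms the netcode can plausibly hide

def unmasked_gap_ms(ping_a, ping_b, hide_ms=TICK_MS):
    # Latency difference left over after the netcode hides ~one tick of it.
    return max(0.0, abs(ping_a - ping_b) - hide_ms)

print(f"One tick: {TICK_MS:.1f} ms")
print(f"20 ms vs 80 ms ping -> ~{unmasked_gap_ms(20, 80):.1f} ms left unmasked")
print(f"20 ms vs 30 ms ping -> ~{unmasked_gap_ms(20, 30):.1f} ms left unmasked")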

I provided a general framing and specific details in support of it, because I am broadly aware of how all of this ties together. The issue with these pick-apart retorts is that you're simply looking for a sliver of a hole to try to claim that I'm broadly wrong. I'm not, because you agreed with my overarching framing: you are the most important factor. I'm correct because the data is fact. The difference between a Walmart-tier $8.99 mouse and a $19/$29 mouse with the specs you refer to is $10/$20. Please tell me the difference in price between a gaming monitor and a non-gaming one, plus a GPU to exploit it. An apples-and-oranges difference... diminishing returns with an order-of-magnitude difference in cost.

You are more likely to get better performance from that $10/$20 mouse upgrade than from the monitor/GPU, because it takes you 250ms to respond to visual stimuli and that expensive GPU/monitor upgrade only gives you frames 8-10ms sooner. If you get that visual stimulus 8ms sooner, nothing of value changes, my friend. I didn't contradict a single thing. Please stop trivializing an overarching point that you just agreed with.

There's nothing to argue against when you agree with the conclusion and the supporting statements are factual.

That new GPU/monitor feels like butter? Yeah, it did for me when I test-ran such a setup. I didn't like the picture quality and returned it. Guess what? I didn't miss it once it was gone. My 60Hz setup felt just as smooth. The mind is an interesting thing. All the anecdotes and subjective feels don't change your response time of 250ms. Imagine that.
 
  • Like
Reactions: catonkatonk

beginner99

Diamond Member
Jun 2, 2009
5,239
1,617
136
I highly doubt you even knew it takes a whopping 250ms to respond to visual stimuli. If you did, there would be nothing to argue about.

Let's make it simple. I suggest a game where we both get to see a stimulus, and whoever reacts first wins $100 every time. The catch is, I get to see the stimulus 5ms earlier than you. Every single time. Do you want to play?
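
For what it's worth, here's a tiny simulation of that bet (the reaction-time distribution is made up; the point is only that a constant head start shifts the win rate even though it's small next to ~250ms):

Python:
import random

random.seed(42)

def win_rate(head_start_ms, mean_ms=250.0, sd_ms=25.0, trials=100_000):
    # Both players draw from the same reaction-time distribution;
    # player A simply sees the stimulus head_start_ms earlier.
    wins = 0
    for _ in range(trials):
        a = random.gauss(mean_ms, sd_ms) - head_start_ms
        b = random.gauss(mean_ms, sd_ms)
        wins += a < b
    return wins / trials

for edge in (0, 5, 20):
    print(f"{edge:>2} ms head start -> A wins ~{win_rate(edge):.1%} of the time")

With those assumed numbers, the 5ms edge only nudges the win rate a few points above 50%, which is roughly the size of the effect the two of you are arguing about.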
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,996
126
[H]'s replacement 2080 Ti (Samsung RAM) failed. That's two out of three cards: https://www.hardocp.com/article/2018/11/21/rtx_2080_ti_fe_escapes_testing_by_dying_after_8_hours/

There is ZERO dynamic lighting going on. It's all faked effects. Reflections are fake. There is not a single bit of calculation in the code to reflect any light, because there is NO LIGHT TO REFLECT. Please don't let your ignorance of this game engine (which is open source by the way) get in the way of your arguments against Turing.
Ray tracing is fake as well. It's an approximation of light projected onto a device with discrete pixels. Real life doesn't work on pixels. What's your point exactly?

And where'd the bridge go with RTX "Ultra"? https://youtu.be/jaUP4LucmZM?t=929

Again, your ignorance of this game engine (which is open source by the way) is getting in the way of your arguments against Turing.
Unreal 1 isn't open source. Oldunreal has exclusive rights to work on unofficial patches but they're not allowed to distribute the code.

Ignore Turing for a second and just consider ray tracing itself. You're saying that fake projection from 20 years ago is just as good as real time tracing today. Absurd.
No, I'm saying RTX is the most garbage implementation of ray tracing possible, and it looks worse than the approximate reflections done 20 years ago on a $99 Voodoo3. A noisy, ugly slideshow running on a $1200 graphics card.

Add the glorified upscaler ("deeeeeep learning") and the faulty cards, and you have a winner. The nVidia shareholders in this thread are clearly having a tough time.