[ H ]: BF5 Raytracing VRAM easily exceeds 6GB

Page 3

krumme

Diamond Member
Oct 9, 2009
5,786
172
136
#51
This certainly would be most people, myself included, at least if I had a 2060. I have been accustomed to a 60+ fps BF experience since my E6750 + 8800GTS 512MB back in 2007. It kind of hurt on some cards having to disable HBAO or lower effects, but in intense firefights all I care about is killing the enemy. Casual gamers will probably max the game out, deal with the dips, then find warrant in an upgrade later on, and why not?

Raytracing in BF5 might as well be the next Crysis benchmark: lovely to look at, and believe me, I benched that game on every GPU upgrade from the 8800GTS 512MB right up to the 1070 Ti, but at the end of the day a few settings will be tuned down.
I think even casual gamers will game at max for 3 hours, get irritated in the intense fights because they are dominated at 40 fps, and then lower settings. After 30 seconds they'll forget they are running on lower settings and just game. Perhaps a few will get into the settings later on by chance and, to their horror, discover they are running medium. Well, perhaps they will upgrade the card then.

The 2060 is no different from the 1070 in BF. Same card and functionality, 100%. Perfect for 1080p, but no RT.
 

ewite12

Junior Member
Oct 9, 2015
12
0
41
#52
Yes, that's entitlement. You are not OWED anything by these companies. No one is. This is the part that has been aggravating the crap out of me lately, mind you, not just at ATF. You come off more as a snowflake crying that you didn't get your precious price/perf metric, and yes, I probably sound like a "shill," but I'm tired of reading "I didn't get X because Y reason." And the cherry on top for me is, just like the snowflakes I spit coffee at as I read their articles in the morning, if someone doesn't agree with you they must be a "dumb consumer."
:rolleyes:
It goes both ways: NVIDIA isn't entitled to their products selling, either. A 2GB $450 card, by that example, just isn't going to sell!
 

railven

Diamond Member
Mar 25, 2010
6,537
239
126
#53
It goes both ways: NVIDIA isn't entitled to their products selling, either. A 2GB $450 card, by that example, just isn't going to sell!
Well, yeah. And it's showing. One thing I love about markets: they won't support EVERY stupid move you make, regardless of whether you're essentially a monopoly.
 
Oct 27, 2006
19,794
302
126
#54
Well, yeah. And it's showing. One thing I love about markets: they won't support EVERY stupid move you make, regardless of whether you're essentially a monopoly.
Agreed.

It's fascinating to me because it's not even a case of Nvidia just recklessly throwing their weight around and trying to bully the market with fat-margin, highly profitable cards at extortionate pricing, or anywhere near that.

RTX dies are enormous, and a gigantic percentage of their transistors is dedicated to new feature sets, based on their potential to offer significant value if the tech pays off in the real world. It's a nearly breathtaking risk, really only made possible by the utterly inept competition of recent generations. Although my very dim outlook on these features becoming practical for gaming in at least the near-term 24-36 month range is well noted by now, I don't outright blame Nvidia for trying something new on a grand scale.

The 'safe' route could easily have been to not dedicate any die space to new Tensor/RT hardware, and instead make a conventional iterative improvement on Pascal. On that path, profitable cards with higher current gaming performance could have been made across many segments, and would currently be selling by the barrelful. E.g. 70% the size of the 2080 Ti, 140% the performance of the 1080 Ti, $699, etc.

It's weird and altogether senseless to me, but safe or greedy it certainly isn't.
 
Mar 10, 2004
28,523
238
126
#55
Well, there are only 72 RT cores on the 2080 Ti, but there are 576 Tensor cores.

The main job of all those Tensor cores is DLSS.
 
Oct 27, 2006
19,794
302
126
#56
Well, there are only 72 RT cores on the 2080 Ti, but there are 576 Tensor cores.

The main job of all those Tensor cores is DLSS.
While this is very true, it may give a false sense of the scale of things.

On the RTX 2080 Ti die, 50% of the area is conventional (i.e. what made up Pascal, with a few minor tweaks), 25% is Tensor, and 25% is RT. The further breakdown shows that each RT unit is massive compared to a single Tensor core.

It also gives a good idea of the cost of these features. 25% of the die would make for a 50% uptick in standard (non-RT/Tensor) performance if given to traditional resources and not otherwise bottlenecked.

This would mean 200% of a 1080 Ti rather than ~135%. That gap seems to me not worth what DLSS is, at least what we've seen so far.

If they increase the applicability of DLSS I could perhaps swing my opinion, but for example I have an Nvidia-approved G-Sync 34" Ultrawide Asus display. It is 3440x1440, so DLSS is essentially totally wasted in my case. And it would have to be absolutely startlingly impressive to make up for essentially costing what could have been a 50% across-the-board boost.
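The arithmetic behind that post can be sketched in a few lines of Python, assuming the 50/25/25 die split it quotes and that performance scales linearly with die area (the hedge behind "not otherwise bottlenecked"):

```python
# Back-of-napkin sketch: reallocate the Tensor 25% of the die to
# conventional resources, assuming performance scales with area.
conventional = 0.50   # share of the die that is Pascal-style hardware
tensor = 0.25         # share spent on Tensor cores (DLSS)

# Conventional budget grows from 50% to 75% of the die: a 1.5x factor,
# i.e. the "50% uptick" in the post.
uplift = (conventional + tensor) / conventional
print(uplift)  # 1.5

# Applied to the 2080 Ti's rough ~135% of a 1080 Ti:
print(135 * uplift)  # 202.5, i.e. roughly the "200% of a 1080 Ti" figure
```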
 

coercitiv

Diamond Member
Jan 24, 2014
3,357
885
136
#58
I looked at lots of pics, but was unable to get any sense of scale between an RT and Tensor core.

Overall, it just looks like Tensor cores cover a lot of real estate.
From TechPowerup's Turing architecture info:
a Tensor Core takes up approximately eight times the die area of CUDA cores, and the SM has 8 Tensor Cores, or 96 Tensor Cores per GPC and 576 across the whole GPU. The RT core is the largest indivisible component in an SM, and there is only one of these per SM, 12 per GPC, and 72 across the die.
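Those quoted counts are internally consistent, as a quick roll-up shows (the 12-SMs-per-GPC and 6-GPC figures are implied by the numbers in the quote):

```python
# Per-SM counts from the TechPowerup quote, rolled up to GPC and die.
sms_per_gpc = 12
gpcs = 6                  # 72 SMs total on the full die
tensor_per_sm = 8
rt_per_sm = 1

tensor_per_gpc = tensor_per_sm * sms_per_gpc       # 96
tensor_per_die = tensor_per_gpc * gpcs             # 576
rt_per_die = rt_per_sm * sms_per_gpc * gpcs        # 72
print(tensor_per_gpc, tensor_per_die, rt_per_die)  # 96 576 72
```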
 
Oct 27, 2006
19,794
302
126
#60
Look at my post history; I posted a shot of it fairly recently, reflecting the 50/25/25 split (conventional/RT/Tensor respectively).
 

Dribble

Golden Member
Aug 9, 2005
1,709
130
126
#62
Just to state the obvious, but you've just taken a picture of the chip and arbitrarily chopped it into 3 boxes that don't look like they match anything on the chip underneath. So that doesn't explain how you know it's a 50/25/25 split.
 

coercitiv

Diamond Member
Jan 24, 2014
3,357
885
136
#64
Just to state the obvious, but you've just taken a picture of the chip and arbitrarily chopped it into 3 boxes that don't look like they match anything on the chip underneath. So that doesn't explain how you know it's a 50/25/25 split.
That doesn't seem accurate, given other pictures that are out there.
While that split is likely not accurate, it's not as arbitrary as you think: it was made by Nvidia themselves during the Turing presentation.

The only hard number we have so far is that 1 Tensor core equals roughly 8 CUDA cores in area; with 8 Tensor cores against 64 CUDA cores per SM, that would put total Tensor core area on parity with CUDA core area. If this is true, even a 35/35/30 split might be possible.

I would also encourage you to find better and more accurate sources to help us model this, rather than conveniently discarding the little data we have so far. Accurate or not, it's the best we've got, and it points towards ~50% of the die being dedicated to traditional raster hardware.
 
Oct 27, 2006
19,794
302
126
#66
Just to state the obvious, but you've just taken a picture of the chip and arbitrarily chopped it into 3 boxes that don't look like they match anything on the chip underneath. So that doesn't explain how you know it's a 50/25/25 split.
I didn't divide that image lol :)
 
Oct 27, 2006
19,794
302
126
#67
Just for the fun of it, I took the block diagram from the second post there, as it seems to be more accurate, and used it for some rough math.

We get a resolution of 1312x2200 representing these elements. Of those, the exclusively standard INT/FP32 features take up 340x512 pixels, Tensor 218x510, and RT 1235x362. Taking these areas and compensating for the 4x count of each INT/FP and Tensor block, you get the following volumes:

Standard : 696,320
Tensor : 472,720
Raytracing : 447,070

That works out to a split of 43% standard, 29% Tensor, and 28% RT.

This is only in relation to each other, and in an effort to provide some 'back of napkin' math towards how these resources are split in terms of die utilization. Obviously all of these elements need access to the buses and memory registers to do their work, so that's why I left everything else off the total as they have to be there for any of this to work anyway. However, if you want to split it as (everything else vs Tensor vs RT), the percentages get closer to 50/26/24.

Interesting. Of course, this is just using this particular image for the rough math; aspect ratios and scale may be off, who knows.
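A minimal re-check of that division, taking the three stated volumes at face value:

```python
# Re-check of the rough die-split math, using the three volumes above.
standard = 696_320     # 340x512 pixels x 4 INT/FP32 blocks
tensor = 472_720       # four Tensor blocks, volume as stated in the post
raytracing = 447_070   # 1235x362 pixels, one RT block

total = standard + tensor + raytracing
shares = [round(100 * v / total) for v in (standard, tensor, raytracing)]
print(shares)  # [43, 29, 28]
```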
 
Oct 6, 2016
140
59
71
#68
I noticed my GPU runs much hotter in BF5 than in Quake Champions, which stays much cooler without setting up any fan profile.

For some reason I thought I was getting the full game version with an RTX card, but they want me to upgrade... blah. Not really digging this game enough to upgrade. The graphics look beautiful, but the physics and movement seem exactly the same as 15 years ago.
 
Mar 28, 2005
184
106
116
#69
Did anyone notice that after the Vega 1 launch, Bacon1 stopped posting after hyping it up, and a new account, ub4ty, popped up just before that? And now ub4ty has stopped posting after hyping up Navi, and canceled their account after this Vega 2 launch / Navi no-show.
 

