Question: 'Ampere'/Next-gen gaming uarch speculation thread


Ottonomous

Senior member
May 15, 2014
How much is the Samsung 7nm EUV process expected to provide in terms of gains?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices, while offering 'beefed-up RTX' options at the top?)
Will the top card be capable of >4K60, at least 90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if this is imprudent/uncalled for, just interested in forum members' thoughts.
 

jim1976

Platinum Member
Aug 7, 2003
Personally, I don't see how Nvidia can charge anything more than $700 for a 3080Ti. It will perform slightly better than a 6800 XT but have much less VRAM and worse power consumption. I hope Nvidia charges $1000 for it just so I can have a good reason to point and laugh. I honestly think the only option is to simply replace the 3080 with the Ti model and move the 3080 down the stack in the form of a 3070Ti. Nvidia simply didn't offer enough for the money.

You do realize that we haven't seen how the 6800XT performs in the vast majority of titles, and especially the Nvidia-favored ones, right? Also, most probably, once RT performance is taken into account it will be at least 30% slower, which might not be important to you, but it is for me and a lot of users. When I pay a significant amount of money for a GPU I need the best visual fidelity available for the given cost.. The 6800XT might be great in rasterization, and most probably will be trading blows with the 3080 in that regard in all games, but in RT performance it will be like a previous-gen card. And that is something we will all experience in many games from now on, while whether 10GB of quick VRAM will be enough for 4k in future games remains to be seen. It's something that will rarely happen in the next 2 years, while having to play with inferior RT performance is a given.
Also, since you mentioned power consumption, which is a valid point, I do thoroughly believe that all the AIB 6800/6900 cards factory o/ced to the limit will push the envelope much higher than the official requirements..
A 3080 Ti model will only come out so that Nvidia can be at 3090 levels for the prestige, while adding 2 more GB of VRAM, which admittedly the 3080 should have had in the first place. 12GB of GDDR6X is more than enough for everything @4k and a much better combo than 16GB of plain GDDR6, which you will never need unless you want to go 8k.. But even then the differences between 6800XT-6900XT-3080-3080Ti will be so small that they are almost negligible for traditional rasterization.. In RT, though, both the 6800XT and 6900XT will perform like Turing models and come nowhere close to Ampere performance.
So to wrap things up: saying that the Nvidia card, which will be a far more balanced package than the 6800XT, is not worth THE money rather than not worth YOUR money is a far-fetched idea you probably want others to adopt.. The 3080 is here to stay, and if you feel safer having 16GB of GDDR6 (which you most probably will never use) rather than the whole package, that's up to you. But it would be nice if you didn't expect others to adopt this idea of Nvidia simply not offering much for the money, especially with the 3080.. The 3090 is a joke for gamers, that one is..
 

samboy

Senior member
Aug 17, 2002
You do realize that we haven't seen how the 6800XT performs in the vast majority of titles, and especially the Nvidia-favored ones, right? Also, most probably, once RT performance is taken into account it will be at least 30% slower ......

Good point that we still need to wait for independent reviews....... I suspect the reality is that AMD optimized their drivers for these "launch benchmark titles" and are likely continuing this process for other titles. It's all down to the software at this point, and it will be interesting to see how close Nvidia and AMD are in 6-9 months' time.

Assuming the 6800XT is roughly equivalent to the RTX 3080, then I will choose 16GB of RAM in exchange for a 30% drop in ray tracing performance. Nvidia has a software advantage here today, as they have helped game developers optimize their RT titles for the last couple of years, using some Nvidia-specific APIs if I understand correctly. Nvidia will definitely have an advantage in the short term. However, AMD has the PS5 and new Xbox, and I expect that developers will more likely optimize for the AMD ecosystem moving forward........ so the 6800XT should be good enough for most games in RT performance (and it is more powerful than what the consoles ship with). My assumption is that if you are a games publisher then your most important platforms are console followed by PC; your main optimization will be for console.
 

PhoBoChai

Member
Oct 10, 2017
Interesting analysis by Hardware Unboxed on the Ampere architecture. He's emphasizing again that the "doubling" of Ampere CUDA cores doesn't start to shine until you hit 4k resolutions. He states that the FP32 workload at 4k is higher than at lower resolutions and that the vertex and triangle load is identical at 1440p and 4k (which is why the performance increase vs Turing at 1440p isn't as impressive as at 4k). He also shows that a CPU bottleneck does account for the less-than-stellar resolution scaling in some games and at some resolutions, but it is only a partial answer (the doubled-up FP32 throughput being the other part).


If AMD's numbers are right for the 1440p data, Big Navi gains more over 3080 and 3090 at these resolutions.

4K is still niche, so come review day, 1080p and especially 1440p are going to matter to more gamers.

NV should have gone to 8 x GPC for the 3080/90 with that many FP32 ALUs IMO, for good scaling across resolutions..
 

jim1976

Platinum Member
Aug 7, 2003
Good point that we still need to wait for independent reviews....... I suspect the reality is that AMD optimized their drivers for these "launch benchmark titles" and are likely continuing this process for other titles. It's all down to the software at this point, and it will be interesting to see how close Nvidia and AMD are in 6-9 months' time.

Assuming the 6800XT is roughly equivalent to the RTX 3080, then I will choose 16GB of RAM in exchange for a 30% drop in ray tracing performance. Nvidia has a software advantage here today, as they have helped game developers optimize their RT titles for the last couple of years, using some Nvidia-specific APIs if I understand correctly. Nvidia will definitely have an advantage in the short term. However, AMD has the PS5 and new Xbox, and I expect that developers will more likely optimize for the AMD ecosystem moving forward........ so the 6800XT should be good enough for most games in RT performance (and it is more powerful than what the consoles ship with). My assumption is that if you are a games publisher then your most important platforms are console followed by PC; your main optimization will be for console.

First of all, from the looks of it AMD has finally caught up in rasterization performance and actually has some new features that look really promising.
Now, regarding what I highlighted, the former is actually not correct. Nvidia has, since the Turing generation, invested in h/w specifically designed for ray-tracing acceleration with RT cores, which, combined with the rest of the new architecture, gives significantly better results in these situations. Of course Nvidia invests in game optimizations in cooperation with game devs, but this is mostly to incorporate RTX tech in as many games as possible. CP2077 is one of those and a huge marketing asset for the green team.
The latter needs rephrasing. Since both consoles have RDNA tech built in and many features will be based on MS API features, gradually this will become an asset for many games that are ported to PC. Don't expect miracles in RT performance though; even the previous-gen 2080Ti will most probably be faster in RT than the current AMD RDNA 2 models, but surely less optimization will be needed when those games arrive on PC.. As we speak both companies have great products, and games will gradually tend to incorporate more features like these.

If AMD's numbers are right for the 1440p data, Big Navi gains more over 3080 and 3090 at these resolutions.

4K is still niche, so come review day, 1080p and especially 1440p are going to matter to more gamers.

NV should have gone to 8 x GPC for the 3080/90 with that many FP32 ALUs IMO, for good scaling across resolutions..

Nvidia has gone full FP32 capability for ALL cores, and this is why at lower resolutions there's a stall in performance, since the FP32 cores are busy doing relatively more INT ops.. It's not as effective, especially for 1080p and 1440p, as those cores can't compute FP32 and INT32 at the same time the way, let's say, Turing's can. They also wanted to make the GPU more effective in compute, and this is why it shows its power when things get demanding, like at 4k..
And I suspect this is why it is slightly better in Vulkan as well, since more FP32 ops are needed in those games.
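For anyone wanting to see why the doubled FP32 count doesn't translate into doubled frame rates, here is a back-of-envelope sketch of the issue. The per-partition lane counts and the roughly 36-INT-per-100-FP instruction mix are the commonly quoted figures, not measurements from this thread, and real SMs have other limits (issue ports, register bandwidth), so treat it purely as an illustration:

```python
# Toy model of FP32/INT32 issue on an SM partition.
# Assumed: per clock, Turing can issue 64 FP32 and 64 INT32 ops concurrently,
# while Ampere has 64 FP32-only lanes plus 64 lanes that handle FP32 OR INT32.

def cycles_turing(n_fp, n_int):
    # Separate FP and INT pipes overlap, so the busier stream sets the pace.
    return max(n_fp / 64, n_int / 64)

def cycles_ampere(n_fp, n_int):
    # INT work must use the shared lanes; the FP-only lanes keep working
    # meanwhile, then both lane groups finish the remaining FP together.
    int_cycles = n_int / 64
    fp_left = max(0.0, n_fp - int_cycles * 64)
    return int_cycles + fp_left / 128

if __name__ == "__main__":
    # NVIDIA's often-quoted gaming mix: roughly 36 INT32 ops per 100 FP32.
    fp_ops, int_ops = 100.0, 36.0
    t = cycles_turing(fp_ops, int_ops)
    a = cycles_ampere(fp_ops, int_ops)
    print(f"Turing cycles: {t:.2f}, Ampere cycles: {a:.2f}")
    print(f"Speedup on this mix: {t / a:.2f}x, well short of the 2x ALU count")
```

On that mix the toy model lands around 1.5x per partition rather than 2x, which is roughly the kind of gap being discussed at lower resolutions.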
 

Qwertilot

Golden Member
Nov 28, 2013
The 3080/90 are also fast enough that they're genuinely very clearly 4k cards. The 3070 does anything else quite well enough (surely?).

So balancing them to work better at 4k is broadly pragmatic.
 

samboy

Senior member
Aug 17, 2002
First of all, from the looks of it AMD has finally caught up in rasterization performance and actually has some new features that look really promising.
Now, regarding what I highlighted, the former is actually not correct. Nvidia has, since the Turing generation, invested in h/w specifically designed for ray-tracing acceleration with RT cores, which, combined with the rest of the new architecture, gives significantly better results in these situations. Of course Nvidia invests in game optimizations in cooperation with game devs, but this is mostly to incorporate RTX tech in as many games as possible. CP2077 is one of those and a huge marketing asset for the green team.
The latter needs rephrasing. Since both consoles have RDNA tech built in and many features will be based on MS API features, gradually this will become an asset for many games that are ported to PC. Don't expect miracles in RT performance though; even the previous-gen 2080Ti will most probably be faster in RT than the current AMD RDNA 2 models, but surely less optimization will be needed when those games arrive on PC.. As we speak both companies have great products, and games will gradually tend to incorporate more features like these.



Nvidia has gone full FP32 capability for ALL cores, and this is why at lower resolutions there's a stall in performance, since the FP32 cores are busy doing relatively more INT ops.. It's not as effective, especially for 1080p and 1440p, as those cores can't compute FP32 and INT32 at the same time the way, let's say, Turing's can. They also wanted to make the GPU more effective in compute, and this is why it shows its power when things get demanding, like at 4k..
And I suspect this is why it is slightly better in Vulkan as well, since more FP32 ops are needed in those games.

I believe that software optimization has a lot to do with how ray tracing performs in games and is an important part of the picture. I fully agree that Nvidia likely has superior hardware in this area; it's their second generation and they have likely addressed soft spots from the first. However, the reality is that none of the current hardware is fast enough for full 4k real-time ray tracing, and how you end up using it comes down to clever software optimization, like most things in gaming. For example, if you want to show the reflections in a puddle, you are likely going to ray trace only the pixels where the puddle projects onto the screen, not the entire frame, etc. The end result is a combination of hardware and software optimizations (and there will be common techniques/libraries designed to help here). Of course this applies to both AMD and Nvidia, but Nvidia also has optimized games on the market today and this will give them an additional advantage above the hardware advantage alone for now.
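As a rough sketch of that "only trace the puddle pixels" idea: the mask, the 3% coverage, and the one-ray-per-pixel budget below are all made-up illustration values, not taken from any engine; only the 4K frame size is real.

```python
import numpy as np

# Toy version of "only trace the pixels the puddle covers". A screen-space
# mask marks the reflective pixels, and reflection rays are budgeted only
# for those pixels.

def rays_needed(mask, rays_per_pixel=1):
    return int(mask.sum()) * rays_per_pixel

height, width = 2160, 3840
full_frame = np.ones((height, width), dtype=bool)

rng = np.random.default_rng(0)
puddle_mask = rng.random((height, width)) < 0.03   # pretend the puddle covers ~3%

print("rays, whole frame :", rays_needed(full_frame))
print("rays, puddle only :", rays_needed(puddle_mask))
print("fraction of work  :", rays_needed(puddle_mask) / rays_needed(full_frame))
```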

The question on the table is how much developers will be willing to invest in Nvidia specific optimizations down the road? AMD RT performance on the console will likely be the baseline and AMD software optimization will be more important; less work required on NVidia since the hardware advantage will close the gap anyway? Of course both AMD and NVidia will help publishers with certain titles to try and get a performance advantage........ this all comes down to software.

The other possible software aspect is that AMD has mapped the 16GB of card memory over the PCIe 4.0 bus into the main processor's address space. I read somewhere that there are some extras if you have both a 5000 series processor and an AMD card. I'm wondering if they have extended the main processor cache synchronization (which you need to coordinate the multiple cores) to the 128MB cache on the graphics card? If so, then this opens up the possibility of the main processor working directly with the GPU in ways that could not have been done in the past. It will all be software that defines what can be opened up/made possible here.
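For context on the bandwidth side of that idea, here is the simple arithmetic. The PCIe 4.0 figures are the standard link-rate numbers, and the 512 GB/s is the 6800 XT GDDR6 figure quoted elsewhere in the thread; it suggests any win from mapping VRAM into the CPU's address space would be about convenient, fine-grained CPU access rather than raw bandwidth.

```python
# PCIe 4.0: 16 GT/s per lane with 128b/130b encoding, so a x16 link moves
# roughly 31.5 GB/s in each direction, versus the card's local GDDR6.

lanes = 16
transfers_per_s = 16e9                 # 16 GT/s per lane
bytes_per_transfer = (128 / 130) / 8   # encoding overhead, bits -> bytes

pcie_gbs = lanes * transfers_per_s * bytes_per_transfer / 1e9   # ~31.5 GB/s
vram_gbs = 512                                                  # 6800 XT GDDR6

print(f"PCIe 4.0 x16 : ~{pcie_gbs:.1f} GB/s per direction")
print(f"Local GDDR6  : {vram_gbs} GB/s")
print(f"Ratio        : ~{vram_gbs / pcie_gbs:.0f}x in favour of local VRAM")
```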
 

jim1976

Platinum Member
Aug 7, 2003
I believe that software optimization has a lot to do with how ray tracing performs in games and is an important part of the picture. I fully agree that Nvidia likely has superior hardware in this area; it's their second generation and they have likely addressed soft spots from the first. However, the reality is that none of the current hardware is fast enough for full 4k real-time ray tracing, and how you end up using it comes down to clever software optimization, like most things in gaming. For example, if you want to show the reflections in a puddle, you are likely going to ray trace only the pixels where the puddle projects onto the screen, not the entire frame, etc. The end result is a combination of hardware and software optimizations (and there will be common techniques/libraries designed to help here). Of course this applies to both AMD and Nvidia, but Nvidia also has optimized games on the market today and this will give them an additional advantage above the hardware advantage alone for now.

The question on the table is how much developers will be willing to invest in Nvidia specific optimizations down the road? AMD RT performance on the console will likely be the baseline and AMD software optimization will be more important; less work required on NVidia since the hardware advantage will close the gap anyway? Of course both AMD and NVidia will help publishers with certain titles to try and get a performance advantage........ this all comes down to software.

The other possible software aspect is that AMD has mapped the 16GB of card memory over the PCIe 4.0 bus into the main processor's address space. I read somewhere that there are some extras if you have both a 5000 series processor and an AMD card. I'm wondering if they have extended the main processor cache synchronization (which you need to coordinate the multiple cores) to the 128MB cache on the graphics card? If so, then this opens up the possibility of the main processor working directly with the GPU in ways that could not have been done in the past. It will all be software that defines what can be opened up/made possible here.

That is true, no h/w today is able to achieve great results in anything really meaningful in RT. It's better than nothing though, I guess; we have to move forward gradually. Photorealism actually never ends and is exponentially more demanding in resources.. For example, as you bounce rays off more surfaces from the available light sources, or when you want to use RT for GI or shadows, things get way more demanding. S/W is an integral part of the optimization, and what we see today on our screens is barely scratching the surface of this magnificent world of photorealism.
The way I see it, all of us gamers are winning, since consoles are leveling up in h/w and it's really important that they have RT support integrated, which in turn means game devs will devote some time in the near future to take advantage of that. Distinct optimizations from either AMD or Nvidia are not a new phenomenon; they have existed on many levels, for many years, in many things.. (shader model support, tessellation support, PhysX, etc.).. This might give the upper hand to one like Nvidia in many RTX titles for the time being, but eventually AMD will catch up in this area as well.. What we don't want is to hinder evolution, and this sometimes happens when there's no competition available.. This is why it is such important news that AMD has caught up in the high end after many years.. It will push Nvidia and AMD alike towards better products, rather than stalling and giving us products with poor value..
 

lobz

Platinum Member
Feb 10, 2017
I wrote "Additionally 3080 uses GDDR6x which are faster and give more memory bandwidth for the given amount of VRAM.. ". I think you misunderstood the meaning of this, because what it means is that, for a given amount of VRAM, higher memory speed increases the memory bandwidth significantly compared to plain GDDR6. To give a clear example, the 2080Ti uses GDDR6 and has a higher amount of VRAM than the 3080, but has ~616GB/s of memory bandwidth compared to the 3080's ~760GB/s. What this means in numbers is that, despite the 1GB of extra VRAM, the 3080 has ~23.3% more memory bandwidth than the 2080Ti..
The amount of VRAM is not a deciding factor solely by itself, and nowhere in my statement did I say whether it will be enough for 4k in the future, but for now it's plenty.. And of course nowhere in this sentence did I compare the 6800XT's VRAM/memory bandwidth to the 3080's.. I stated that the total difference is not a simple 16 vs 10GB of VRAM as some may think.. Just some food for thought ;)
OK I'll put it a bit simpler: should the independent reviews say that neither the 3080 nor the 6800XT is bandwidth limited, it can become a purchase swinging factor. Not sure it will, but it definitely can.
 

lobz

Platinum Member
Feb 10, 2017
The 3080/90 are also fast enough that they're genuinely very clearly 4k cards. The 3070 does anything else quite well enough (surely?).

So balancing them to work better at 4k is broadly pragmatic.
Well, if their target audience were people who actually own a 4k display, that'd be a very-very sad market prognosis for the 3080. So with all due respect, no.
 

jim1976

Platinum Member
Aug 7, 2003
OK I'll put it a bit simpler: should the independent reviews say that neither the 3080 nor the 6800XT is bandwidth limited, it can become a purchase swinging factor. Not sure it will, but it definitely can.

Independent reviews will tell you what? Whether it will suffice or not? Nobody knows, mate, with 100% certainty whether more VRAM will be needed.. Most probably it will, but not with certainty. But you asked which one will be more bandwidth limited, which, taken literally, the 6800XT is, far more so than the 3080.. The 6800XT has ~48% less memory bandwidth than the 3080, that is 512 vs 760GB/s. Don't confuse memory bandwidth with amount of VRAM; they are both equally important but serve complementary, different purposes.
I didn't read this anywhere, it's pure logic, and I don't expect the avg Joe to care about that. Thing is, all I see on the net is that this X amount of VRAM is much higher than that Y amount of VRAM.. Well, all I'm saying is that it's not that simple; measuring the amount of VRAM without taking into account the most important thing, the memory bandwidth, is like measuring the quality of photographic equipment solely on megapixels..
And don't expect things to change so easily; 10GB of GDDR6X will most probably be just fine for at least the next couple of years. 12GB of the same quick memory would have been the sweet spot, 16 is plain waste.. AMD didn't put in 16GB of GDDR6X, it put in 16GB of cheaper and slower GDDR6. If it had used GDDR6X in that amount, the card's price would have been at least that of the 6900XT, if not more.
Also keep in mind that in the vast majority of cases, even high-end cards like the 3080 and 6800XT will run into fillrate limitations before they ever reach bandwidth limitations.. (this means that 9/10 times, games will run out of performance in a given modern title @4k before they hit a bandwidth limitation..). A recent example is Watch Dogs Legion, which averages 30-32fps @4k without DLSS and with RTX Ultra. It doesn't have any bandwidth limitations though, so you get the idea by now, I think..
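For reference, the raw numbers being argued about fall straight out of bus width times per-pin data rate, assuming the commonly quoted 19 Gbps GDDR6X and 16 Gbps GDDR6 launch speeds; the same gap can be phrased two ways, which comes up again a few posts below.

```python
# Memory bandwidth = (bus width / 8) * per-pin data rate.
# Assumed pin speeds: 19 Gbps GDDR6X (3080, 320-bit), 16 Gbps GDDR6 (6800 XT, 256-bit).

def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

rtx_3080 = bandwidth_gbs(320, 19)    # 760 GB/s
rx_6800xt = bandwidth_gbs(256, 16)   # 512 GB/s

print(f"RTX 3080   : {rtx_3080:.0f} GB/s")
print(f"RX 6800 XT : {rx_6800xt:.0f} GB/s")
# The same gap, phrased both ways:
print(f"3080 over 6800 XT : +{(rtx_3080 / rx_6800xt - 1) * 100:.0f}%")
print(f"6800 XT under 3080: -{(1 - rx_6800xt / rtx_3080) * 100:.0f}%")
```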

Well, if their target audience were people who actually own a 4k display, that'd be a very-very sad market prognosis for the 3080. So with all due respect, no.

No it will not; this is your idea of a poor market prognosis. As I said, 12 would have been perfect but 10 will suffice. I own one (a 3080), I've been playing @4k for more than 5 years now (980Ti SLI, 1080Ti, 2080Ti and now 3080), and let me say I know the perks of this resolution pretty well by now across a vast array of games, well enough to understand that 10GB of GDDR6X will be enough. If anything, I think the 16GB is there purely to impress the avg Joe who thinks that 16>>>>10, so the 6800XT must be quicker in memory terms, when, as I previously mentioned, the 3080 actually has much higher memory bandwidth at 760GB/s while the 6800XT has only 512GB/s..
 

maddie

Diamond Member
Jul 18, 2010
Independent reviews will tell you what? Whether it will suffice or not? Nobody knows, mate.. I didn't read this anywhere, and I don't expect the avg Joe to care about that. Thing is, all I see on the net is that this X amount of VRAM is much higher than that Y amount of VRAM.. Well, all I'm saying is that it's not that simple; measuring the amount of VRAM without taking into account the most important thing, the memory bandwidth, is like measuring the quality of photographic equipment solely on megapixels..
And don't expect things to change so easily; 10GB of GDDR6X will most probably be just fine for at least the next couple of years. 12GB of the same quick memory would have been the sweet spot, 16 is plain waste.. AMD didn't put in 16GB of GDDR6X, it put in 16GB of cheaper and slower GDDR6. If it had used GDDR6X in that amount, the card's price would have been at least that of the 6900XT, if not more.
Also keep in mind that in the vast majority of cases, even high-end cards like the 3080 and 6800XT will run into fillrate limitations before they ever reach bandwidth limitations.. (this means that 9/10 times, games will run out of performance in a given modern title @4k before they hit a bandwidth limitation..). A recent example is Watch Dogs Legion, which averages 30-32fps @4k without DLSS and with RTX Ultra. It doesn't have any bandwidth limitations though, so you get the idea by now, I think..



No it will not; this is your idea of a poor market prognosis. As I said, 12 would have been perfect but 10 will suffice. I own one (a 3080), I've been playing @4k for more than 5 years now (980Ti SLI, 1080Ti, 2080Ti and now 3080), and let me say I know the perks of this resolution pretty well by now across a vast array of games, well enough to understand that 10GB of GDDR6X will be enough. If anything, I think the 16GB is there purely to impress the avg Joe who thinks that 16>>>>10, so the 6800XT must be quicker in memory terms, when the 3080 actually has much higher memory bandwidth at 760GB/s while the 6800XT has only 512GB/s.. lol
With the individual memory die capacities, it's either 8 GB or 16 GB. It's not like they could have used 10, 11, 12, 13, etc. with a 256-bit bus, so there goes the "impress the avg Joe" argument. Another thing is that I have a suspicion that what we know about fundamental GPU characteristics is about to be shaken.
 

beginner99

Diamond Member
Jun 2, 2009
With the individual memory die capacities, it's either 8 GB or 16 GB

Exactly. And for NV it's either 10 or 20GB, while 10GB is on the low end and 20GB is overkill. The problem here is these fixed possible sizes. Another reason Infinity Cache makes sense: with a >256-bit bus you either have too little or too much VRAM, and they would have to go to a much wider bus to reach a sensible amount. I think that's a grave fault of GA102. An even bigger bus would have allowed for 12GB (more than the 1080Ti, and hence the issue wouldn't be one).
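A quick sketch of why the capacity options are so coarse, assuming one GDDR6/GDDR6X package per 32-bit channel and the 1 GB / 2 GB densities available at the time:

```python
# Each GDDR6/GDDR6X package hangs off a 32-bit slice of the bus, so capacity
# only moves in chip-count steps. Clamshell mode (two chips per channel) can
# double these again, which is how a 384-bit card reaches 24 GB.

def capacity_options_gb(bus_bits, densities_gb=(1, 2)):
    chips = bus_bits // 32
    return chips, [chips * d for d in densities_gb]

for name, bus in [("RX 6800 XT (256-bit)", 256), ("RTX 3080 (320-bit)", 320)]:
    chips, options = capacity_options_gb(bus)
    print(f"{name}: {chips} chips -> {options} GB")
```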
 

jim1976

Platinum Member
Aug 7, 2003
With the individual memory die capacities, it's either 8 GB or 16 GB. It's not like they could have used 10, 11, 12, 13, etc. with a 256-bit bus, so there goes the "impress the avg Joe" argument. Another thing is that I have a suspicion that what we know about fundamental GPU characteristics is about to be shaken.

Who said they could have done otherwise with this type of memory and bus width? They could have used a wider bus, though, but I suspect they didn't want a more expensive one, in order not to drive the cost higher.. I said they used GDDR6, and the avg Joe gets impressed simply by seeing 16>>>>10, so the 6800XT must be a killer, when obviously he hasn't taken the 512GB/s vs 760GB/s memory bandwidth element into account.. I've added some things to the post that weren't there before, my bad.
 

Hitman928

Diamond Member
Apr 15, 2012
Who said they could have done otherwise with this type of memory and bus width? They could have used a wider bus, though, but I suspect they didn't want a more expensive one, in order not to drive the cost higher.. I said they used GDDR6, and the avg Joe gets impressed simply by seeing 16>>>>10, so the 6800XT must be a killer, when obviously he hasn't taken the 512GB/s vs 760GB/s memory bandwidth element into account.. I've added some things to the post that weren't there before, my bad.

With Infinity Cache the 6800XT has 1.1 TB/s of effective memory bandwidth, based upon AMD's measurements. I honestly don't understand what points you have been trying to make, except that you think 10 GB of VRAM is enough.
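As a purely hypothetical sketch of how an "effective bandwidth" figure like that can be constructed: the hit rates and on-die cache bandwidth below are placeholder guesses, not AMD's published numbers; only the 512 GB/s GDDR6 figure comes from the discussion above.

```python
# Effective bandwidth modeled as a hit-rate-weighted blend of on-die cache
# bandwidth and GDDR6 bandwidth. All cache numbers here are placeholders.

def effective_bandwidth(hit_rate, cache_gbs, dram_gbs):
    return hit_rate * cache_gbs + (1 - hit_rate) * dram_gbs

dram_gbs = 512      # 256-bit GDDR6, as discussed above
cache_gbs = 2000    # assumed Infinity Cache bandwidth (placeholder guess)

for hit_rate in (0.3, 0.4, 0.5):
    eff = effective_bandwidth(hit_rate, cache_gbs, dram_gbs)
    print(f"hit rate {hit_rate:.0%}: ~{eff:.0f} GB/s effective")
```

With those placeholder numbers, a hit rate around 40% lands near the quoted 1.1 TB/s, which is why the hit rate across real workloads is the interesting open question raised further down.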
 

linkgoron

Platinum Member
Mar 9, 2005
With Infinity Cache the 6800XT has 1.1 TB/s of effective memory bandwidth, based upon AMD's measurements. I honestly don't understand what points you have been trying to make, except that you think 10 GB of VRAM is enough.

He's trying to justify his 3080 purchase vs what might end up being a cheaper, more efficient, similarly performant card which also has more memory and thus might be more future-proof, by cherry-picking one specific element in which the 3080 beats the 6800XT by design.

In reality, AMD have made their choice of pros/cons, with something that looks like a somewhat novel design choice. I think we need to see more performance numbers before declaring whether they made a mistake or whether it's a good solution. Also, the 6800XT has 33% less bandwidth, not 48%.
 

jim1976

Platinum Member
Aug 7, 2003
With Infinity Cache the 6800XT has 1.1 TB/s of effective memory bandwidth, based upon AMD's measurements. I honestly don't understand what points you have been trying to make, except that you think 10 GB of VRAM is enough.

So I'll tell you what.. Let's wait and see what Infinity Cache will eventually do, let's wait and see how AMD performs across a vast array of titles, and let's just sweep DLSS and RT performance under the carpet.. Only VRAM matters, judging from some people's quotes.. I honestly can't be any clearer in my posts, so I'm sorry if you can't understand any further.




He's trying to justify his 3080 purchase vs what might end up being a cheaper, more efficient, similarly performant card which also has more memory and thus might be more future-proof, by cherry-picking one specific element in which the 3080 beats the 6800XT by design.

In reality, AMD have made their choice of pros/cons, with something that looks like a somewhat novel design choice. I think we need to see more performance numbers before declaring whether they made a mistake or whether it's a good solution. Also, the 6800XT has 33% less bandwidth, not 48%.

Clever remark, my bad, which quite frankly tells me a lot, but.. Yeah, the 6800XT has 33% less memory bandwidth than the 3080, and the 3080 has 48% higher memory bandwidth than the 6800XT.. We usually use the latter to express percentages, but anyway, gotcha.. I've got to "justify" my purchase, right? ;)

Sure I am biased guys and trying to "justify" my purchase.. Yeah that must be it..

If this is what you understood from what I wrote, then by all means believe what you want.. Lol, it's the internet anyway.. I didn't see you comment on the "far-fetched comments" of the AMD fans in here, but yeah, I'm the one who's biased.. /sigh
I can tell, from what some people quote or don't quote, who is biased and who is trying to justify what.. ;) Not once did I try to downplay AMD's offerings; I obviously just tried to counter some far-fetched claims..
 

jim1976

Platinum Member
Aug 7, 2003
Can we all please stop coloring text? Why intentionally make everything harder to read?

I mean, it's available there in the options for a reason, and I don't see how it makes things more difficult to read. I just highlighted it (not using caps or enlarging the font) since it seems some members have trouble reading my whole posts and nitpick stuff.. Anyway, there's no point discussing it any further; the points are made, and I simply wanted to give a rational argument against some absurd comments I've read about products being "finished" before they even launch, and stuff like that..
 

JujuFish

Lifer
Feb 3, 2005
I mean, it's available there in the options for a reason, and I don't see how it makes things more difficult to read. I just highlighted it (not using caps or enlarging the font) since it seems some members have trouble reading my whole posts and nitpick stuff.. Anyway, there's no point discussing it any further; the points are made, and I simply wanted to give a rational argument against some absurd comments I've read about products being "finished" before they even launch, and stuff like that..
Being an option doesn't mean you should use it. It makes it harder to read because there's far less contrast if you're using the dark theme, and there are far less garish ways to highlight text without using colors, like bold/underline/italics.
 

maddie

Diamond Member
Jul 18, 2010
4,747
4,691
136
So I'll tell you what.. Let's wait and see what Infinity Cache will eventually do, let's wait and see how AMD performs across a vast array of titles, and let's just sweep DLSS and RT performance under the carpet.. Only VRAM matters, judging from some people's quotes.. I honestly can't be any clearer in my posts, so I'm sorry if you can't understand any further.





Clever remark, my bad, which quite frankly tells me a lot, but.. Yeah, the 6800XT has 33% less memory bandwidth than the 3080, and the 3080 has 48% higher memory bandwidth than the 6800XT.. We usually use the latter to express percentages, but anyway, gotcha.. I've got to "justify" my purchase, right? ;)

Sure I am biased guys and trying to "justify" my purchase.. Yeah that must be it..

If this is what you understood from what I wrote, then by all means believe what you want.. Lol, it's the internet anyway.. I didn't see you comment on the "far-fetched comments" of the AMD fans in here, but yeah, I'm the one who's biased.. /sigh
I can tell, from what some people quote or don't quote, who is biased and who is trying to justify what.. ;) Not once did I try to downplay AMD's offerings; I obviously just tried to counter some far-fetched claims..
Concerning your red remark, you're still stuck in an old way of thinking. Yes, I agree that we should wait on independent testing, but at the very least, we can already see that this smaller bus with an appropriate cache can perform at least as well as a traditional wider bus.

Would you have said that before?

The term memory bandwidth has become more complex with the IC layout. Just like flops is not an accurate predictor by itself.
 

AnandThenMan

Diamond Member
Nov 11, 2004
Can we all please stop coloring text? Why intentionally make everything harder to read?
Because we can.

My take on Infinity Cache is: what's the downside, and will it matter in practice? I don't want to call shens on AMD's numbers since they've been reliable recently, but I'm still a bit skeptical that a massive cache can be such an effective bandwidth enhancer.
 

jim1976

Platinum Member
Aug 7, 2003
Concerning your red remark, you're still stuck in an old way of thinking. Yes, I agree that we should wait on independent testing, but at the very least, we can already see that this smaller bus with an appropriate cache can perform at least as well as a traditional wider bus.

Would you have said that before?

The term memory bandwidth has become more complex with the IC layout. Just like flops is not an accurate predictor by itself.

An old way of thinking? Is taking every single claim AMD has made thus far, without concrete evidence from real-life scenarios, not an old way of thinking too? I've read what it can do, thank you very much, I don't need clarification; pardon me if I don't believe everything I see without first validating it..
Meanwhile, you are thinking in future terms while apparently dismissing or "ignoring" every single asset Nvidia has to offer this gen that is actually proven to work and has been tested for a prolonged period of time, like DLSS and better RT performance? Dude, srsly?
Would I have said what before? This is exactly my point: that some people see much higher available VRAM and think that this is the holy grail by itself.. Did you read the comments before mine, and why I started commenting in the first place, or did those not bother you?
Never mind, I think those who understand the points know perfectly well what we are both talking about here..
I'm not trying to persuade anyone to buy anything, I simply don't care, but seeing heavily biased opinions and wishful thinking is not my cup of tea.. That is all..
 