Question RTX 4000/RX 7000 price speculation thread


moonbogg

Lifer
Jan 8, 2011
My prediction: the entire generation will be 2-3x MSRP on eBay and at retailers. The RTX 3000 series will be sold alongside the 4000 series, because the only people buying the RTX 4000 series will be the few willing to pay $1500 for what should be a $300 RTX 4060. Supply won't come close to meeting demand, and pricing will be through the Oort cloud. PC gaming is dead. Your thoughts?
 

Golgatha

Lifer
Jul 18, 2003
They are greatly increasing the L2 size to help with the memory bandwidth.

True, but if you don't program for it, you only get a hit in the extra cache about 60% of the time. I'll take 50% more bandwidth all the time over that. Also, it doesn't scale well to higher resolutions, because it improves performance by reusing data across multiple frames. So, as I understand it (and it seems to show in benchmarks), you get a nice boost when you're at higher FPS, i.e. lower resolutions, but it levels off as you get to higher resolutions and lower FPS (fewer frames able to share data from the L2 cache). My issue here is that I'm personally not buying a 3070+ or 6800+ class GPU to play anything under 1440p. Now, a 4080 Ti with a 384-bit memory bus, some sort of L2 cache, and 16GB+ VRAM is pretty exciting to me, and I think it would be a good 5+ year gaming card; it might even be worthy of 1080 Ti-type longevity comparisons later down the line.

Here is a perfect example of the Infinity Cache helping the AMD 6800 keep up with an Nvidia 3080 at lower resolutions, then getting absolutely trounced as the resolution scales up:
[Image: Horizon Zero Dawn benchmark chart]

Relatedly, the 3070, with the same 256-bit bus, is essentially neck and neck with the 6800 by the time you get to 4k. Neither of them has the bandwidth to really perform at that resolution. I think having 50% more memory bandwidth is also why the 1080 Ti aged better than the regular 1080; more resources are always more, after all. :)
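To put some rough numbers on the "levels off at higher resolutions" idea, here's a toy hit-rate-weighted bandwidth sketch. Every number in it (the cache bandwidth, the hit rates) is an assumption I made up for illustration, not a measured figure:

```python
# Toy model: effective bandwidth = hit_rate * cache_bw + (1 - hit_rate) * dram_bw.
# All numbers below are illustrative assumptions, not measured figures.

def effective_bandwidth(dram_gbps: float, cache_gbps: float, hit_rate: float) -> float:
    """Blend cache and DRAM bandwidth by how often requests hit the cache."""
    return hit_rate * cache_gbps + (1.0 - hit_rate) * dram_gbps

dram_bw = 512.0    # GB/s, e.g. a 256-bit GDDR6 bus at 16 Gbps
cache_bw = 2000.0  # GB/s, assumed on-die cache bandwidth

# Assumed hit rates that shrink as the per-frame working set outgrows the cache.
for res, hit in {"1080p": 0.75, "1440p": 0.60, "4k": 0.45}.items():
    print(f"{res}: ~{effective_bandwidth(dram_bw, cache_bw, hit):.0f} GB/s effective")

# The effective-bandwidth advantage over the raw 512 GB/s shrinks as the hit
# rate drops, which is the leveling-off behavior described above.
```

Obviously this ignores latency and everything else going on, but it captures the shape of the argument.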
 

Leeea

Diamond Member
Apr 3, 2020
True, but if you don't program for it, you only get a hit in the extra cache about 60% of the time. I'll take 50% more bandwidth all the time over that. Also, it doesn't scale well to higher resolutions, because it improves performance by reusing data across multiple frames. So, as I understand it (and it seems to show in benchmarks), you get a nice boost when you're at higher FPS, i.e. lower resolutions, but it levels off as you get to higher resolutions and lower FPS (fewer frames able to share data from the L2 cache). My issue here is that I'm personally not buying a 3070+ or 6800+ class GPU to play anything under 1440p. Now, a 4080 Ti with a 384-bit memory bus, some sort of L2 cache, and 16GB+ VRAM is pretty exciting to me, and I think it would be a good 5+ year gaming card; it might even be worthy of 1080 Ti-type longevity comparisons later down the line.

Here is a perfect example of the Infinity Cache helping the AMD 6800 keep up with an Nvidia 3080 at lower resolutions, then getting absolutely trounced as the resolution scales up.

Relatedly, the 3070, with the same 256-bit bus, is essentially neck and neck with the 6800 by the time you get to 4k. Neither of them has the bandwidth to really perform at that resolution. I think having 50% more memory bandwidth is also why the 1080 Ti aged better than the regular 1080; more resources are always more, after all. :)

I think your example is flawed.

The RX 6800 was designed to compete with the RTX 3070, and on your charts it does that very well, beating it in every test.


A better comparison would be the RX 6800 XT vs the RTX 3080, which is more interesting:

Same cache as the RX 6800, same memory bandwidth as the RX 6800, but it keeps up with the RTX 3080 much more closely. There is more going on than just cache and memory bandwidth.


-------------------------------------------------------------------------------------------------


I think you need to rethink your perspective on the whole subject. It is all trade-offs.

Memory bandwidth: each memory line uses a lot of space on the chip (the contact pads, the power circuitry, the noise control). Memory bandwidth is expensive and power hungry, and manufacturing defects there are more harmful.

Cache: also uses a lot of space on the chip, but it is power efficient, and defects can be planned for and do not ruin the entire cache. That said, if a chip does not have enough memory bandwidth to start with, all the cache in the world is not going to do much.


At a certain threshold, different for each rendering resolution, adding more cache becomes more effective than adding more memory bandwidth. Yes, that threshold is lower for 1080p than it is for 4k, but it still holds at 4k. It is a balancing act. Add in compute resources, clock speed, etc., and it is never going to be a simple question of memory bandwidth vs cache.


AMD this generation went for more cache, and it worked for them. Nvidia went for more bandwidth, and it worked for them.

To claim that cache does not work at 4k is incorrect, however, as AMD has cache-heavy designs with 256-bit memory buses that work very well at 4k. The RX 6800 XT is within 5% of the RTX 3080 at 4k, while having only 67% of its memory bandwidth* and 78% of its wattage**.

*512 GB/s vs 760.3 GB/s
**250 W vs 320 W
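Spelling out those footnotes, plus a rough feel for the "threshold" point above: only the 512/760.3 GB/s and 250/320 W figures come from the cards; the 2000 GB/s cache bandwidth below is an assumption I picked for the example, not a spec.

```python
# Footnote ratios from the post above.
print(f"bandwidth: {512 / 760.3:.0%}")  # ~67% of the RTX 3080's memory bandwidth
print(f"power:     {250 / 320:.0%}")    # ~78% of the RTX 3080's board power

# "Threshold" idea, illustratively: with a naive hit-rate-weighted model, a
# cache-assisted 256-bit bus matches a wider bus once the cache hit rate clears
# a break-even point. The 2000 GB/s cache bandwidth is an assumed number.
narrow_bw, wide_bw, cache_bw = 512.0, 760.3, 2000.0  # GB/s
break_even = (wide_bw - narrow_bw) / (cache_bw - narrow_bw)
print(f"break-even hit rate: {break_even:.0%}")      # hit rate needed to match the wide bus
```

The real break-even obviously depends on latency and actual hit rates at each resolution, which is why it ends up being the balancing act described above.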


As we move forward, with the RTX 4000 series already rumored to break power supplies, I suspect you will see Nvidia beginning to move toward cache-heavy designs as well. Power consumption is going to start to matter.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
Leeea is correct.
The 6800 XT was the 3080's competition.
The regular 6800 was a unicorn, even harder to find than the XT version, but it was meant for the 3070.
And the 6900 XT was the 3090's competitor.

There was no regular 6900 to my knowledge, because everyone knows that the moment you step into four-figure MSRPs for video cards, you'd better slap the word TURBO / XT / TI / UBER / GAMER / FTW on it or it won't sell.
 

swilli89

Golden Member
Mar 23, 2010
They are greatly increasing the L2 size to help with the memory bandwidth.

Right. They're effectively doing their own Infinity Cache. For people (like me) who play at 1080p/240Hz, it's ideal. For people looking to play at 4k, it can and likely will perform worse than an alternative with higher bandwidth. However, given that both AMD and NV went this route, it's rather a moot point now, I suppose.
 

DeathReborn

Platinum Member
Oct 11, 2005
I think your example is flawed.

The RX 6800 was designed to compete with the RTX 3070, and on your charts it does that very well, beating it in every test.


A better comparison would be the RX 6800 XT vs the RTX 3080, which is more interesting:

Same cache as the RX 6800, same memory bandwidth as the RX 6800, but it keeps up with the RTX 3080 much more closely. There is more going on than just cache and memory bandwidth.


-------------------------------------------------------------------------------------------------


I think you need to rethink your perspective on the whole subject. It is all trade-offs.

Memory bandwidth: each memory line uses a lot of space on the chip (the contact pads, the power circuitry, the noise control). Memory bandwidth is expensive and power hungry, and manufacturing defects there are more harmful.

Cache: also uses a lot of space on the chip, but it is power efficient, and defects can be planned for and do not ruin the entire cache. That said, if a chip does not have enough memory bandwidth to start with, all the cache in the world is not going to do much.


At a certain threshold, different for each rendering resolution, adding more cache becomes more effective than adding more memory bandwidth. Yes, that threshold is lower for 1080p than it is for 4k, but it still holds at 4k. It is a balancing act. Add in compute resources, clock speed, etc., and it is never going to be a simple question of memory bandwidth vs cache.


AMD this generation went for more cache, and it worked for them. Nvidia went for more bandwidth, and it worked for them.

To claim that cache does not work at 4k is incorrect, however, as AMD has cache-heavy designs with 256-bit memory buses that work very well at 4k. The RX 6800 XT is within 5% of the RTX 3080 at 4k, while having only 67% of its memory bandwidth* and 78% of its wattage**.

*512 GB/s vs 760.3 GB/s
**250 W vs 320 W


As we move forward, with the RTX 4000 series already rumored to break power supplies, I suspect you will see Nvidia beginning to move toward cache-heavy designs as well. Power consumption is going to start to matter.

To me, the 6700 XT was the competitor to the 3070, as it is closer in price (MSRP lie to MSRP lie at the time), but on performance the 6700 XT is a competitor to the 3060 Ti. AMD had the luxury of being able to price their cards against the already launched, and already unavailable, 30 series. AMD themselves say the 6800 XT has an effective 1,664 GB/s of bandwidth thanks to the cache, which to me puts the 6800 XT's memory bandwidth somewhere between 512 and 1,664 GB/s.
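For what it's worth, here's roughly where that 1,664 GB/s figure appears to come from. The 3.25x multiplier is my recollection of AMD's Infinity Cache marketing claim, so treat it as an assumption:

```python
# Raw bandwidth of a 256-bit GDDR6 bus at 16 Gbps.
raw_bw = 256 / 8 * 16        # bits / 8 * Gbps = 512.0 GB/s

# AMD's "effective bandwidth" number looks like a multiplier on the raw figure.
# The 3.25x is my recollection of their Infinity Cache marketing, not a spec here.
effective_bw = raw_bw * 3.25
print(raw_bw, effective_bw)  # 512.0 1664.0 -> the 512-1,664 GB/s range above
```

Real-world effective bandwidth would land somewhere between those two numbers depending on the hit rate, which is basically the point being made.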
 

GodisanAtheist

Diamond Member
Nov 16, 2006
To me, the 6700 XT was the competitor to the 3070, as it is closer in price (MSRP lie to MSRP lie at the time), but on performance the 6700 XT is a competitor to the 3060 Ti. AMD had the luxury of being able to price their cards against the already launched, and already unavailable, 30 series. AMD themselves say the 6800 XT has an effective 1,664 GB/s of bandwidth thanks to the cache, which to me puts the 6800 XT's memory bandwidth somewhere between 512 and 1,664 GB/s.

Yup. The 6800 sits in a sort of no man's land between 3070/6700 XT/2080 Ti performance and 3080/6800 XT performance.

On average it is ~10-15% faster than the 3070 class cards and an equivalent amount slower than the 3080 class cards.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
Let's see what can run 5120x1440 @ 200+ fps. :p

I'm sure a 3060 Ti could do that if you throw Valorant or Counter-Strike: Source at it.

Now, if you're talking Modern Warfare or any other AAA title like Cyberpunk, then you're gonna need an RTX 9000, cuz it's gonna need to have over 9000 performance.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
Too many things I need to buy this year...

4090...
Samsung Ark.

Ugh... I better start saving... oh wait, I already started after hearing rumors of the Ark costing somewhere near $2,499.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
Just get an Intel Arc instead. Will save you a lot of money :p

Mmmm... but a 55" curved screen which can do portrait has no possibility of disappointing me, especially if it's MLED or QD-OLED.


[Image: screenshot of the monitor in question]

Look at that monitor... you can't say it won't put a smile on your face...

Intel's Arc... err... if it's a smile, it's most likely an upside-down one.

I know, I'll compromise if Intel gives me a good HEDT and I have spare PCI-E lanes.
I'll get an Arc just to dedicate to the Ark, since it's a 4k monitor which I won't be using for gaming, but more for productivity stuff...
Then I can say I have the Arks to rule them all.
 

moonbogg

Lifer
Jan 8, 2011
Mmmm... but a 55" curved screen which can do portrait has no possibility of disappointing me, especially if it's MLED or QD-OLED.


[Image: screenshot of the monitor in question]

Look at that monitor... you can't say it won't put a smile on your face...

Intel's Arc... err... if it's a smile, it's most likely an upside-down one.

I know, I'll compromise if Intel gives me a good HEDT and I have spare PCI-E lanes.
I'll get an Arc just to dedicate to the Ark, since it's a 4k monitor which I won't be using for gaming, but more for productivity stuff...
Then I can say I have the Arks to rule them all.

Can't fool me. That's a TV.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
Can't fool me. That's a TV.

4k 240Hz + HDR1000 + 1000R curve + a stand which rotates from portrait to landscape.

I think due to the curve, the overall height is the same as my 32-inch.

I don't think you can get a TV with a 1000R curve.
 

moonbogg

Lifer
Jan 8, 2011
4k 240Hz + HDR1000 + 1000R curve + a stand which rotates from portrait to landscape.

I think due to the curve, the overall height is the same as my 32-inch.

I don't think you can get a TV with a 1000R curve.

They set the radius on the curve forming machine to the wrong value, made thousands of them by accident and decided to call it a monitor. That, my friend, is a TV. They take a flat TV and run it through the curve forming machine to create the curved shape.
 

Aapje

Golden Member
Mar 21, 2022
They set the radius on the curve forming machine to the wrong value, made thousands of them by accident and decided to call it a monitor. That, my friend, is a TV. They take a flat TV and run it through the curve forming machine to create the curved shape.
Exactly, they use a scaled up version of this machine:
 

Aapje

Golden Member
Mar 21, 2022
Leaked Time Spy Extreme bench... RTX 4090, TSE >19000
(the 3090 is around 10000)
With that sort of performance, they are going to milk the hell out of it at release.

Nvidia stressed that the MSRP reductions are just temporary, so I expect them to want to price the top end of the 4000 series at the previous generation's prices: 4090 Ti = $2000, 4090 = $1500, and 4080 Ti = $1200.

If AMD delivers on expectations, this may not be feasible, though. And the second-hand market may force them to be very competitive on the cheaper cards, so the gap between tiers would then become very big.
 

maddie

Diamond Member
Jul 18, 2010
Nvidia stressed that the MSRP reductions are just temporary, so I expect them to want to price the top end of the 4000 series at the previous generation's prices: 4090 Ti = $2000, 4090 = $1500, and 4080 Ti = $1200.

If AMD delivers on expectations, this may not be feasible, though. And the second-hand market may force them to be very competitive on the cheaper cards, so the gap between tiers would then become very big.
I'll just state it outright: if Nvidia thinks they're going to be able to charge higher, or probably even equal, prices for the next generation, they're smoking some serious stuff. All indicators are pointing to an economic decline. Importantly, this is happening on top of the potential avalanche of used mining cards, which will only add to the downward pricing pressure. What they want and what they'll get are going to be very different.