True, but if you don't program for it, you only get a hit in the extra cache about 60% of the time. I'll take 50% more bandwidth all the time over that. It also doesn't scale well to higher resolutions, because it improves performance by reusing data across multiple frames. So, as I understand it (and it seems to show in benchmarks), you get a nice boost at higher FPS, i.e. lower resolutions, but it levels off as you get to higher resolutions and lower FPS (fewer frames able to share data from the cache). My issue is that I'm personally not buying a 3070+ or 6800+ class GPU to play anything under 1440p. Now, a 4080 Ti with a 384-bit memory bus plus some sort of L2 cache and 16GB+ of VRAM is pretty exciting to me, and I think it would be a good 5+ year gaming card; it might even be worthy of 1080 Ti type longevity comparisons down the line.
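To put rough numbers on that trade-off, here's a back-of-envelope sketch in Python. The hit rates and cache bandwidth below are illustrative assumptions, not measured or published figures; the point is just that a cache's effective bandwidth falls as the working set outgrows it, while a wider bus delivers the same bandwidth at every resolution (and real behavior is worse than this model, since misses also add latency):

```python
def effective_bw(hit_rate: float, cache_bw: float, dram_bw: float) -> float:
    """Simple hit/miss blend: hits served from cache, misses go to DRAM."""
    return hit_rate * cache_bw + (1.0 - hit_rate) * dram_bw

DRAM_256BIT = 512.0   # GB/s: 256-bit GDDR6 @ 16 Gbps (6800-class card)
DRAM_384BIT = 768.0   # GB/s: hypothetical 384-bit bus at the same speed
CACHE_BW    = 1500.0  # GB/s: assumed on-die cache bandwidth (illustrative)

# Assumed hit rates: the working set grows with resolution, so fewer
# accesses hit the cache at 4K than at 1080p.
for res, hit in [("1080p", 0.70), ("1440p", 0.58), ("4K", 0.45)]:
    eff = effective_bw(hit, CACHE_BW, DRAM_256BIT)
    print(f"{res}: cache-assisted ~{eff:.0f} GB/s vs flat {DRAM_384BIT:.0f} GB/s")
```

The cache-assisted number keeps sliding toward the raw 256-bit figure as the hit rate drops, while the 384-bit bus just sits there delivering the same bandwidth regardless of resolution.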
Perfect example of Infinity Cache helping the AMD 6800 keep up with an NVIDIA 3080 at lower resolutions, then getting absolutely trounced as the resolution scales up.
[Attachment 63909: 6800 vs 3080 benchmark results across resolutions]
Relatedly, the 3070, with the same 256-bit bus, is essentially neck and neck with the 6800 by the time you get to 4K; neither of them has the bandwidth to really perform at that resolution. I think having roughly 50% more memory bandwidth is also why the 1080 Ti aged better than the regular 1080; more resources always help in the long run, after all.
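For what it's worth, the published memory specs back that ~50% figure up:

```python
# Sanity-checking the ~50% figure from published specs:
#   GTX 1080:    256-bit GDDR5X @ 10 Gbps -> 256 / 8 * 10 = 320 GB/s
#   GTX 1080 Ti: 352-bit GDDR5X @ 11 Gbps -> 352 / 8 * 11 = 484 GB/s
print(f"1080 Ti bandwidth advantage: {484 / 320 - 1:.0%}")  # ~51%
```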