Glo. (Diamond Member, joined Apr 25, 2015)
> RTX 3060 has 170W TBP, Navi 14 has 130W, so what should N23's TBP be, in your opinion?

130W for N23, full die.
> 130W TBP? The reference card + cooler is pretty small, so it's not out of the question.

Actually, target it a little higher than just Navi 10.
I have to wonder about performance.
The RX 5500 XT Strix 8GB has 22 CUs, 32 ROPs, 128-bit GDDR6 and an average clock of 1834 MHz, which works out to 5.2 TFLOPs.
Navi 23 should have 32 CUs, 32-64 ROPs(?), 64MB IC(?) and 128-bit GDDR6; at an average of 2200 MHz that would mean ~9 TFLOPs, or about 74% higher.
I think it should perform somewhere between the RX 5600 XT and RX 5700.
With a reasonable price ($199-239) it could be a very good card for 1080p.
On the other hand, if this chip really is ~240mm², even with higher density, while Navi 14 is 158mm², then I must say I am not impressed, and once more I have to question whether Infinity Cache is really worth the die area it occupies on a GPU.
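The TFLOPs figures above can be sanity-checked with a quick sketch, assuming RDNA's 64 FP32 lanes per CU and 2 FLOPs (one FMA) per lane per clock; on the unrounded numbers the gap comes out closer to 74% than the rounded 9/5.2 would suggest:

```python
def tflops(cus, clock_mhz, lanes_per_cu=64):
    """Peak FP32 throughput: CUs x lanes x 2 FLOPs (FMA) per clock."""
    return cus * lanes_per_cu * 2 * clock_mhz * 1e6 / 1e12

rx5500xt = tflops(22, 1834)  # ~5.2 TFLOPs
navi23 = tflops(32, 2200)    # ~9.0 TFLOPs (speculative clock/CU count)
gap = navi23 / rx5500xt - 1  # ~0.74
```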
> A few games use it very well, but others don't. I played through Control and it makes a big difference there. DLSS is needed with RT; even with a 3090 I had to play at 1080p to get a consistent 80+ fps, with 2560x1440 running at 45-60 fps. The image looks soft and has sampling noise as mentioned earlier, but it's much nicer than playing at 4K without RT. I didn't care about RT/DLSS when buying the card, but now I think it's an important feature even today.

Control is the game I really wish would implement AMD's analog to DLSS (I hope they do, as they have a next-gen console version incoming with RT support).
> What is the reason they don't keep the 256-bit bus coupled with 8 GB of memory for the midrange boards? Wouldn't it give better performance for 1080p cards in most situations, instead of 192-bit + 12 GB?

Well, that's assuming that N22 requires additional memory bandwidth. Given that it has half the CU count but 3/4 of the GDDR6 memory bandwidth (the IC amount is still not confirmed), I would hazard a guess and say it probably doesn't.
> What is the reason they don't keep the 256-bit bus coupled with 8 GB of memory for the midrange boards? Wouldn't it give better performance for 1080p cards in most situations, instead of 192-bit + 12 GB?

They probably want 12 GB of memory, and to offset the die size increase caused by Infinity Cache by using a simpler memory controller.
> What is the reason they don't keep the 256-bit bus coupled with 8 GB of memory for the midrange boards? Wouldn't it give better performance for 1080p cards in most situations, instead of 192-bit + 12 GB?

Cost and product differentiation. Additional MCs and cache take up more die space, which increases production costs. The extra VRAM capacity (16 vs. 12 GB) won't matter at 1080p and (most likely) 1440p, and it seems doubtful that bandwidth or cache size would be a bottleneck at 1080p either.
> Interesting dual-GPU appeared in AotS benchmark DB

I won't say any more than it's not dual-GPU, nor is it anything RDNA3 related.
> AMD Nashira Summit GPU spotted in AotS database (videocardz.com)
> Cost and product differentiation. Additional MCs and cache take up more die space, which increases production costs. The extra VRAM capacity (16 vs. 12 GB) won't matter at 1080p and (most likely) 1440p, and it seems doubtful that bandwidth or cache size would be a bottleneck at 1080p either.

I suggested they cut the memory to 8 GB, saving 4 GB to reduce cost, but at a higher bandwidth.
> I suggested they cut the memory to 8 GB, saving 4 GB to reduce cost, but at a higher bandwidth.

The only two ways to increase bandwidth in a traditional sense are to run a wider memory bus, which increases die size (and cost), or to use memory chips that clock higher, which also cost more.
> The only two ways to increase bandwidth in a traditional sense are to run a wider memory bus, which increases die size (and cost), or to use memory chips that clock higher, which also cost more.

The 8 GB could run on a 256-bit bus just like the 6800 cards, and if they wanted to save a little more they could opt for some slower GDDR6. No one would wonder why the midrange cards have less memory than the high end; it's obviously to differentiate the tiers and cut costs.
Where do the cost savings come from here? The 8 GB cards will have a smaller bus, but would need a 50% clock boost just to have the same bandwidth as the cards with the wider bus and 12 GB of VRAM. There isn't any VRAM with that much headroom to tap into; even the GDDR6X that Nvidia is using in its top-end cards isn't enough of a performance boost to make that feasible.
The only way your solution works out is if the card actually increased the size of the bus but used lower-capacity (1 GB vs. 2 GB) memory chips. But once again, that increases the die size, and everyone would wonder why the card is only being sold with 8 GB of memory instead of 16 GB.
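The bandwidth arithmetic behind the 50% figure is just bus width (in bytes) times per-pin data rate; a small sketch, with the 24 Gbps rate being a hypothetical value used only to show the required boost:

```python
def bandwidth_gb_s(bus_bits, data_rate_gbps):
    # GB/s = (bus width / 8 bits per byte) * per-pin data rate in Gbps
    return bus_bits / 8 * data_rate_gbps

wide = bandwidth_gb_s(192, 16)    # 384 GB/s: 192-bit @ 16 Gbps
narrow = bandwidth_gb_s(128, 16)  # 256 GB/s: 128-bit @ 16 Gbps
# Matching the wider bus from 128-bit needs a 50% faster data rate,
# i.e. a hypothetical 24 Gbps GDDR6 with no real-world part behind it:
assert bandwidth_gb_s(128, 16 * 1.5) == wide
```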
> The midrange cards have half the CUs, so they don't need the same bandwidth, considering they still have a lot of Infinity Cache to go along with the 192-bit bus. I don't really know to what degree the amount of Infinity Cache is tied to the number of memory controllers, but if it is, then they'd wind up with 128 MB of that as well. The extra memory controllers and cache are going to cost a lot more than you'd save using 6x 2 GB memory chips over 8x 1 GB chips.

I don't think the cache size is related to the memory controllers. I think they could reduce it to 64MB and still have a 256-bit memory controller. But I also think the engineers at AMD probably have a better understanding of what will give the best results, compared to my speculations.
> I don't think the cache size is related to the memory controllers. I think they could reduce it to 64MB and still have a 256-bit memory controller. But I also think the engineers at AMD probably have a better understanding of what will give the best results, compared to my speculations.

If N21 with 80 CUs has only 128MB of IC and a 256-bit memory controller, then N22 with only 40 CUs and 64MB of IC doesn't need 256-bit; even 128-bit should be good enough.
> If N21 with 80 CUs has only 128MB of IC and a 256-bit memory controller, then N22 with only 40 CUs and 64MB of IC doesn't need 256-bit; even 128-bit should be good enough.

N22 has 96 MB of Infinity Cache.
> N22 has 96 MB of Infinity Cache.

And a 192-bit bus. It's got 3/4 of the IC and bandwidth, but only half the CUs. If clocks were the same, then each N22 CU in theory gets 50% more bandwidth than on N21.
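The 50%-per-CU claim checks out: at equal memory clocks, bandwidth is proportional to bus width, so bits-per-CU is a fair proxy.

```python
# Bus bits per CU as a bandwidth proxy (equal memory clocks assumed):
n21_bits_per_cu = 256 / 80  # Navi 21: 256-bit bus, 80 CUs -> 3.2
n22_bits_per_cu = 192 / 40  # Navi 22: 192-bit bus, 40 CUs -> 4.8
ratio = n22_bits_per_cu / n21_bits_per_cu  # ~1.5, i.e. 50% more per CU
```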
> N22 has 96 MB of Infinity Cache.

It was meant just as an example of why there is no need for a 256-bit memory controller.
> I don't know if that necessarily makes the setup they've used overkill for Navi 22, though. Navi 10 did have a 256-bit memory bus, so it's obvious that AMD needs enough Infinity Cache to compensate for that. If they wanted to do it through memory clock speed alone, they'd need VRAM that's clocked 33% faster than what the 5700 XT uses. Navi 21 is using faster memory, but it's only about 15% faster, so not enough to close that gap alone. AMD could also be stuck using the older, slower VRAM that Navi 10 used simply due to supply constraints, but in either case they need something to pick up a little bit of the slack.

I don't understand why you compare Navi 22 against Navi 10 when we have Navi 21.
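The 33% and ~15% figures above fall out of simple ratios, assuming Navi 10 runs 14 Gbps GDDR6 on a 256-bit bus and Navi 21 runs 16 Gbps:

```python
# Matching Navi 10's 256-bit bandwidth on a 192-bit bus via clocks alone:
needed = 256 / 192 - 1  # ~0.33 -> VRAM would need to be ~33% faster
# What Navi 21's memory actually delivers over Navi 10's:
actual = 16 / 14 - 1    # ~0.14 -> only ~14-15% faster, not enough alone
```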
The additional capacity is likely a result of consoles moving to 16 GB of available memory. Obviously they split that between the CPU and GPU, but 10-12 GB is going to become the new norm over time. If someone buys one of these cards intending to hold on to it for five years, I suspect that's when we'll see a lot of titles where 8 GB isn't good enough, particularly at resolutions above 1080p. If you think of Navi 22 as a 1080p card, then yes, the extra 32 MB of Infinity Cache doesn't get you much compared to what you can get with only 64 MB. But these are going to be positioned as 1440p cards, and I think that if the clock speeds wind up being as good as they were with Navi 21, they could also serve as an entry-level 4K card, in much the same way the 3060 Ti can pull an acceptable average frame rate in many titles at that resolution.
> I don't understand why you compare Navi 22 against Navi 10 when we have Navi 21.

The 6700 XT is expected to boost higher than Navi 21. Even an AMD slide shows performance per clock starting to slow down above 2200 MHz. IMO the 6700 XT absolutely needs a 192-bit bus + 96MB of L3 cache if its boost clock is really 2500 MHz. It'll be just as good as the 3060 Ti at best, though.
Full Navi 21 has 2x as many CUs at higher clock speeds than Navi 10, yet it has the same 256-bit bus width; the only differences are faster 16 Gbps GDDR6 instead of 14 Gbps, plus the 128MB IC.
On the other hand, Navi 22 has 1/2 the CUs and 3/4 of the bandwidth (192-bit, 16 Gbps) and IC (96MB).
So either N22 has an overkill setup even if they use only 14 Gbps GDDR6, or 256-bit + 128MB is not enough for N21, which is unlikely considering it performs best at 4K relative to Navi 10.
It will be interesting to compare N23 (32 CUs, 64 ROPs, 64MB IC, 128-bit GDDR6) against N22 (40 CUs, 96 ROPs, 96MB IC, 192-bit GDDR6).