> Nah, Cyberpunk will not get RT on RDNA2 GPUs at launch, just like Godfall won't get RT on NVIDIA GPUs at launch. CDPR gave an official statement to Computerbase:
> Cyberpunk 2077: Ray tracing at launch only with Nvidia graphics cards. The development team at CD Projekt has updated the system requirements for the upcoming action RPG Cyberpunk 2077. (www.computerbase.de)

Yeah, it's been a confusing ride. Initially someone from CDPR said RT wouldn't be supported on AMD cards. Then they gave an official statement saying it would. Now they've released another official statement saying not at launch. Either way, unlike the first reports, it will be supported on AMD cards, just not at launch.
> I've been thinking about going to an RX 5600 XT. But if there will be an RX 6600 XT, I am going to wait. Been with Nvidia for FAR too long now. 6 years is enough. Considered going with a Ryzen 5600X and an RX 5600 XT.

In Poland there will be a promotion on Wednesday related to the Black Week sales, and I can snag a Core i3-10100F for 45€.
> RX 6700 XT incoming?
> AMD Radeon RX 6700 XT: Navi 22 GPU + 12GB GDDR6 = mid-range killer
> We have a new tease from Patrick Schur, who tweeted some details on the Navi 22 GPU power targets -- which should see the new Radeon RX 6700 series arriving as an amazing value for money set of cards. We're looking at around 40 Compute Units, with the Navi 22 XT GPU powering the higher-end Radeon RX 6700 XT.
> AMD will keep the core count on the new Radeon RX 6700 XT the same as the Radeon RX 5700 XT, but it will benefit greatly from better power efficiency, higher clock speeds, and brute performance -- as well as a purported 12GB of RAM, up from 8GB on the RX 5700 XT.
> We are looking at the Navi 22 XT using 186-211W of power, compared to the 225W of the Navi 10-based Radeon RX 5700 XT. The cut-down Navi 22 XL-based Radeon RX 6700 (non-XT) will reportedly use somewhere between 146-156W, which is much less than the 180W used by the Radeon RX 5700.
> We can expect the Radeon RX 6700 XT to ship with a narrower 192-bit memory bus, which points to 6GB or 12GB of GDDR6. With the Radeon RX 6800 and Radeon RX 6800 XT both packing 16GB of RAM, I would expect the Radeon RX 6700 series cards to all pack 12GB.
> As for when we'll see AMD launch its new Radeon RX 6700 series cards, January 2021 seems to be the window -- so expect some huge releases then, as NVIDIA has its new mid-range GeForce RTX 3060 Ti right around the corner...

What Patrick Schur listed is not the whole card (TBP) but GPU + VRAM (TGP); it was the same situation with his Big Navi leak. So expect a TBP comparable to Navi 10, unless these values are not for the reference design but only for OC AIB models.
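Side note on that 192-bit bus: here's a quick sketch of what it means for raw bandwidth. The 14 Gbps GDDR6 data rate is my assumption (it matches Navi 10), not something from the leak.

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width times per-pin data rate, over 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gb_s(256, 14))  # RX 5700 XT (Navi 10): 448.0 GB/s
print(bandwidth_gb_s(192, 14))  # rumored Navi 22: 336.0 GB/s
```

So Navi 22 would give up about a quarter of Navi 10's bandwidth, which is presumably where the Infinity Cache comes in.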
> Mac OS power tables suggest a 2.5 GHz turbo clock for Navi 22.

Based on what we've seen with Navi 21, we'll probably see clock speeds somewhere in the range of 20-30% higher, and the Infinity Cache will likely help compensate for the narrower bus width or any memory bottleneck that materializes as the clocks get pushed higher. I have a sneaking suspicion that Navi 22 is going to be a real monster at 1080p and 1440p thanks to the Infinity Cache. RX 5700 XT performance with a 20% boost, for the reasons above, puts it pretty close to RTX 2080 Super performance in general and even lets it match an RTX 2080 Ti in a number of titles at those lower resolutions.
> Mac OS power tables suggest a 2.5 GHz turbo clock for Navi 22.

The Navi 22 tables might be more at the limit of what's feasible, simply because its TDP will still be lower than Navi 21's with its more conservative clocks. Apple tends to prefer keeping its products thin over building them to cool anything that's pumping out 300W of heat. And looking at how conservative the macOS tables were on clock speeds for Navi 21, I'd say we're looking at considerably higher clock speeds for Navi 22.
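To put a number on "considerably higher": a one-liner comparing the rumored turbo clock to the RX 5700 XT's official boost clock. The 1905 MHz is AMD's spec-sheet figure; the 2500 MHz is from the macOS tables above.

```python
rx5700xt_boost_mhz = 1905  # AMD's official boost clock for the RX 5700 XT
navi22_turbo_mhz = 2500    # from the macOS power tables discussed above

print(f"+{navi22_turbo_mhz / rx5700xt_boost_mhz - 1:.0%}")  # ~+31%
```

That +31% lands right in the 30-35% range used below.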
> I asked an owner of an RX 6800 XT named HEAD from another forum (pctforum.tyden.cz) to test his card at 1800MHz and at the default clock speed; here are his findings.
> Everything is an average value and the tested game was Control:
> Frequency: 1810MHz (100%) vs 2230MHz (123%)
> Performance: 94.9FPS (100%) vs 105.9FPS (111.6%)
> Power consumption: 170W (100%) vs 255W (150%)
> Voltage was set to 0.8V; he said it can't be set any lower. I am not sure if the power consumption was for the whole card, I hope it was.
> It looks like AMD didn't really lie in their graph about power efficiency at lower clock speeds.
> It looks like AMD could have a very good lineup in laptops, finally!
> P.S. For comparison, his undervolting gave him 108FPS at 213W power consumption, with an average clock speed of 2251MHz.

As you can see, a 23% increase in clock speed resulted in only ~12% better performance. So even if Navi 22 clocks 30-35% higher than the RX 5700 XT, it's still questionable how much performance gain we will see, and the increased clock speed has a very bad effect on power consumption considering how little performance you gain in return.
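If anyone wants to double-check that math, here's a minimal Python sketch; the only inputs are the numbers HEAD reported above.

```python
# Clock-scaling numbers reported for Control on an RX 6800 XT.
low = {"clock_mhz": 1810, "fps": 94.9, "watts": 170}
high = {"clock_mhz": 2230, "fps": 105.9, "watts": 255}

clock_gain = high["clock_mhz"] / low["clock_mhz"] - 1  # ~+23%
fps_gain = high["fps"] / low["fps"] - 1                # ~+12%
power_gain = high["watts"] / low["watts"] - 1          # ~+50%

# Performance per watt at each operating point.
eff_low = low["fps"] / low["watts"]     # ~0.56 FPS/W
eff_high = high["fps"] / high["watts"]  # ~0.42 FPS/W

print(f"clock +{clock_gain:.1%}, fps +{fps_gain:.1%}, power +{power_gain:.1%}")
print(f"{eff_low / eff_high - 1:.0%} better FPS/W at the lower clock")
```

Roughly a third better performance per watt at the lower operating point, which is exactly why the laptop parts look promising.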
> RX 6700 XT incoming? AMD Radeon RX 6700 XT: Navi 22 GPU + 12GB GDDR6 = mid-range killer …

I know I'm not the only one that is happy that the "mid-range killer" and "budget killer" are now looking at $400 USD. It wasn't but a few years ago that those roles fell to the RX 480/580 and GTX 1060, all for about half that. The 480s and 580s were available for $200 with 4GB or $240 with 8GB; the 1060 6GB was about the same.
Those were advertised as VR-ready and 1440p-ready cards. I played at 1440p on an RX 480 for a few years. I had to dial a few things back, but it was very serviceable. Now, just 3-4 years later, the price to entry is twice as much?
> I've been thinking about going to an RX 5600 XT. But if there will be an RX 6600 XT, I am going to wait. …

The RX 5600 XT is a great card that I nearly got; it's pretty much on par with an RX 5700. Just consider the games you play and the resolution: the 6GB of VRAM might become a factor at some point. Considering how popular 6GB cards are, though, and if the rumors of a 6GB 5700 XT are true, you should be fine.
> I know I'm not the only one that is happy that the "mid-range killer" and "budget killer" are now looking at $400 USD. …

$199 was only the RX 480 4GB; the RX 480 8GB was $239, and the GTX 1060 6GB was $249, or $299 for the Founders Edition. But you are right that prices are still much higher, although it's better than it was with Turing.
> I know I'm not the only one that is happy that the "mid-range killer" and "budget killer" are now looking at $400 USD. …

Yeah, the sweet spot for GPUs was around $200-300 for almost 20 years.
> Yeah, the sweet spot for GPUs was around $200-300 for almost 20 years.

I think $400 today is roughly $300 in the 2010 era and easily $200 in the 2000 era; money has lost a lot of value over those years and will lose at least another 30-40% in the next few years.
<...>
The 2060 SUPER is arguably the current sweet spot, and that's a $400 card.
> $199 was only the RX 480 4GB; the RX 480 8GB was $239, and the GTX 1060 6GB was $249, or $299 for the Founders Edition. …

Polaris was only sanely priced early after release, until the mining boom. Even 470s were shooting up to over $300 when a few months earlier they were selling for $130 or less for 4GB models. The mining boom didn't hit Nvidia quite as hard, but they still saw retail prices jump on a few of their cards.
> After the release of the GTX 1060 (which was in July 2016) things have really slowed down.

The other reason we saw a lot of stagnation was that TSMC's 16nm stuck around for a while, almost as long as 28nm. Sure, Nvidia used the 12nm node for Turing, but that was essentially just a rebranding and didn't improve density. Meanwhile, AMD only released midrange consumer cards on TSMC's 7nm (I'm not counting the Radeon VII), so they weren't moving the needle on absolute performance either. Turing focused more on adding RT than anything else, so while there were some improvements over the Pascal architecture, there wasn't as much focus there as there historically was.
> $199 was only the RX 480 4GB; the RX 480 8GB was $239, and the GTX 1060 6GB was $249, or $299 for the Founders Edition. …

One, before the mining boom hit you could buy the 480/470 at deep, deep discounts; if I remember correctly you could get an RX 470 4GB for about $120 and an RX 470 8GB for around $150. Two, it seems everyone is ignoring that we used to get more performance at the same price with each new generation. I know that hasn't been the case for the past 4 years, but forgive me for not jumping for joy over prices kind of sort of returning to sanity.
The price is still high, but if that "budget killer" RTX 3060 Ti will cost "only" $399, then the price is not that bad, considering it performs like an RTX 2080 Super that originally cost $699.
I checked an old TPU review of the RX 480, and there were 3 games out of 16 where it barely managed 25FPS at 1440p. On the other hand, the RTX 3060 Ti, which is also aimed at 1440p, should manage at least 50FPS based on the RTX 3070's scores, which were over 60FPS even in the most demanding games TPU tested at 1440p.
I think the RX 6700 XT should be somewhere around the level of the RTX 3060 Ti; the CU vs SM counts are comparable, 40 vs 38.
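That ~50FPS guess pencils out if you scale the 3070's result by the SM ratio. A back-of-envelope sketch, assuming performance scales linearly with SM count (it never quite does):

```python
sm_3070, sm_3060ti = 46, 38  # published SM counts for the two GPUs
fps_3070_1440p = 60          # "over 60 FPS" in TPU's most demanding titles

print(f"~{fps_3070_1440p * sm_3060ti / sm_3070:.0f} FPS")  # ~50 FPS
```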
> I think $400 today is roughly $300 in the 2010 era and easily $200 in the 2000 era; money has lost a lot of value over those years and will lose at least another 30-40% in the next few years.

If you want to say inflation isn't a good indicator for computer hardware, compare it against the other components of the system. So comparing absolute $ values doesn't work beyond a generation or two, and don't bother with official inflation statistics either; those tools are meant for potatoes and cars.
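For what it's worth, the arithmetic in the quoted post roughly checks out. A quick sketch; the 2.5% annual rate is my assumption for illustration, not an official CPI figure:

```python
def to_todays_dollars(past_price: float, years_ago: int, annual_rate: float = 0.025) -> float:
    """Inflate a past price into today's money, assuming a flat annual rate."""
    return past_price * (1 + annual_rate) ** years_ago

# A ~$300 card from 2010 in 2020 money, at an assumed 2.5%/year:
print(f"${to_todays_dollars(300, 10):.0f}")       # ~$384

# Implied average annual rate if $400 today really equals $300 in 2010:
print(f"{(400 / 300) ** (1 / 10) - 1:.1%}/year")  # ~2.9%
```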
> One, before the mining boom hit you could buy the 480/470 at deep, deep discounts … forgive me for not jumping for joy over prices kind of sort of returning to sanity.

1.) What I wrote were prices at the release date, not discount prices.
2.) I think everyone knows the prices are high compared to what they used to be, but what can you do about it except not buy at all? And I am happy that prices have returned to more sane levels, even if they're still quite high.

> 1.) What I wrote were prices at the release date, not discount prices.

True, those were discount prices, but not only were cards cheaper in the past, they also got steep discounts several times a year. We haven't had discounts on any generation since then: no 10XX discounts, no 20XX discounts, no Vega discounts, no Navi discounts.
Many of you may disagree, but if you don't mind playing only at Full HD, you can save a lot of money on a new GPU by not buying the faster ones. For example, the RX 5500 XT is still capable of playing every tested game at max settings in Full HD according to TPU's review. Control was the most demanding game tested and it still managed 36FPS on average; that's not bad for the weakest RDNA1 card, which started selling at $169-199 for the 4GB/8GB versions.
igorslab said: Yeah, there's a lot of fun to be had with both cards, especially with the smaller Radeon RX 6800! The MPT only works in a roundabout way, but even so, the cards can be brought to their physically sensible maximum under normal conditions with air cooling. And that's enough, because you still won't overtake the next-fastest card. However, the RX 6800 XT then beats the GeForce RTX 3080 FE in almost all gaming benchmarks. If anyone needs it…
> One, before the mining boom hit you could buy the 480/470 at deep, deep discounts … forgive me for not jumping for joy over prices kind of sort of returning to sanity.

Yeah, the 400 series was starting to see discounts just to clear out inventory for the 500 series, which was really just rebranded 400-series cards anyhow, so prices were really low right up until they weren't. However, I don't really agree with the general sentiment that performance has stagnated, or that there was some magical era where $300 bought mind-blowing performance that's now locked behind $1,000 GPUs due to corporate greed. Consider the GeForce x70 cards and what they've been priced at over the generations:
> It seems that my card is one of the winners in the Silicon Lottery! I just played a bit with undervolting, and I can maintain the full 2600MHz core / 1100MHz memory at 115% power with 0.925V instead of the default 1.025V. (EDIT: something weird is going on; according to GPU-Z and the AMD driver, my vGPU stays at 1.025V no matter where I put the voltage slider, but the card will crash in 3D if I go below 0.9V, and performance changes as seen in 3DMark. Auto undervolting works as expected and applies a lower vGPU, but at stock clocks...)
> Checked with Fire Strike, and yes, performance goes up: in the more demanding first test the average GPU clock went from 2250MHz to 2350MHz, and the second test averages 2550MHz.
> https://www.3dmark.com/compare/fs/24104371/fs/24104234/fs/24084536

If you have time, can you please check performance in some modern benchmark at different GPU frequencies, like default, 2000, 1800, 1600, and 1400MHz? I already posted what I got from another user, and the result was that decreasing the clock speed by ~19% resulted in only 10.5% lower performance. Why do I want it? Because of the mobile versions; it looks interesting.