
[Rumor, Tweaktown] AMD to launch next-gen Navi graphics cards at E3

Status
Not open for further replies.
Hmm, I'm curious what the RX 5950 XT is 😀

Yeah, but obviously they are just registering everything they can think of ahead of time. Typically, though, the xx50 nomenclature is for the Pro cards, and the xx70 is for the XT version. So a 50 XT just makes me scratch my head.
 
Yeah, but obviously they are just registering everything they can think of ahead of time. Typically, though, the xx50 nomenclature is for the Pro cards, and the xx70 is for the XT version. So a 50 XT just makes me scratch my head.
I would assume it fits with the Ryzen 3950 CPU nomenclature, as the absolute top-end segment (I'm thinking TR3 may introduce a new naming scheme; it's getting crowded in the x9xx number space).
 
We will probably see more of Navi's uArch efficiency in that segment than we did in the RX 5700.
If the clocks are the same as Navi 10, don't count on better efficiency 😉. It's the same N7 process.

I personally expect a 20 CU chip that is close to GTX 1660 Ti performance, uses 90-125W of power, and has a die around 160 mm² in size.
 
That was my point: I think the clocks will be lower and we will see much better perf/watt, as it's further down the bell curve.
No chance 😉.

It may happen with the RX 5600 XT and 5600, but if the 5650 XT is going to be a real chip, it will be clocked to hell for price margin, competing with the GTX 1660 Ti. I can easily see a GPU that is between the GTX 1660 and 1660 Ti in performance and costs $249.
 
A $250 GPU with just 20 CUs? What happened to the GPU market, for God's sake? That should be the next-gen APU, or just a little more, not a $250 card. Coincidentally, that's the only way to justify a 40 CU GPU being $450: charging crazy amounts for something that should be entry level.
 
I personally expect a 20 CU chip that is close to GTX 1660 Ti performance, uses 90-125W of power, and has a die around 160 mm² in size.

If Navi 40 CU (2560 cores) equals GeForce RTX 2070 (2304), then Navi 20 CU (1280) wouldn't equal GeForce GTX 1660 Ti (1536), or even 1660 (1408). Navi 20 CU would be somewhere between GeForce GTX 1650 [Ti] (896 or 1024) and 1660.
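The raw shader counts above can be lined up in a quick sketch (core counts as stated in the post; 64 ALUs per CU is the standard GCN/RDNA figure, and the 1650 Ti count was only rumored at the time — this is a back-of-envelope comparison, not a benchmark):

```python
# Rough comparison of raw shader counts, using the figures given above.
cores = {
    "GTX 1650": 896,
    "GTX 1650 Ti": 1024,    # rumored at the time
    "GTX 1660": 1408,
    "GTX 1660 Ti": 1536,
    "RTX 2070": 2304,
    "Navi 40 CU": 40 * 64,  # 2560
    "Navi 20 CU": 20 * 64,  # 1280
}

navi20 = cores["Navi 20 CU"]
# By ALU count alone, a 20 CU Navi sits between the 1650 [Ti] and the 1660.
assert cores["GTX 1650 Ti"] < navi20 < cores["GTX 1660"]
print(navi20)  # 1280
```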
 
If Navi 40 CU (2560 cores) equals GeForce RTX 2070 (2304), then Navi 20 CU (1280) wouldn't equal GeForce GTX 1660 Ti (1536), or even 1660 (1408). Navi 20 CU would be somewhere between GeForce GTX 1650 [Ti] (896 or 1024) and 1660.
Oh, so before any reviews are released, we assume that the RX 5700 XT is slower than, or no faster than, the RTX 2070?

I would ask, then, why Nvidia is releasing a 2560 ALU GPU competing directly with the RX 5700 XT, heh?
 
Oh, so before any reviews are released, we assume that the RX 5700 XT is slower than, or no faster than, the RTX 2070?
He was making an "if X equals Y" statement about the relative performance of a 20 CU Navi vs the 1660, not disagreeing with the premise that 5700 XT = 2070.

A quick scan of this YouTube video seems to indicate the 1660 Ti is about 75% of 2070 performance:


Assuming the same clock speeds, then, the regular 1660 is about 69% of the 2070.
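The 69% figure follows from scaling the 75% number by the core counts — a rough linear-scaling assumption, sketched here rather than taken from any benchmark:

```python
# If the 1660 Ti is ~75% of a 2070, and performance scaled linearly with
# CUDA cores at equal clocks, the plain 1660 (1408 cores vs 1536) would be:
ti_vs_2070 = 0.75
cores_1660, cores_1660ti = 1408, 1536

est_1660_vs_2070 = ti_vs_2070 * cores_1660 / cores_1660ti
print(round(est_1660_vs_2070 * 100))  # 69
```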
 
He was making an "if X equals Y" statement about the relative performance of a 20 CU Navi vs the 1660, not disagreeing with the premise that 5700 XT = 2070.

A quick scan of this YouTube video seems to indicate the 1660 Ti is about 75% of 2070 performance:


Assuming the same clock speeds, then, the regular 1660 is about 69% of the 2070.
Why can't the same scaling appear with AMD GPUs?

It's funny that NOBODY questioned that Nvidia GPU performance does not scale with ALU counts, but everybody was happy to immediately say that it WILL scale with AMD GPUs, and that a 1280 ALU GPU will be exactly 50% of a 2560 ALU GPU.
 
Why can't the same scaling appear with AMD GPUs?

It's funny that NOBODY questioned that Nvidia GPU performance does not scale with ALU counts, but everybody was happy to immediately say that it WILL scale with AMD GPUs, and that a 1280 ALU GPU will be exactly 50% of a 2560 ALU GPU.
I have to admit I'm not overly familiar with nVidia scaling anymore; my last NV card was the 9600 GT, which I bought in 2008 before replacing it with an HD 5700 in 2009 (the irony that model numbers are rolling back isn't lost on me).
 
I have to admit I'm not overly familiar with nVidia scaling anymore; my last NV card was the 9600 GT, which I bought in 2008 before replacing it with an HD 5700 in 2009 (the irony that model numbers are rolling back isn't lost on me).
If you know the ALU counts for those GPUs, you do not have to be aware of Nvidia GPU scaling to see that something is off with Turing GPUs. And the higher tier you go, the worse performance scaling you get. Similar situation to 14 nm AMD GPUs.
 
If you know the ALU counts for those GPUs, you do not have to be aware of Nvidia GPU scaling to see that something is off with Turing GPUs. And the higher tier you go, the worse performance scaling you get. Similar situation to 14 nm AMD GPUs.
I have to wonder if this is intrinsic to the upper limits of parallelisation (Amdahl's law), or if it is rasterisation-specific.

The predication of the raster 3D graphics acceleration paradigm on fixed-function units has always struck me as something that doesn't lend itself well to scaling versus fully programmable (i.e. compute) units, though perhaps that is an unavoidable trade-off made for the sake of efficiency in power, drivers, and game coding.
 
Why should I care about that contrived nonsense, involving a Vega SKU that never existed? I prefer to base my analysis on the hard numbers given elsewhere in the presentation. The "typical board power" for the RX 5700 XT is specifically listed as 225W. And if we can take the performance numbers provided at face value, the RX 5700 XT averages about 11% higher performance than the RTX 2070. We know from independent testing that (regardless of what Nvidia claims) the actual TBP of RTX 2070 Founders Edition is 200W. That means that if perf/watt of Navi was equivalent to Turing, then RX 5700 XT should be a 222W card. Since it's actually a few watts higher, this means that Navi has slightly worse perf/watt than Turing. If they were on the same node, that negligible difference would be fine, but Navi can barely keep up in perf/watt despite a full node advantage. That's the problem.
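The perf/watt arithmetic in that post can be checked directly (all figures are the ones stated above; treating perf/watt as linear is a simplification):

```python
# Figures as stated in the post above.
rtx2070_tbp = 200           # measured TBP of the RTX 2070 Founders Edition
navi_perf_advantage = 1.11  # RX 5700 XT ~11% faster on AMD's own numbers
rx5700xt_tbp = 225          # AMD's listed "typical board power"

# Power the 5700 XT would need at Turing's perf/watt:
iso_efficiency_power = rtx2070_tbp * navi_perf_advantage
print(round(iso_efficiency_power))  # 222

# The actual 225W listing is slightly above that, i.e. slightly worse perf/watt.
assert rx5700xt_tbp > iso_efficiency_power
```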

Tell me, why should we care about a node advantage, as enthusiasts who use these cards to play video games? Had AMD or NVIDIA not told you one card was 7 nm and the other 12 nm, you would have no way of even knowing. Where is the problem? We are talking about ±10% in power efficiency...

NVIDIA isn't even going to release 7 nm cards until some time in 2020, so as of right now they don't exist. Once the 5700 XT hits the market, that is simply the best AMD can do in efficiency (among other factors), and NVIDIA's 2070 will be the best that it can do.
 
A $250 GPU with just 20 CUs? What happened to the GPU market, for God's sake? That should be the next-gen APU, or just a little more, not a $250 card. Coincidentally, that's the only way to justify a 40 CU GPU being $450: charging crazy amounts for something that should be entry level.

Why does the number of CUs matter? That's like being annoyed that a CPU with X number of transistors costs Y amount. A single CU in Navi does not equal a CU in Polaris, or Tonga, or Hawaii, or Tahiti, etc. Overall performance is all that should matter.
 
Why can't the same scaling appear with AMD GPUs?

It's funny that NOBODY questioned that Nvidia GPU performance does not scale with ALU counts, but everybody was happy to immediately say that it WILL scale with AMD GPUs, and that a 1280 ALU GPU will be exactly 50% of a 2560 ALU GPU.

Nvidia's GPU performance does scale nicely with ALU counts:
[Charts: relative performance at 1920×1080 and 2560×1440]
The 1660 Ti is between 70-72% of the RTX 2070 and 83-84% of the RTX 2060, with 66% and 80% of their ALUs respectively. Given that the 1660 Ti in this comparison has a 1770 MHz boost clock vs 1620 MHz for the 2070 and 1680 MHz for the 2060, that tracks very closely with what the ALU counts say they would perform relative to each other.
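That claim is easy to sanity-check with the numbers given, under the rough assumption that performance scales with ALU count × boost clock:

```python
# Simple model: estimated performance ∝ ALU count × boost clock (MHz).
# Core counts and clocks are the figures from the post above.
def rel_perf(alus_a, clock_a, alus_b, clock_b):
    """Estimated performance of card A relative to card B, in this model."""
    return (alus_a * clock_a) / (alus_b * clock_b)

gtx1660ti = (1536, 1770)
rtx2070 = (2304, 1620)
rtx2060 = (1920, 1680)

print(round(100 * rel_perf(*gtx1660ti, *rtx2070)))  # 73, vs 70-72% measured
print(round(100 * rel_perf(*gtx1660ti, *rtx2060)))  # 84, vs 83-84% measured
```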
 