[Rumor, Tweaktown] AMD to launch next-gen Navi graphics cards at E3

Status
Not open for further replies.

Glo.

Diamond Member
Apr 25, 2015
5,734
4,611
136
Has anyone heard anything about the Dali APU that was on their roadmaps?

I was kinda hoping to build a SFF system around a 7nm Zen 2/Navi low-power chip in the near future, but rumblings say it went poof along with GlobalFoundries' 7nm process.
It did not go "poof".

Just wait.
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,159
1,102
136
https://forum.beyond3d.com/posts/2072062/

"Alright, got some info about Navi, don't ask about the source, but it's reliable as hell, and I trust it implicitly.

The highest SKU launching will be named RX 5700 XT, 40CU, 9.5TFLOPS, 1900MHz max clocks, with 1750MHz being the typical gaming clock. Power delivery is through 2X 6pin connectors."

https://forum.beyond3d.com/posts/2072069/

"It's legit. It appears Navi indeed sacrificed compute to gain more pixel pushing power, just like Digital Foundry predicted/anticipated. A Vega 64 is 12.5 TFLOPS, yet an RX 5700 is 8.5 TFLOPS at typical gaming clocks, and it's faster than a Vega 64"

The problem is that Radeon VII is only 18% faster than Vega 64 and 31% faster than Vega 56 in real-world benchmark averages.
https://gpu.userbenchmark.com/Compare/AMD-RX-Vega-64-vs-AMD-Radeon-VII/3933vs4035

So where is this 15-20% improvement over Vega coming from? That would mean it is equal to the Radeon VII, which is not really a flagship other than the amount of video memory it packs.
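The quoted 40 CU / 9.5 TFLOPS rumor can be sanity-checked with the usual AMD throughput arithmetic; a quick sketch, assuming the standard GCN-era constants of 64 stream processors per CU and 2 FLOPs per SP per clock (FMA):

```python
# Back-of-envelope check of the rumored RX 5700 XT figures.
# Assumes 64 stream processors per CU and 2 FLOPs/SP/clock (FMA),
# standard for AMD GPUs of this era.
def tflops(cus, clock_mhz, sp_per_cu=64, flops_per_sp=2):
    return cus * sp_per_cu * flops_per_sp * clock_mhz * 1e6 / 1e12

peak = tflops(40, 1900)  # at the rumored 1900 MHz max clock
game = tflops(40, 1750)  # at the 1750 MHz typical gaming clock
print(f"peak: {peak:.2f} TFLOPS, gaming: {game:.2f} TFLOPS")
```

That works out to roughly 9.7 TFLOPS peak and 9.0 TFLOPS at the gaming clock, close to the rumored 9.5 figure, so the leaked numbers are at least internally consistent.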
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,159
1,102
136
Why not just wait for tomorrow, eh?
I will. I want to see what the Navi situation is all about. I have no loyalty to either Nvidia or AMD in graphics. I need a new card to replace my 970 which still has legs to it.
 

JasonLD

Senior member
Aug 22, 2017
485
445
136
The problem is that Radeon VII is only 18% faster than Vega 64 and 31% faster than Vega 56 in real-world benchmark averages.
https://gpu.userbenchmark.com/Compare/AMD-RX-Vega-64-vs-AMD-Radeon-VII/3933vs4035

So where is this 15-20% improvement over Vega coming from? That would mean it is equal to the Radeon VII, which is not really a flagship other than the amount of video memory it packs.

Probably comparing from a CU vs. CU standpoint, not Navi vs. Vega 64 in terms of overall performance. We have already seen how Navi would perform (it should be within 5% of a 2070 in most titles).
 

jpiniero

Lifer
Oct 1, 2010
14,698
5,329
136
Microsoft announced they are releasing Project Scarlett at the end of 2020... has "4X more performance" than the XBX and (yes) claims it has hardware accelerated RT support.
 
Mar 11, 2004
23,102
5,581
146
It did not go "poof".

Just wait.

I'm guessing that will be their big CES 2020 consumer news.

The problem is that Radeon VII is only 18% faster than Vega 64 and 31% faster than Vega 56 in real-world benchmark averages.
https://gpu.userbenchmark.com/Compare/AMD-RX-Vega-64-vs-AMD-Radeon-VII/3933vs4035

So where is this 15-20% improvement over Vega coming from? That would mean it is equal to the Radeon VII, which is not really a flagship other than the amount of video memory it packs.

Radeon VII also has a lot of compute-focused aspects to its design. Navi is likely going to be more streamlined for gaming performance. That doesn't mean it won't have compute capability (it'll likely exceed Polaris, and might equal Vega 64). But GCN was fairly compute-heavy (which is why, when leveraging features like Async Compute, RPM, and others, it would actually live up more to its potential).

Probably comparing from a CU vs. CU standpoint, not Navi vs. Vega 64 in terms of overall performance. We have already seen how Navi would perform (it should be within 5% of a 2070 in most titles).

I don't see any reason why it couldn't exceed Vega 64 gaming performance. Vega 64 wasn't even 2x Polaris, and Navi should be more in line with 2x Polaris.

And I wouldn't say we know that it'll perform like that. AMD's stuff tends to gain a decent bit of performance over time (Polaris, for instance, started out at around GTX 970 level, but I think it ended up beating the GTX 980 consistently). And I think the 2070 is a good 5-10% faster than Vega 64 anyway. There are also ways they can game things as well.

Microsoft announced they are releasing Project Scarlett at the end of 2020... has "4X more performance" than the XBX and (yes) claims it has hardware accelerated RT support.

I'd guess the theoretical performance is closer to 2x, with it being about 12 TF to the One X's ~6 TF. That said, I could see it offering 4x the framerates when doing native 4K rendering at higher graphics settings (maybe with some hybrid ray-traced lighting). And there are several areas where it'll likely be more than 4x (CPU performance, and they've already said loading, which is basically storage speed, should be 40x). Memory bandwidth and GPU performance should be about double, though, I expect.
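The multiples above are easy to lay out explicitly; a quick sketch, where both TF figures are the thread's rough assumptions rather than official specs:

```python
# Where the "4x" claim could and couldn't come from: a ~12 TF guess
# for Scarlett is only ~2x the One X's ~6 TF of raw GPU compute,
# so the rest has to come from CPU, storage, and efficiency gains.
one_x_tf = 6.0        # Xbox One X, ~6 TF FP32
scarlett_tf = 12.0    # assumed figure from the post, not official
compute_ratio = scarlett_tf / one_x_tf
storage_ratio = 40.0  # Microsoft's stated loading-speed improvement
print(f"raw GPU compute: {compute_ratio:.1f}x, storage: {storage_ratio:.0f}x")
```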
 
Mar 11, 2004
23,102
5,581
146
I would assume "game clock" likely means max sustained clock speed (i.e., the speed it can maintain without throttling), or possibly just them picking the optimal clock for power and thermals, such that it offers the best tradeoff of performance against those. I think that's smart, and hardware reviewers should have been pushing for such a standard, as boost clocks have made hardware reviews complex and difficult to really interpret (it's not just GPUs, either).

Because previous versions of these games were made with that engine and changing engine would imply a huge rewrite of the base code they else could reuse. Not going to happen.

That's their reasoning, but when doing that leads to them releasing ever-buggier games, well, I hope people stop buying Bethesda's games, because they need a reality check on why it's stupid to stick with what was already considered a poor game engine and to push out buggy games that need mods just to fix them in the first place. Frankly, I'd wager that codebase is exactly the issue and could really stand to be reworked. It's not like it's leading to short development times: it'll be probably 10 years from Skyrim before we get another proper Elder Scrolls. Fallout 3 to 4 was 7. Fallout 76 was only 3, but it was even buggier than their previous games, and was lacking as a game as well.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
The highest SKU launching will be named RX 5700 XT, 40CU, 9.5TFLOPS, 1900MHz max clocks, with 1750MHz being the typical gaming clock. Power delivery is through 2X 6pin connectors."

So they're going to push the clocks to the absolute limits again, completely ignoring perf/watt. Just great.
(And why 2x6-pin instead of 1x8-pin? Both have the same power limit.)

The only way this would be excusable is if the RX 5700 non-XT comes in at a decent TDP (150W or less).

RTX 2080 has a real-world TDP of 225W. That's on 12nm (optimized 16FF+). With a node advantage, AMD should be able to do significantly better if their new architecture is any good. Otherwise, they're going to be caught with their pants down again when Nvidia finally moves to 7nm.
 
Last edited:

itsmydamnation

Platinum Member
Feb 6, 2011
2,808
3,274
136
So they're going to push the clocks to the absolute limits again, completely ignoring perf/watt. Just great.
(And why 2x6-pin instead of 1x8-pin? Both have the same power limit.)

The only way this would be excusable is if the RX 5700 non-XT comes in at a decent TDP (150W or less).

RTX 2080 has a real-world TDP of 225W. That's on 12nm (optimized 16FF+). With a node advantage, AMD should be able to do significantly better if their new architecture is any good. Otherwise, they're going to be caught with their pants down again when Nvidia finally moves to 7nm.
blah blah blah boring.........
 

exquisitechar

Senior member
Apr 18, 2017
657
872
136
Seems like 40 CU Navi is just as nice as I had predicted. That's one big improvement over Vega.

Can't wait for the big, >64 CU Navi. :)
Microsoft announced they are releasing Project Scarlett at the end of 2020... has "4X more performance" than the XBX and (yes) claims it has hardware accelerated RT support.
Next gen consoles will have some great hardware at a good price.
 

prtskg

Senior member
Oct 26, 2015
261
94
101
So they're going to push the clocks to the absolute limits again, completely ignoring perf/watt. Just great.
(And why 2x6-pin instead of 1x8-pin? Both have the same power limit.)

The only way this would be excusable is if the RX 5700 non-XT comes in at a decent TDP (150W or less).

RTX 2080 has a real-world TDP of 225W. That's on 12nm (optimized 16FF+). With a node advantage, AMD should be able to do significantly better if their new architecture is any good. Otherwise, they're going to be caught with their pants down again when Nvidia finally moves to 7nm.
How much do you expect in a single generation?
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,808
3,274
136
So what price do we think? I'm praying for $299 USD; that's $70 more than the RX 480 launch, which should easily account for the extra 7nm and GDDR6 premium relative to the same point in time in 2016. Same price as the 4870 launch :)

$399 would be stupid; NV can react to that in a heartbeat, with both more performance and lower prices.
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,159
1,102
136
So what price do we think? I'm praying for $299 USD; that's $70 more than the RX 480 launch, which should easily account for the extra 7nm and GDDR6 premium relative to the same point in time in 2016. Same price as the 4870 launch :)

$399 would be stupid; NV can react to that in a heartbeat, with both more performance and lower prices.

Let's not forget that Nvidia has been lowering the prices on their video cards recently. Demand is flat. The bitcoin boom is over and gone. People forget that. Fortunately both AMD and Nvidia are publicly traded companies. Investors expect to see growth in revenues. The best way to do that is to lower prices.

I thought the RX 480's pre-bitcoin-bubble launch price was $199?
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,808
3,274
136
Let's not forget that Nvidia has been lowering the prices on their video cards recently. Demand is flat. The bitcoin boom is over and gone. People forget that. Fortunately both AMD and Nvidia are publicly traded companies. Investors expect to see growth in revenues. The best way to do that is to lower prices.

I thought the RX 480's pre-bitcoin-bubble launch price was $199?
That was the 4GB model, not the 8GB.
 

Glo.

Diamond Member
Apr 25, 2015
5,734
4,611
136
So they're going to push the clocks to the absolute limits again, completely ignoring perf/watt. Just great.
(And why 2x6-pin instead of 1x8-pin? Both have the same power limit.)

The only way this would be excusable is if the RX 5700 non-XT comes in at a decent TDP (150W or less).

RTX 2080 has a real-world TDP of 225W. That's on 12nm (optimized 16FF+). With a node advantage, AMD should be able to do significantly better if their new architecture is any good. Otherwise, they're going to be caught with their pants down again when Nvidia finally moves to 7nm.
Why don't you wait for the keynote, before you say anything, eh?

So far it is pretty clear: people expect miracles from AMD's new GPUs.

P.S. What makes you believe the N7 process is good for GPUs, eh?
 
Last edited:

soresu

Platinum Member
Dec 19, 2014
2,727
1,931
136
I'd guess the theoretical performance is closer to 2x, with it being about 12TF to the One X's ~6TF. Which, I could see it offering 4x the framerates when doing native 4K rendering at higher graphics levels (maybe with some hybrid ray-traced lighting). And there's several areas where it'll likely be more than 4x (CPU performance, and they've already said loading - which is basically storage speeds, should be 40x). Memory bandwidth and GPU performance should be about double though I expect.
They could easily be talking FP16, considering that, unlike the PS4 Pro, the XBX chip could not do double-rate FP16; the PS4 Pro chip has 8.4 TFLOPS FP16, whereas the XBX has just 6 TFLOPS FP16, the same as its FP32 compute.
It wouldn't surprise me if Microsoft meant it has 24 TFLOPS FP16, 4x the XBX's FP16, which makes far more sense economically for MS.
They wouldn't be the first console maker to get people assuming an upcoming console was better than it was due to ambiguous statements. Anyone remember Nintendo implying that the Wii U CPU was POWER7 by saying it used the Watson architecture?
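The FP16 argument above works out exactly; a quick sketch, where the 12 TF Scarlett figure is an assumption carried over from the thread rather than an official spec:

```python
# Double-rate FP16 ("rapid packed math") doubles FP16 throughput over
# FP32; the Xbox One X GPU lacks it, so its FP16 rate equals its FP32.
ps4_pro_fp16 = 4.2 * 2    # PS4 Pro: 4.2 TF FP32 with double-rate FP16
xbx_fp16 = 6.0 * 1        # XBX: no double-rate FP16, FP16 == FP32
scarlett_fp16 = 12.0 * 2  # assumed 12 TF FP32 with double-rate FP16
print(scarlett_fp16 / xbx_fp16)  # exactly the claimed 4x
```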
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,159
1,102
136
They could easily be talking FP16, considering that, unlike the PS4 Pro, the XBX chip could not do double-rate FP16; the PS4 Pro chip has 8.4 TFLOPS FP16, whereas the XBX has just 6 TFLOPS FP16, the same as its FP32 compute.
It wouldn't surprise me if Microsoft meant it has 24 TFLOPS FP16, 4x the XBX's FP16, which makes far more sense economically for MS.
They wouldn't be the first console maker to get people assuming an upcoming console was better than it was due to ambiguous statements. Anyone remember Nintendo implying that the Wii U CPU was POWER7 by saying it used the Watson architecture?

I looked up when Xbox One was released. Oct 2012. They announced at E3 of 2012 and released in Oct. 2012. The next Xbox comes out in 2020 after announcing @ E3 in 2019. Something is wrong here. I know the Xbox One X is upgraded but the Xbox One should have been replaced 2 or 3 years ago.
 

soresu

Platinum Member
Dec 19, 2014
2,727
1,931
136
I looked up when Xbox One was released. Oct 2012. They announced at E3 of 2012 and released in Oct. 2012. The next Xbox comes out in 2020 after announcing @ E3 in 2019. Something is wrong here. I know the Xbox One X is upgraded but the Xbox One should have been replaced 2 or 3 years ago.
What site gave this information? The Wii U was released in November 2012, perhaps the site confused the two.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Looking at the press-deck leaks and taking into account AMD's own vague comparisons, I am doubling down now on Vega 64 / GTX 1080 performance at ~185-200 watts.

It's going to have to be $299 in the face of RTX 2060s going for $330 regularly.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Looking at the press-deck leaks and taking into account AMD's own vague comparisons, I am doubling down now on Vega 64 / GTX 1080 performance at ~185-200 watts.

It's going to have to be $299 in the face of RTX 2060s going for $330 regularly.

So you are saying it has to be $30 cheaper than what will most likely end up being a slower card? If AMD didn't think they could come close to the 2070, they would not have used that as their benchmark publicly. The wattage will most likely be close. The price may even be close, but the performance should be better than a 2060.

We find out in ~~2 hours~~ 5 hours.
 
Last edited: