[Rumor, Tweaktown] AMD to launch next-gen Navi graphics cards at E3


jpiniero

Lifer
Beating fixed function is hard. My guess is that you will see AMD add fixed function hardware for RT, but not this year.
 

DrMrLordX

Lifer
You are confusing publishers with developers

. . . I guess?

id Tech seems to be landlocked to Bethesda based developers unfortunately

Lame, and kinda stupid. Why not crowd out other dev houses by allowing the licensing of id Tech?


Which Fallout uses id Tech?

There's little point speculating much only two days out; the Next Horizon disclosure on Zen 2 was heavy on uArch details, and I'd expect the one on Monday to do the same for Navi/RDNA, though the absolute nitty-gritty will likely be saved for Hot Chips in August.

Yeah, but waiting is boring.
 

soresu

Platinum Member
Lame, and kinda stupid. Why not crowd out other dev houses by allowing the licensing of id Tech?
I have no idea, to be honest. I know that EA are kinda doing the same thing with Frostbite, shared between DICE and BioWare.
From the sounds of it, BioWare were quite happy with the Unreal engine but had to abandon it for Frostbite and rewrite their toolsets to work with it. Whether EA forced them as a cost-saving measure I don't know, but Frostbite games do seem to come with nasty bugs here and there.
Which Fallout uses id Tech?
My bad, I must have gotten mixed up with a Bethesda announcement that contained Doom 2016 and Fallout.
 
Mar 11, 2004
23,102
5,581
146
You are confusing publishers with developers - though I can understand why, given that today most major developers are handcuffed to a specific publisher.
id Tech seems to be landlocked to Bethesda based developers unfortunately, which is a shame as Raven used to be able to work magic with their engines. This is why id Tech is only found on Doom, Wolfenstein and Fallout now (Rage 2 uses something else).

As to RT and mGPU, in theory RT is already a highly parallel, scalable compute problem - if you can do 1080p with one Vega 56, then 2-4 Navi cards should eat 4K for breakfast, even without FF ray traversal/intersection acceleration. The main problem there is that even pure RT still needs to rasterise each frame to make it viewable - unless you are using a lightfield display, that is.

It's worse than that. Even other Bethesda teams don't seem to be using it. They're sticking with their updated Gamebryo engine for Elder Scrolls and Fallout, never mind how buggy it is (to the point that it's stopped being "charming" and they're shipping almost outright broken games with it, going by the last Fallout release). It's not like they can seriously argue against using id Tech because it'd be buggy, as their games are already buggy and seem to be getting worse in that regard.

Which, I get not forcing devs to use any one game engine (I think that caused issues at EA when they were forcing DICE's engine onto everyone from RPGs to racing games), but I really don't get them sticking with buggy game engines over what seems to be one of the better ones.

There's little point speculating much only two days out; the Next Horizon disclosure on Zen 2 was heavy on uArch details, and I'd expect the one on Monday to do the same for Navi/RDNA, though the absolute nitty-gritty will likely be saved for Hot Chips in August.

It gave some details but it didn't really tell us all that much. I doubt we even get that much info out of this event. We probably still won't get prices or real details. It'll just be "new arch, GDDR6, next gen!" type of hype. They'll probably demo it matching a 2070 again. And we'll get some token devs going "we've had a relationship with AMD for ___years, they're a great partner, we're soooo interested in the new architecture" type of nonsense.

I would really like to be proven wrong, but I'm not expecting anything but more vagueness (they'll talk up features - like how they talked up Rapid Packed Math and stuff - that mean nothing to gamers, because there's no way for gamers to quantify any tangible improvement from them).

And that's why it doesn't even matter much, as it still comes down to performance for the price. It's sounding more like I won't be upgrading from my RX 480 for Navi, at least anytime soon, if the pricing rumors are anywhere close to true. I doubt there are many fundamental changes: no major performance improvement for the ~$250 market, and no revolutionary feature. It's not going to come out and hammer RTX in ray tracing even on the higher-end versions, there's not going to be an Eyefinity-style bombshell, and we'll probably just get some minor iterative improvement to the video processing block and video outputs (I won't even be surprised if AMD doesn't ship Navi with HDMI 2.1 ports). And even though I do think mGPU is part of the development going on with AMD GPUs, I'm doubtful we get anything related to that at Navi's release, although they might show mGPU scaling well in some specific game, like they did with Vega (I think it was?).
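As a rough sanity check on the quoted 1080p-to-4K RT scaling argument above: the raw pixel arithmetic does come out to about 4x, assuming ray counts scale linearly with resolution and ignoring the rasterisation/compositing and multi-GPU overheads the quote itself flags. A back-of-the-envelope sketch, not a performance prediction:

```python
# Back-of-the-envelope for the quoted RT scaling claim.
# Assumes rays per frame scale linearly with pixel count and ignores
# rasterisation, memory and multi-GPU overheads (big simplifications).
pixels_1080p = 1920 * 1080
pixels_4k = 3840 * 2160

scale_factor = pixels_4k / pixels_1080p
print(scale_factor)  # 4.0 -> roughly 4x the ray throughput of one 1080p-capable card
```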
 

Guru

Senior member
Amazing reveal: great performance gains over Vega, with 25% more performance for an equivalent GPU and a 50% power reduction. That is huge. AMD has been behind Nvidia in power consumption, but it seems like this will let them beat Nvidia there for the first time, all while providing 25% more performance than Vega.

If it's 10% faster than the RTX 2070 and priced at $450 it will be amazing; hopefully their RX 5600 also undercuts the RTX 2060 by $50 while being 10% faster. For $300, a GPU 10% faster than the RTX 2060 with 8GB of GDDR6 is going to be amazing, and it's probably going to be my next card.
 

beginner99

Diamond Member
It's worse than that. Even other Bethesda teams don't seem to be using it. They're sticking with their updated Gamebryo engine for Elder Scrolls and Fallout, never mind how buggy it is

Because previous versions of these games were made with that engine, and changing engines would mean a huge rewrite of the base code they could otherwise reuse. Not going to happen.
 

Glo.

Diamond Member
https://forum.beyond3d.com/posts/2072062/

"Alright, got some info about Navi, don't ask about the source, but it's reliable as hell, and I trust it implicitly.

The highest SKU launching will be named RX 5700 XT, 40CU, 9.5TFLOPS, 1900MHz max clocks, with 1750MHz being the typical gaming clock. Power delivery is through 2X 6pin connectors."

https://forum.beyond3d.com/posts/2072069/

"It's legit. It appears Navi indeed sacrificed compute to gain more pixel pushing power, just like Digital Foundry predicted/anticipated. A Vega 64 is 12.5 TFLOPS, yet an RX 5700 is 8.5 TFLOPS at typical gaming clocks, and it's faster than a Vega 64"
 

exquisitechar

Senior member
https://forum.beyond3d.com/posts/2072062/

"Alright, got some info about Navi, don't ask about the source, but it's reliable as hell, and I trust it implicitly.

The highest SKU launching will be named RX 5700 XT, 40CU, 9.5TFLOPS, 1900MHz max clocks, with 1750MHz being the typical gaming clock. Power delivery is through 2X 6pin connectors."

https://forum.beyond3d.com/posts/2072069/

"It's legit. It appears Navi indeed sacrificed compute to gain more pixel pushing power, just like Digital Foundry predicted/anticipated. A Vega 64 is 12.5 TFLOPS, yet an RX 5700 is 8.5 TFLOPS at typical gaming clocks, and it's faster than a Vega 64"
Hmm, if 8.5 TFLOPS Navi is decently faster than a Vega 64, AMD was underselling Navi's improvements with the 1.25x performance per clock figure.
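Quick numbers on that, treating the 1.25x figure as performance per FLOP for the sake of argument (AMD's slide may define it differently):

```python
# If an 8.5 TFLOPS Navi merely *matches* a ~12.5 TFLOPS Vega 64, the
# required gain in performance per FLOP already exceeds 1.25x.
vega64_tflops = 12.5   # Vega 64 peak FP32
navi_tflops = 8.5      # rumored "typical gaming clock" figure from the leak

required_ratio = vega64_tflops / navi_tflops
print(round(required_ratio, 2))  # ~1.47 just to match, more if it's actually faster
```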
 

Glo.

Diamond Member
Hmm, if 8.5 TFLOPS Navi is decently faster than a Vega 64, AMD was underselling Navi's improvements with the 1.25x performance per clock figure.
AMD claimed that figure is an average over 30 games at 4K, or something like that.

AMD could've sandbagged the number and undersold the ACTUAL IPC gains.
 

DrMrLordX

Lifer
Testing 1 2 3, can't reply to this thread normally.

@Glo.

Anaconda allegedly has 12 TFLOP Navi next year in a console that probably won't crack $500. Hmm.

Also, VegaFE had a blower . . .
 
Last edited:

exquisitechar

Senior member
Testing 1 2 3, can't reply to this thread normally.

@Glo.

Anaconda allegedly has 14 Tflop Navi next year in a console that probably won't crack $500. Hmm.

Also, VegaFE had a blower . . .
Where did you get that Anaconda will have a 14TF GPU? From that tweet?

That leak has been proven to be BS because it says the GPU is Arcturus, which is not a console chip codename according to AMD's bridgman.
 

DrMrLordX

Lifer
Where did you get that Anaconda will have a 14TF GPU? From that tweet?

That leak has been proven to be BS because it says the GPU is Arcturus, which is not a console chip codename according to AMD's bridgman.

Leak in January from JeuxVideo:

http://www.jeuxvideo.com/news/99358...r-les-specifications-font-leur-apparition.htm

In turn, their source:


No mention of Arcturus. It claims a 12TFlop version of Navi. PS5 will allegedly only have an 8TFlop version of Navi.
 

piesquared

Golden Member
I like the new 'game clock' (if true); it takes a lot of confusion and guesswork out of the narrative. Does Nvidia have something similar, or is it just somewhere between here and there?
 

soresu

Platinum Member
AMD could've sandbagged the number and undersold the ACTUAL IPC gains.
Bear in mind the AMD slide specifically used IPC (Instructions Per Clock), not FPS per clock - IPC sounds more like a compute-specific improvement of the uArch rather than a game/FPS improvement.
Game Clock - All Core/CU Clock/Boost Clock - Single CU Turbo?
Modern GPUs and shader code are intrinsically parallel in nature; there is no point in a single-threaded GPU workload, so unlike with CPU cores, a single-CU turbo makes no sense.
On the other hand, a turbo for each individual CU would indeed be interesting, considering that a GPU can run more than one workload besides GFX at once - e.g. TrueAudio Next, TressFX, physics, or even ML/AI inference now.
 

soresu

Platinum Member
Has anyone heard anything about the Dali APU that was on their roadmaps?

I was kinda hoping to build an SFF system around a 7nm Zen 2/Navi low-power chip in the near future, but rumblings say it went poof along with GlobalFoundries' 7nm process.
 

Bouowmx

Golden Member
Is Navi going to be a real showing of foundry 7 nm transistor density? Vega 20 was an extremely poor display of density.
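For context on the Vega 20 point, a quick density calculation using the commonly reported figures (roughly 13.2 billion transistors on a ~331 mm^2 die; treat both as approximate):

```python
# Approximate transistor density of Vega 20 on TSMC 7 nm.
# Figures are the commonly reported ones, not official die measurements.
transistors = 13.2e9   # ~13.2 billion transistors
die_area_mm2 = 331     # ~331 mm^2 die

density_mtr_per_mm2 = transistors / die_area_mm2 / 1e6
print(round(density_mtr_per_mm2, 1))  # ~39.9 MTr/mm^2
```

That lands well below the headline densities quoted for the process, though HPC-oriented designs never hit those marketing numbers.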
 