[Rumor, Tweaktown] AMD to launch next-gen Navi graphics cards at E3

Status
Not open for further replies.

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
8,313
3,177
146
Guys, why are we discussing AMD vs Nvidia so much in this thread? The back-and-forth, thinly veiled trolling needs to stop. AFAIK this thread is about AMD's next GPU releases, not a bunch of back and forth on the 7970 vs 680. Let's stay on topic please, and not turn every thread into an AMD vs Nvidia fight.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
It's way too early to argue that Nvidia's implementation of raytracing will become the standard just like how CUDA became the standard for GPGPU programming. CUDA went years without an effective response against it, unlike raytracing, which will almost certainly see a competing implementation less than 2 years after Nvidia's introduction ...

Also, each vendor likely has a very different HW implementation of raytracing under the hood, so don't get the idea that either AMD or Intel will follow the DXR specification as-is without any of their own unique traits. Based on AMD's patent alone, they do hardware-accelerated intersection testing in the TMUs and handle the BVH traversal in the shaders. Intel looks to be seriously pushing the idea of programmable traversal shaders in one of their research papers, so it's very likely that they're customizing their shader/ISA design to handle this as efficiently as possible. As for Nvidia, from extensive analysis their RT cores do fixed-function BVH traversal while the ray intersection tests are done on the tensor/shader cores ...

Intel's possible implementation and NV's current implementation look to be polar opposites of each other, with wildly different performance characteristics depending on the workload design ...

Each vendor's design comes with its own set of strengths/limitations, and by no means is Nvidia's implementation impervious, as you seem to imply it is. With AMD's RT implementation, doing customized intersection test shader programs would be a very bad idea since they couldn't use the intersection units built into their TMUs. Intel's traversal shader concept is good for reducing bandwidth consumption, but there is some overhead involved with the additionally generated shader invocations. With Nvidia, ray traversal is totally fixed-function, so traversal shaders would be a bad idea on their hardware since they'd have to emulate them without being able to use their RT cores ...

Come the DXR 2.0 specification, if Microsoft chooses to standardize Intel's traversal shaders it could very well end badly for Nvidia, because then they'd be forced to significantly rearchitect their HW designs for ray tracing compared to Turing, or face a huge performance cliff if, god forbid, developers decide to write custom ray traversal shader programs ...

A few things here:
DXR is the standard. Hardware vendors are only mapping the API to their hardware. And DXR is for real-time graphics. A pure shader compute solution is still too slow for games.
nVidia's RT Cores do every calculation after a ray is launched until the hit/miss result. This frees the SM from additional work. This is far more advanced than AMD's approach, where the compute units are still needed to calculate certain operations.

And this is the biggest difference. nVidia knew in advance that Turing would be the beginning of real-time raytracing in games, so they designed Turing specifically for this workload. On the other hand, AMD was only interested in rasterizing. That's the reason why Navi has so many transistors without having the same feature set as, for example, TU106 (RTX 2070).
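
To illustrate that first point - the API surface is identical no matter whose silicon is underneath - here is a minimal sketch in C++/D3D12 (purely illustrative, not lifted from any particular engine) of the capability check an application makes before enabling DXR:

```cpp
// Minimal sketch (illustrative only): the DXR capability check an application makes.
// Nothing in this path is vendor-specific - the driver decides how the work maps
// onto its own hardware (RT cores, TMU intersection units, plain shaders, etc.).
#include <windows.h>
#include <d3d12.h>

bool SupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options, sizeof(options))))
        return false;

    // Tier 1.0 or higher means the runtime/driver exposes DXR, regardless of
    // where traversal and intersection actually execute.
    return options.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```

The same check (and the DispatchRays path behind it) runs unchanged whether the GPU underneath is from Nvidia, AMD, or Intel.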
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
A few things here:
DXR is the standard. Hardware vendors are only mapping the API to their hardware. And DXR is for real-time graphics. A pure shader compute solution is still too slow for games.
nVidia's RT Cores do every calculation after a ray is launched until the hit/miss result. This frees the SM from additional work. This is far more advanced than AMD's approach, where the compute units are still needed to calculate certain operations.

And this is the biggest difference. nVidia knew in advance that Turing would be the beginning of real-time raytracing in games, so they designed Turing specifically for this workload. On the other hand, AMD was only interested in rasterizing. That's the reason why Navi has so many transistors without having the same feature set as, for example, TU106 (RTX 2070).
You got it so backwards that I wouldn't be surprised if you drove in reverse everywhere.
If anything, it was more of an undercut from Nvidia, because the design ideas and parameters had to be set in stone for the new consoles at least 3 years ago. Don't think for a moment that MS would implement DXR so fast just for a laughable Turing RTX launch with literally no playable games. It was always about the next-gen consoles (easy conclusion in hindsight).

I would like to thank Nvidia for their efforts in providing valuable experience with some parts of the API in the previous months.
 

uzzi38

Platinum Member
Oct 16, 2019
2,747
6,657
146
This is far more advanced than AMD's approach, where the compute units are still needed to calculate certain operations.

AMD currently does not support RTRT in any form, and RDNA2 has dedicated hardware for it. So you're kind of wrong on both counts. See the aforementioned "both companies must have been planning this for years" point I brought up as well, for good measure.
 
  • Like
Reactions: Glo.

railven

Diamond Member
Mar 25, 2010
6,604
561
126
I'd say they already hit that mark against Intel, albeit with the help of their process tech foibles.

The R&D spending and engineering focus was only more recently reallocated to GPU efforts after the initial Zen push, so we may just be seeing the train warming up in the station rather than roaring down the track for now.

Their execution against Intel is amazing, and it shows what AMD can do with strong products. Which is why I'm baffled by how they handled Navi. Since Rory Read, AMD has wanted to move away from being the "value brand", but all their actions afterwards kept them squarely in the "value brand", and bitmining basically glued them there for a while due to second-hand market sales cannibalizing new ones.

They're stuck between a rock and a hard place. AMD had a great product to compete with NV's product stack, and with the price increases now being "accepted", the driver issues are throwing more wrenches into their progress.

One of the reasons NV was able to get GK104 where it landed was Tahiti having bad drivers, issues that weren't fixed for months. It's hard to get buyers to buy a product at a similar price to a competitor's if you're getting bad PR from places such as r/AMD over bad drivers. They had to sticky a PSA directly from AMD telling users to disable features they bought Navi specifically for. That makes the price hike harder to swallow for some users/buyers, and you can read about people just jumping ship.

Ryzen had some hardware/driver issues at launch, but the lower price tag made these things easier to accept/ignore.

It's way too early to argue that Nvidia's implementation of raytracing will become the standard just like how CUDA became the standard for GPGPU programming. CUDA went years without an effective response against it, unlike raytracing, which will almost certainly see a competing implementation less than 2 years after Nvidia's introduction ...

Also, each vendor likely has a very different HW implementation of raytracing under the hood, so don't get the idea that either AMD or Intel will follow the DXR specification as-is without any of their own unique traits. Based on AMD's patent alone, they do hardware-accelerated intersection testing in the TMUs and handle the BVH traversal in the shaders. Intel looks to be seriously pushing the idea of programmable traversal shaders in one of their research papers, so it's very likely that they're customizing their shader/ISA design to handle this as efficiently as possible. As for Nvidia, from extensive analysis their RT cores do fixed-function BVH traversal while the ray intersection tests are done on the tensor/shader cores ...

Intel's possible implementation and NV's current implementation look to be polar opposites of each other, with wildly different performance characteristics depending on the workload design ...

Each vendor's design comes with its own set of strengths/limitations, and by no means is Nvidia's implementation impervious, as you seem to imply it is. With AMD's RT implementation, doing customized intersection test shader programs would be a very bad idea since they couldn't use the intersection units built into their TMUs. Intel's traversal shader concept is good for reducing bandwidth consumption, but there is some overhead involved with the additionally generated shader invocations. With Nvidia, ray traversal is totally fixed-function, so traversal shaders would be a bad idea on their hardware since they'd have to emulate them without being able to use their RT cores ...

Come the DXR 2.0 specification, if Microsoft chooses to standardize Intel's traversal shaders it could very well end badly for Nvidia, because then they'd be forced to significantly rearchitect their HW designs for ray tracing compared to Turing, or face a huge performance cliff if, god forbid, developers decide to write custom ray traversal shader programs ...

I'm not sure you understood my post, as I addressed the things you said directly in it - such as the short market lead. I don't for a second believe RTX will become any standard or de facto winner. My opinion is simple: by getting RTX out so soon after the standard was announced, Nvidia ensured developers were already given kits and hardware to work with. By the time AMD/Intel get their hardware/spec out, NV may have gotten RTX extensions rolled into engines such as Unreal 4 or Unity, which would create headaches for AMD/Intel in the future, as devs (you can pick whatever reason) often choose the easier path.
 

RetroZombie

Senior member
Nov 5, 2019
464
386
96
So true. NV could make any AMD card invalid with Ampere. I mean, AMD can barely manage to keep up in performance/$ with a node advantage. Once NV is on 7nm(+), they could force AMD into making extreme price cuts.
And what price cuts did nvidia make?
They slapped "Super" in front of their cards' names, and here it is, the same thing at the same price - so lovely of them.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,689
2,584
136
Regarding disbelief that the consoles are going to launch at such high spec points:

Note that current 7nm products are paying through the nose for silicon, and total 7nm capacity has been relatively low and shared between AMD and all the high-end phone manufacturers (esp. Apple). TSMC's advertised 7nm capacity just doubled, and Apple is moving off the node to 5nm for its next flagship. Also, N7+ gets about a 15% density improvement over N7.
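
For a rough sense of what that 15% figure buys (back-of-the-envelope only, not a claim about any specific die):

$$A_{\mathrm{N7+}} \approx \frac{A_{\mathrm{N7}}}{1.15} \approx 0.87\,A_{\mathrm{N7}}$$

i.e. the same logic in roughly 87% of the area, or about 15% more logic in the same die size.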

What I'm getting at here is that the expectation of console costs should not be based on comparing costs with current products. What would be cost-effective to put in a console that launches today, and what will be cost-effective to put in a console when they actually launch, are probably going to be very different things.

I very much agree with the sentiment that it would not be smart to purchase a GPU today with the intent to use it for future cross-platform titles. By the time the consoles actually launch, the hardware in them is probably not going to be equivalent to the high-end, or even "upper middle class", GPUs that are on the market at the time - and those will be very different beasts from what's on the market at the relevant price points right now.
 
  • Like
Reactions: GodisanAtheist

RetroZombie

Senior member
Nov 5, 2019
464
386
96
This guy rambles for half an hour about how it's impossible for Navi 21 to have 2x the speed of Navi 10:
I don't think Navi 21 will be 2x the speed of Navi 10

I was expecting that by the end of the video my doubts would have dissipated and it would be obviously impossible, but after all his math plus my own, he has me convinced that it will almost certainly be 2x faster. But again, it depends on when it will be released.
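
For reference, the kind of back-of-the-envelope math being referred to, using the rumored 80 CU configuration against Navi 10's 40 CUs (the clock and scaling factors below are assumptions, not leaked figures):

$$\underbrace{2.0}_{\text{CU ratio}} \times \underbrace{\sim 1.05}_{\text{clock uplift}} \times \underbrace{\sim 0.90\text{--}0.95}_{\text{scaling efficiency}} \approx 1.9\text{--}2.0\times$$

So 2x is plausible on paper if the CU count doubles and clocks at least hold, though real-world scaling rarely hits 100%.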
 

GodisanAtheist

Diamond Member
Nov 16, 2006
8,501
9,931
136
Regarding disbelief that the consoles are going to launch at such high spec points:

Note that current 7nm products are paying through the nose for silicon, and total 7nm capacity has been relatively low and shared between AMD and all the high-end phone manufacturers (esp. Apple). TSMC's advertised 7nm capacity just doubled, and Apple is moving off the node to 5nm for its next flagship. Also, N7+ gets about a 15% density improvement over N7.

What I'm getting at here is that the expectation of console costs should not be based on comparing costs with current products. What would be cost-effective to put in a console that launches today, and what will be cost-effective to put in a console when they actually launch, are probably going to be very different things.

I very much agree with the sentiment that it would not be smart to purchase a GPU today with the intent to use it for future cross-platform titles. By the time the consoles actually launch, the hardware in them is probably not going to be equivalent to the high-end, or even "upper middle class", GPUs that are on the market at the time - and those will be very different beasts from what's on the market at the relevant price points right now.

- With the big players moving off of 7nm DUV, I figure AMD will have tons of spare capacity to rebadge Navi 10 and 14 down a tier into much more reasonable, high-volume price points and slot the much lower-volume 7nm EUV Navi 2x parts above them.

I fully expect the 5700XT to make its return as the 6600XT @ $300 or less, and the 5500XT as the 6500/6300XT for $150 or less. Basically where these parts should have been in the first place.

The 80/60 CU behemoths can then come in as the 6900/6800/some unique name above them.
 

soresu

Diamond Member
Dec 19, 2014
4,244
3,748
136
Their execution against Intel is amazing, and it shows what AMD can do with strong products. Which is why I'm baffled by how they handled Navi. Since Rory Read, AMD has wanted to move away from being the "value brand", but all their actions afterwards kept them squarely in the "value brand", and bitmining basically glued them there for a while due to second-hand market sales cannibalizing new ones.

They're stuck between a rock and a hard place. AMD had a great product to compete with NV's product stack, and with the price increases now being "accepted", the driver issues are throwing more wrenches into their progress.

One of the reasons NV was able to get GK104 where it landed was Tahiti having bad drivers, issues that weren't fixed for months. It's hard to get buyers to buy a product at a similar price to a competitor's if you're getting bad PR from places such as r/AMD over bad drivers. They had to sticky a PSA directly from AMD telling users to disable features they bought Navi specifically for. That makes the price hike harder to swallow for some users/buyers, and you can read about people just jumping ship.
Simple really - for years the internal focus was on Zen design and engineering through to completion, even after the initial Ryzen release, until they had a nice clear roadmap and path of execution for years down the line.

Unfortunately that focus was only much more recently switched over to GPU efforts - I believe the departure of Koduri also hurt the driver division; he built it up considerably, and it may be that there was no clear succession in place for his departure (it still seems uncertain whether he was pushed out or left of his own accord).

As to Tahiti problems vs Kepler, it wasn't something I noticed at all with my Pitcairn 7870 board at the time, so it may have been specific to Tahiti 79xx itself.

My previous GPU board was a 5770 Juniper which had considerable problems with its release BIOS/firmware, causing system reboots at random moments - far worse than driver problems in my opinion, considering I had no end of trouble finding out how to flash a new BIOS version when it finally landed.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
- With the big players moving off of 7nm DUV, I figure AMD will have tons of spare capacity to rebadge Navi 10 and 14 down a tier into much more reasonable, high-volume price points and slot the much lower-volume 7nm EUV Navi 2x parts above them.

I fully expect the 5700XT to make its return as the 6600XT @ $300 or less, and the 5500XT as the 6500/6300XT for $150 or less. Basically where these parts should have been in the first place.

The 80/60 CU behemoths can then come in as the 6900/6800/some unique name above them.
There is a possibility that with the freed-up wafer capacity we will see much more pocket-friendly prices during 2020.

Don't make assumptions about what you will see, though. We have been disappointed with pricing way too many times this generation.
 
  • Like
Reactions: Bouowmx

Krteq

Golden Member
May 22, 2015
1,010
730
136
We will see an RDNA2 card (Navi 21, most probably) in action at CES... but no performance figures will be revealed (obviously - still about 6 months to release).

[attached image - source: Twitter - Inside Man]
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
A few things here:
DXR is the standard. Hardware vendors are only mapping the API to their hardware. And DXR is for real-time graphics. A pure shader compute solution is still too slow for games.
nVidia's RT Cores do every calculation after a ray is launched until the hit/miss result. This frees the SM from additional work. This is far more advanced than AMD's approach, where the compute units are still needed to calculate certain operations.

And this is the biggest difference. nVidia knew in advance that Turing would be the beginning of real-time raytracing in games, so they designed Turing specifically for this workload. On the other hand, AMD was only interested in rasterizing. That's the reason why Navi has so many transistors without having the same feature set as, for example, TU106 (RTX 2070).

Again, I reiterate that DXR is a vendor-agnostic API ...

RT cores do NOT do everything that is needed for ray tracing. The RT cores seem to only do BVH traversal and do not aid in ray intersection tests, since those are almost certainly done on the shaders. AMD took a different approach to implementing DXR by hardware-accelerating the ray intersection tests and doing the BVH traversal on the shaders ...

It's not just a matter of compute vs fixed function; it's also a matter of where to accelerate said functionality ...

As far as touting Turing's advanced feature set goes, that does not matter in the face of Microsoft, who will solely decide what hardware features to expose, and there's absolutely nothing Nvidia can do about it since only Microsoft effectively dictates those terms. Turing's RT cores could very well become paperweights overnight if Microsoft wants to standardize Intel's traversal shaders ...
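
To make the traversal/intersection split concrete, here is a minimal software sketch in plain C++ (purely illustrative - not any vendor's actual implementation) of the two stages: walking the BVH and testing the ray against the primitives in a leaf. Which of these two loops a vendor moves into fixed-function hardware is exactly the design difference discussed above.

```cpp
// Illustrative only: a toy closest-hit trace showing the two separable stages of
// DXR-style ray tracing - BVH traversal and ray/primitive intersection testing.
#include <algorithm>
#include <utility>
#include <vector>

struct Ray  { float o[3]; float d[3]; float tMax; };
struct AABB { float lo[3]; float hi[3]; };
struct Node {                    // one BVH node, stored in a flat array
    AABB bounds;
    int  left = -1, right = -1;  // child indices; -1 marks a leaf
    int  firstPrim = 0, primCount = 0;
};

// Traversal-stage helper: ray vs bounding-box "slab" test, used while walking the tree.
static bool HitsAABB(const Ray& r, const AABB& b)
{
    float tNear = 0.0f, tFar = r.tMax;
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / r.d[a];              // assumes a non-degenerate direction
        float t0 = (b.lo[a] - r.o[a]) * inv;
        float t1 = (b.hi[a] - r.o[a]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tNear = std::max(tNear, t0);
        tFar  = std::min(tFar, t1);
        if (tNear > tFar) return false;
    }
    return true;
}

// Intersection-stage placeholder: in a real tracer this is a ray/triangle (or custom) test.
static bool HitsPrimitive(const Ray&, int /*primIndex*/, float& tHit)
{
    tHit = 0.0f;
    return false;
}

// Walk the BVH with an explicit stack; prune subtrees the ray misses, and run the
// intersection tests only on the primitives of leaves that survive the traversal.
int TraceClosestHit(const std::vector<Node>& nodes, const Ray& ray)
{
    float closestT = ray.tMax;
    int   closestPrim = -1;
    int   stack[64];
    int   top = 0;
    stack[top++] = 0;                           // start at the root

    while (top > 0) {
        const Node& n = nodes[stack[--top]];
        if (!HitsAABB(ray, n.bounds))           // traversal stage
            continue;
        if (n.left < 0) {                       // leaf: intersection stage
            for (int i = 0; i < n.primCount; ++i) {
                float t;
                if (HitsPrimitive(ray, n.firstPrim + i, t) && t < closestT) {
                    closestT = t;
                    closestPrim = n.firstPrim + i;
                }
            }
        } else {                                // interior node: keep walking the tree
            stack[top++] = n.left;
            stack[top++] = n.right;
        }
    }
    return closestPrim;                         // -1 means the ray missed everything
}
```

Per the discussion above: Turing reportedly pulls the traversal loop into the RT cores, AMD's patent puts the HitsPrimitive-style tests into the TMUs with the loop on the shaders, and Intel's traversal-shader proposal would make the loop itself programmable.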
 

eek2121

Diamond Member
Aug 2, 2005
3,472
5,147
136
One thing you won't see is cheaper pricing. GDDR6 prices are expected to go up this year due to demand. Having a bunch of consoles using GDDR6 probably isn't going to help matters any.

As far as Navi and all that NVIDIA nonsense go: if AMD released RDNA with just the improvements from 7nm EUV alone, they could likely squeeze in more execution units as well as bump up clocks by around 10%. I imagine they could hit 30-40% faster pretty easily. However, I don't expect much about RDNA2 at CES beyond POSSIBLY a teaser (at most). They still haven't even launched the 5600XT yet, and it makes sense to wait for NVIDIA to drop their own 7nm design at this point. I also think, given the rumored 9-12 TFLOPS on the console parts, that 'next generation RDNA' parts are coming this year, and they will be SIGNIFICANTLY faster.
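
A quick sanity check on that figure, assuming the denser node buys roughly 15-20% more execution units in the same die area on top of the ~10% clock bump mentioned (both numbers are assumptions, not specs):

$$1.15 \times 1.10 \approx 1.27 \qquad\qquad 1.20 \times 1.10 \approx 1.32$$

i.e. roughly 27-32%, right around the lower end of that 30-40% estimate.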

Rest assured though, AMD CAN execute. They HAVE executed in the past. If they had the desire to do so currently, they could EASILY release a 2080 (non-ti) competitor. They just choose not to. Remember also that they haven't had much time to reorganize things since Raja left.
 

Veradun

Senior member
Jul 29, 2016
564
780
136
it makes sense to wait for NVIDIA to drop their own 7nm design at this point.

I disagree here: when they have new products ready, they will deliver. It might make some sense to wait a week or two when in doubt about pricing, not months, and Nvidia's 7nm stuff is still a long way out.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
You got it so backwards that I wouldn't be surprised if you drove in reverse everywhere.
If anything, it was more of an undercut from Nvidia, because the design ideas and parameters had to be set in stone for the new consoles at least 3 years ago. Don't think for a moment that MS would implement DXR so fast just for a laughable Turing RTX launch with literally no playable games. It was always about the next-gen consoles (easy conclusion in hindsight).

I would like to thank Nvidia for their efforts in providing valuable experience with some parts of the API in the previous months.
I don't think ray tracing was in the consoles' plans; Nvidia blindsided them by producing a ray tracing implementation that worked, which no one expected so soon. After that happened, the console makers demanded it (they couldn't risk the other console maker having it and them missing out). AMD has been working on putting it in since then. I suspect we already have the architecture originally planned for the consoles - RDNA.
 
Last edited:
  • Haha
Reactions: lobz

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
I don't think ray tracing was in the consoles' plans; Nvidia blindsided them by producing a ray tracing implementation that worked, which no one expected that soon. As soon as that happened, the console makers demanded it (they couldn't risk the other console maker having it and them missing out). AMD has been working on putting it in since then.

Nope.

In order for RTX to work, it needs MS DX12 and DXR. That means MS and AMD knew about NVIDIA's intentions before the RTX series of GPUs was released.
Just to remind you all: NVIDIA announced RTX the same day that MS announced DXR.

In conjunction with Microsoft’s new DirectX Raytracing (DXR) API announcement, today NVIDIA is unveiling their RTX technology, providing ray tracing acceleration for Volta and later GPUs. Intended to enable real-time ray tracing for games and other applications, RTX is essentially NVIDIA's DXR backend implementation.


Also on the same day, AMD announced their real-time RT for developers.

First disclosed this evening with teaser videos related to a GDC presentation on Unity, today AMD is announcing two developer-oriented features: real-time ray tracing support for the company's ProRender rendering engine, and Radeon GPU Profiler 1.2.

Though Microsoft’s DirectX Raytracing (DXR) API and NVIDIA’s DXR backend “RTX Technology” were announced today as well, the new ProRender functionality appears to be largely focused on game and graphical development as opposed to an initiative angled for real-time ray tracing in shipping games.
 
Last edited:

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
I don't think ray tracing was in the consoles' plans; Nvidia blindsided them by producing a ray tracing implementation that worked, which no one expected so soon. After that happened, the console makers demanded it (they couldn't risk the other console maker having it and them missing out). AMD has been working on putting it in since then. I suspect we already have the architecture originally planned for the consoles - RDNA.
I had to laugh at this, sorry man.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
8,501
9,931
136
I don't think ray tracing was in the consoles' plans; Nvidia blindsided them by producing a ray tracing implementation that worked, which no one expected so soon. After that happened, the console makers demanded it (they couldn't risk the other console maker having it and them missing out). AMD has been working on putting it in since then. I suspect we already have the architecture originally planned for the consoles - RDNA.

- Knowing AMD, I think the biggest problem for them was actually the software/driver side of things. NV has a beefy developer relations program and a lot of experience working vendor-specific stuff into games, but AMD really doesn't (except when Raja was around).

I think we got a situation where Navi 10 was a mid-range chip entering the high-end space (not entirely unlike the GTX 680, as was being argued earlier), and AMD just ran with it while working through their RT software implementation in Windows.

PS5 Dev kits are out there, so AMD must have offered up some sort of RT supporting silicon already.
 

lifeblood

Senior member
Oct 17, 2001
999
88
91
AMD is late to the RT game. Of course, its GPU division has been late to almost everything for the past decade, so no surprise. It's possible they were blindsided by Nvidia with RT, or more likely they knew about it but just didn't have much in the way of resources to respond to it effectively. So many times in business (and politics) we've seen a company notice things developing but be so focused on, or overwhelmed by, other things that it didn't respond in a timely manner. AMD has been struggling not to go under, so it's no surprise their RT response is late and (probably) inferior.
 
  • Like
  • Haha
Reactions: CHADBOGA and lobz

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
AMD is late to the RT game. Of course, its GPU division has been late to almost everything for the past decade, so no surprise. It's possible they were blindsided by Nvidia with RT, or more likely they knew about it but just didn't have much in the way of resources to respond to it effectively. So many times in business (and politics) we've seen a company notice things developing but be so focused on, or overwhelmed by, other things that it didn't respond in a timely manner. AMD has been struggling not to go under, so it's no surprise their RT response is late and (probably) inferior.
XD

How can any company care about the desktop market when this is the kind of perception they're up against?
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
AMD is late to the RT game. Of course, its GPU division has been late to almost everything for the past decade, so no surprise. It's possible they were blindsided by Nvidia with RT, or more likely they knew about it but just didn't have much in the way of resources to respond to it effectively. So many times in business (and politics) we've seen a company notice things developing but be so focused on, or overwhelmed by, other things that it didn't respond in a timely manner. AMD has been struggling not to go under, so it's no surprise their RT response is late and (probably) inferior.
Do you attend the same stand-up comedy seminar as Dribble?
 
  • Like
Reactions: Glo.

uzzi38

Platinum Member
Oct 16, 2019
2,747
6,657
146
AMD is late to the RT game. Of course, its GPU division has been late to almost everything for the past decade, so no surprise. It's possible they were blindsided by Nvidia with RT, or more likely they knew about it but just didn't have much in the way of resources to respond to it effectively. So many times in business (and politics) we've seen a company notice things developing but be so focused on, or overwhelmed by, other things that it didn't respond in a timely manner. AMD has been struggling not to go under, so it's no surprise their RT response is late and (probably) inferior.

Did I not already address how it is physically impossible for AMD to have been blindsided by RTRT while also releasing their own solution a mere 2 years after Nvidia released theirs?

How many times will this be brought up by people with a complete and utter lack of understanding of how long in advance roadmaps are set and how long it takes to work on something like this?

If even Nvidia had to create a solution from scratch now, with a 2-year time limit, they would be fully incapable of doing so. I don't even need to bother discussing whether AMD would be capable of it.

Also, speaking of late, where's Turing+1? Not anywhere on its way to consumers :)
 
Last edited: