
Does the RTX series create an opening for AMD?

The opening might be with Intel. They can wait with their architecture and see which way the wind blows with NV's ray tracing push.
 
> The opening might be with Intel. They can wait with their architecture and see which way the wind blows with NV's ray tracing push.

While it will be great to have Intel as a third player, they aren't even showing up till 2020, which is about when you would expect a 3000 series from NVidia on a refined, higher-yielding 7nm process.

To expect Intel to jump from disappointing IGPs to having a serious competitor to the RTX 3000 series seems like a fairy tale. I definitely think the odds are higher that AMD will have something good by then than that Intel will go from crappy IGPs to outdoing AMD and taking advantage of some opening at NVidia.

Also, I really think the opening is a small one, mainly against the RTX 2000 series. NVidia may have really pushed die size to get that RT HW into this generation, but the proportional penalty for that RT HW will likely be smaller when working with the bigger transistor budget of a refined 7nm. That makes the opening a lot less significant, and there is also some inevitability to ray tracing.

Basically the farther forward you go, the less overhead tax you pay for RT HW, and the more you need that RT HW. So IMO, the opening is really gone by 2020.

Only AMD can have any chance of exploiting it in 2019.
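The "shrinking RT tax" argument above can be put in rough numbers. Everything below is made up purely to show the proportionality, not real die measurements:

```python
# Illustration of the shrinking RT tax: if the die area spent on RT hardware
# stays roughly fixed in logic terms while the node gets denser, the
# proportional cost of including RT shrinks each generation.
# All figures are hypothetical, for illustration only.

def rt_tax(total_area_mm2: float, rt_area_mm2: float) -> float:
    """Fraction of the die 'wasted' on RT hardware."""
    return rt_area_mm2 / total_area_mm2

# Hypothetical 12nm part: big die, a fixed chunk spent on RT cores.
gen1 = rt_tax(total_area_mm2=750, rt_area_mm2=90)   # 12% of the die

# Same RT block on a node with double density: its area halves, while the
# designer can still afford a similar overall die size.
gen2 = rt_tax(total_area_mm2=750, rt_area_mm2=45)   # 6% of the die

print(f"gen1 RT tax: {gen1:.1%}")
print(f"gen2 RT tax: {gen2:.1%}")
```

So the overhead tax halves with each density doubling, even before counting the growing number of games that actually use the RT hardware.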
 
Well, I mean that Intel is supposedly developing a new GPU, and they have time to modify it and add some sort of RT core if RT catches on quickly.
Whatever AMD has in the works is likely too far along to change course.

Theoretically with multi-gpu under DX12, one card could do the RT grunt work.

Maybe someone could come up with an RT co-processor GPU for low cost?

Of course if RT doesn't become popular, then it's a moot point.
 
> Well, I mean that Intel is supposedly developing a new GPU, and they have time to modify it and add some sort of RT core if RT catches on quickly.
> Whatever AMD has in the works is likely too far along to change course.
>
> Theoretically with multi-gpu under DX12, one card could do the RT grunt work.
>
> Maybe someone could come up with an RT co-processor GPU for low cost?
>
> Of course if RT doesn't become popular, then it's a moot point.
But, but, but.

Aren't we told it took Nvidia 10 years of R&D to get this out? Now Intel would do it in less than two. For sure, this release is causing a lot of cognitive dissonance.
 
> Well, I mean that Intel is supposedly developing a new GPU, and they have time to modify it and add some sort of RT core if RT catches on quickly.
> Whatever AMD has in the works is likely too far along to change course.

It looks to me that the central argument of this thread was that, since NVidia is devoting so much die area to ray tracing, AMD focusing on rasterization could catch them out on standard games: "... focusing performance increases in standard rasterized tasks..."

AMD could release a card that didn't waste die area on any specific RT HW, and thus be better at standard games.

Not that there was an opening for AMD (or Intel) to catch NVidia on Ray Tracing itself.

As far as Intel's experience goes, that was running some x86 algorithm on Intel CPUs, and then Larrabee (which was also x86).

I really doubt much of that can hold a candle to dedicated RT HW and a DL network to denoise.
 
It would be nice to have an HD4870 moment again.
Or, even better, an HD5870 moment in Q1 2019.

It doesn't sound like there's going to be a gaming version of Vega 20, and if there is it'd probably have to be very expensive, likely more than the 2080's fake MSRP of $699 which could be real by then.

Navi might be interesting, but it sounds like at least in 2019 it's only targeting Vega 64-type performance. That might work if they are targeting $300 or less for it.
 
I have been waiting for OctaneRender support on AMD for almost a decade. It is so sad: they apparently produce compute-heavy cards like Vega, with even bigger potential compute performance than Nvidia's gaming-centered stuff, yet you can never use it because AMD has been incapable of working with the Octane devs to help them release a working product. See this:

https://render.otoy.com/forum/viewtopic.php?f=9&t=66456

Safe to say, I lost all hope in AMD.
Meanwhile, I can get Turing with its RTX "gimmick", which will probably speed up Octane 5x to 8x...

https://www.youtube.com/watch?v=6l2vQ8eRbiY&feature=youtu.be

Those Nvidia prices kill me, but the only other choice I have is not to buy, because AMD simply won't provide an alternative.
 
If AMD pushed the biggest chip they could make profitably at $250, with no new features that require new work from devs, on 12nm, they could absolutely take advantage of this opening. They would have had to design it years ago, though, due to lead times.

A super GDDR6 Polaris with Vega-level features on 12nm, as big as can be made with a respectable margin at $250, basically. A pure sweet-spot GPU. It would sell serious volume, especially if they keep pushing clock speed like they did on Vega. Basically a 2018 version of the 4870/4850 stack.
 
The "designed years ago" part is the problem.

The only magic they could pull out of their hat, maybe, is the console APUs as a separate dGPU.

Apparently the PS4 Pro is 2304:64:256-bit and the Xbox One X is 2560:32:384-bit. Even if clocked at desktop speeds of 1.3GHz+, unfortunately either would just be a modest bump over the RX 580's 2304:32:256-bit.
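As a sanity check on the "modest bump" claim, peak FP32 throughput for those shader counts can be worked out. The shader counts are the ones quoted above; the 1.3 GHz clock is the assumed "desktop speed" from the post, not the consoles' real clocks:

```python
# Peak FP32 throughput = shaders * 2 ops/clock (one FMA counts as two FLOPs)
# * clock. Shader counts from the configs quoted above; the 1.3 GHz clock is
# an assumed desktop-style clock, not what the consoles actually run at.

def peak_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000.0

clock = 1.3  # GHz, assumed
for name, shaders in [("PS4 Pro GPU", 2304), ("Xbox One X GPU", 2560), ("RX 580", 2304)]:
    print(f"{name}: {peak_tflops(shaders, clock):.2f} TFLOPS")
```

At equal clocks the Xbox One X layout is only about 11% ahead of an RX 580 in raw shader throughput (2560/2304 ≈ 1.11), which is exactly the "modest bump" above.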
 
You'd think they have some sort of Polaris successor in the pipeline that is close to the ideal for this opening, if only because $200-250 has been the sweet spot for many years now and Polaris is starting to get old. My guess is that it might still be too small. A 980-style ~400mm2 die with a modest-sized bus, but at a modest price instead of $550, would be a killer GPU. All rasterization hardware. You could have a fancy compute-plus-the-fixins' card line too, but I think there is an opening for a no-bells-and-whistles, straight-up rasterization card.
 
> It doesn't sound like there's going to be a gaming version of Vega 20, and if there is it'd probably have to be very expensive, likely more than the 2080's fake MSRP of $699 which could be real by then.
>
> Navi might be interesting, but it sounds like at least in 2019 it's only targeting Vega 64-type performance. That might work if they are targeting $300 or less for it.

Take Vega-64
Port it to 7nm TSMC (40% speed increase at same power + double the density)

And you get the following,

7nm Vega-64 with 30% higher clocks at half the die size.
= close to RTX2080 performance with a die size of only ~250mm2

Sell it at $599, end of story.
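The back-of-the-envelope behind that can be written out. The +30% clock and 2x density are the post's own assumptions; the ~495 mm² starting point for 14nm Vega 10 is an assumed figure added here:

```python
# Back-of-the-envelope for the proposed 7nm Vega port, using the post's own
# scaling assumptions (2x density, +30% clocks). The 495 mm^2 starting die
# size for 14nm Vega 10 is an assumed figure, not from the post.

vega10_die_mm2 = 495        # assumed 14nm Vega 10 die size
density_scaling = 2.0       # post's assumption for TSMC 7nm
clock_uplift = 1.30         # post's assumption

die_7nm = vega10_die_mm2 / density_scaling
print(f"7nm die: ~{die_7nm:.0f} mm^2")               # lands on the ~250 mm^2 claim
print(f"throughput vs Vega 64: ~{clock_uplift:.2f}x")
```

So the ~250 mm² figure falls straight out of halving the die, and the performance claim rests entirely on the clock uplift, since the shader count is unchanged.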
 
> Take Vega-64
> Port it to 7nm TSMC (40% speed increase at same power + double the density)
>
> And you get the following,
>
> 7nm Vega-64 with 30% higher clocks at half the die size.
> = close to RTX2080 performance with a die size of only ~250mm2
>
> Sell it at $599, end of story.
And Nvidia's 7nm will quickly come out in response.
 
> And Nvidia's 7nm will quickly come out in response.

No, it will not.
First, because there will not be a lot of 7nm volume in early 2019 to satisfy NV's sales.
Second, NV will not launch a new GPU earlier than September 2019.

If AMD could release a 7nm Vega in Q1 2019, they would have a window of 6-9 months.
 
> If AMD could release a 7nm Vega in Q1 2019, they would have a window of 6-9 months.
But how long ago would AMD have had to have the product design in the pipeline, if tape-out was to occur in time to take rasterization-card market share from Nvidia?
 
I think AMD is finished competing with Nvidia in the $300 USD+ market. The only architecture they have that is competitive is Polaris. They need to get Polaris on 12nm or 7nm with higher clocks to compete with the 1050/1060/2060. Vega was absolute trash, and everyone who adopted one was absolutely ripped off by AMD.
 
> But how long ago would AMD have had to have the product design in the pipeline, if tape-out was to occur in time to take rasterization-card market share from Nvidia?

It would be shocking if AMD didn't have something in the pipeline for 2019.

The question will be how competitive it is.
 
> But how long ago would AMD have had to have the product design in the pipeline, if tape-out was to occur in time to take rasterization-card market share from Nvidia?
It's obvious that AMD wanted to be early on 7nm; their CPUs are going that route. Is it too much of a stretch to think that all the R&D for 7nm circuitry would also be used by the GPU division? Traditionally they were early to new nodes, most times leading Nvidia, and they must certainly know that this is one way to reduce the lead Nvidia has in architecture, which by the way has existed for several years now. This is not too outrageous to imagine.
 
> It looks to me that the central argument of this thread was that, since NVidia is devoting so much die area to ray tracing, AMD focusing on rasterization could catch them out on standard games: "... focusing performance increases in standard rasterized tasks..."
>
> AMD could release a card that didn't waste die area on any specific RT HW, and thus be better at standard games.
>
> Not that there was an opening for AMD (or Intel) to catch NVidia on Ray Tracing itself.
>
> As far as Intel's experience goes, that was running some x86 algorithm on Intel CPUs, and then Larrabee (which was also x86).
>
> I really doubt much of that can hold a candle to dedicated RT HW and a DL network to denoise.
It's the same thing for both of them: you can fill rooms with denoising algorithms.

I want Nvidia to clarify its gigaray claims. I feel Jensen took more than artistic license when coming up with the figure, and it's likely the 'upsampled' tensor-core result. AMD essentially built a versatile, truly asynchronous pipeline that could be repurposed. Nvidia BS'd theirs and thus needs dedicated cores.

I want an apples-to-apples detailed comparison between Nvidia's and AMD's real-time hybrid ray tracing solutions. No marketing nonsense and no cornball claims coming from the denoised/upsampled output. How much is the ray-trace portion actually processing, in raw compute numbers? If Nvidia truly were doing 10x as much processing as AMD, Jensen wouldn't shut his mouth about it. So, they likely are not. Ray-trace cores are just a bunch of ALUs in the SM that take the place of the double-precision compute portion of Volta. They're clocked and locked at the same rates as the rest of the SM. So, they're not doing anything magical in them.
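To make the denoising point concrete, here's a toy sketch (pure NumPy, not either vendor's actual denoiser): a low-sample Monte Carlo image is noisy, and even a crude box filter slashes the variance, which is why quoting post-denoise numbers flatters the raw ray throughput.

```python
import numpy as np

# Toy illustration of why denoised output flatters raw ray throughput:
# simulate a flat gray image estimated with very few rays per pixel (noisy),
# then apply a crude 3x3 box-filter "denoiser". This is NOT either vendor's
# actual denoiser, just a minimal stand-in for the idea.

rng = np.random.default_rng(0)
true_value = 0.5
samples_per_pixel = 4

# Monte Carlo estimate: average a few noisy samples per pixel.
noisy = rng.normal(true_value, 0.2, size=(64, 64, samples_per_pixel)).mean(axis=2)

# 3x3 box filter built from shifted copies (edges handled by replication).
padded = np.pad(noisy, 1, mode="edge")
denoised = sum(
    padded[dy:dy + 64, dx:dx + 64] for dy in range(3) for dx in range(3)
) / 9.0

print(f"noisy variance:    {noisy.var():.5f}")
print(f"denoised variance: {denoised.var():.5f}")  # far lower
```

On this flat scene the filter cuts pixel variance by roughly 9x, so a denoised image at N rays can look like a raw image at many times N, and a throughput figure quoted after denoising is not comparable to a raw rays-per-second number.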
 
> It's the same thing for both of them: you can fill rooms with denoising algorithms.
>
> I want Nvidia to clarify its gigaray claims. I feel Jensen took more than artistic license when coming up with the figure, and it's likely the 'upsampled' tensor-core result. AMD essentially built a versatile, truly asynchronous pipeline that could be repurposed. Nvidia BS'd theirs and thus needs dedicated cores.
>
> I want an apples-to-apples detailed comparison between Nvidia's and AMD's real-time hybrid ray tracing solutions. No marketing nonsense and no cornball claims coming from the denoised/upsampled output. How much is the ray-trace portion actually processing, in raw compute numbers? If Nvidia truly were doing 10x as much processing as AMD, Jensen wouldn't shut his mouth about it. So, they likely are not. Ray-trace cores are just a bunch of ALUs in the SM that take the place of the double-precision compute portion of Volta. They're clocked and locked at the same rates as the rest of the SM. So, they're not doing anything magical in them.
All that matters is the benchmarks on review day, and when RT games are out, tested, and analyzed, we'll see what we get in terms of image quality vs. performance. Consumers care little about what's under the hood, only the actual performance and what RT actually delivers for them. If anything is out of place, critics will have a field day and Nvidia will be savaged left and right, and they know it.
 
> All that matters is the benchmarks on review day, and when RT games are out, tested, and analyzed, we'll see what we get in terms of image quality vs. performance. Consumers care little about what's under the hood, only the actual performance and what RT actually delivers for them. If anything is out of place, critics will have a field day and Nvidia will be savaged left and right, and they know it.
What matters are raw performance numbers if I am making a compute comparison.
Performance numbers derive from what's under the hood, not from what tickles you when you play a video game.
So, Nvidia has to justify how it arrived at its gigaray figure by detailing what's under the hood.
I don't think you realize what I'm talking about, so re-read my post.

Rays are processed at a certain rate per second, just like you have TFLOP numbers for GPUs.
There are rays produced by the ray-trace pipeline, and then they are upsampled and denoised later.
Jensen could be talking about this result. If he is, then apples to apples, you also go with the upsampled and denoised result of AMD video cards, which effectively do the same foolishness. I am not interested in this, or I would have it split into another category. I want to know the raw compute number.

Nvidia has done this garbage before, so it's not above them.
A benchmark is a benchmark. No one knows what underlies the number Jensen gave; it is actually scene dependent.

No one cares about game performance except with RTX off, and how bad a hit you take with it on.
There's nothing to compare qualitatively to RTX-on output... So, it's all about performance, and that performance relates to what's under the hood.
 
> Take Vega-64
> Port it to 7nm TSMC (40% speed increase at same power + double the density)
>
> And you get the following,
>
> 7nm Vega-64 with 30% higher clocks at half the die size.
> = close to RTX2080 performance with a die size of only ~250mm2
>
> Sell it at $599, end of story.

Vega as a gaming GPU is pretty much a dead chip. Why even bother with a very expensive shrink if they have a newer GPU architecture in the pipeline? Nonsense...
 