Discussion Metro Dev: Ray Tracing Is Doable via Compute Even on Next-Gen Consoles, RT Cores Aren’t the Only Way

Det0x

Senior member
Sep 11, 2014
308
24
116
#1
Rendering Programmer Ben Archard said the following when discussing ray tracing with Eurogamer’s Digital Foundry:

It doesn’t really matter – be it dedicated hardware or just enough compute power to do it in shader units, I believe it would be viable. For the current generation – yes, multiple solutions is the way to go.

This is also a question of how long you support a parallel pipeline for legacy PC hardware. A GeForce GTX 1080 isn’t an out of date card as far as someone who bought one last year is concerned. So, these cards take a few years to phase out and for RT to become fully mainstream to the point where you can just assume it. And obviously on current generation consoles we need to have the voxel GI solution in the engine alongside the new ray tracing solution. Ray tracing is the future of gaming, so the main focus is now on RT either way.

In terms of the viability of ray tracing on next generation consoles, the hardware doesn’t have to be specifically RTX cores. Those cores aren’t the only thing that matters when it comes to ray tracing. They are fixed function hardware that speeds up the calculations specifically relating to the BVH intersection tests. Those calculations can be done in standard compute if the compute cores are numerous and fast enough (which we believe they will be on the next gen consoles). In fact, any GPU that is running DX12 will be able to “run” DXR since DXR is just an extension of DX12.
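The BVH intersection test Archard refers to is plain arithmetic, which is why it maps onto general-purpose compute at all. As a rough illustration (plain Python rather than shader code, function names my own), here is the standard slab method for testing a ray against an axis-aligned bounding box, the core operation RT cores accelerate in fixed function:

```python
def ray_aabb_intersect(origin, inv_dir, box_min, box_max):
    """Slab-method ray/AABB test.

    origin, inv_dir: ray origin and per-axis reciprocal direction (3-tuples).
    box_min, box_max: corners of the axis-aligned bounding box.
    Returns True if the ray hits the box at t >= 0.
    """
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0 = (lo - o) * inv          # entry distance along this axis
        t1 = (hi - o) * inv          # exit distance along this axis
        if t0 > t1:
            t0, t1 = t1, t0          # handle negative direction components
        t_near = max(t_near, t0)     # latest entry across all slabs
        t_far = min(t_far, t1)       # earliest exit across all slabs
    return t_near <= t_far           # slabs overlap -> the ray hits the box
```

A shader unit runs exactly this kind of multiply/compare sequence as ordinary ALU work, which is the reason DXR can fall back to compute when no dedicated intersection hardware is present.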

Other things that really affect how quickly you can do ray tracing are a really fast BVH generation algorithm, which will be handled by the core APIs; and really fast memory. The nasty thing that ray tracing does, as opposed to something like, say, SSAO, is randomly access memory. SSAO will grab a load of texel data from a local area in texture space and, because of the way those textures are stored, there is a reasonably good chance that those texels will be quite close (or adjacent) in memory. Also, the SSAO for the next pixel over will work with pretty much the same set of samples. So, you have to load far less from memory because you can cache an awful lot of data.

Working on data that is in cache speeds things up a ridiculous amount. Unfortunately, rays don’t really have this same level of coherence. They can randomly access just about any part of the set of geometry, and the ray for the next pixel could be grabbing data from an equally random location. So as much as specialised hardware to speed up the calculations of the ray intersections is important, fast compute cores and memory which lets you get at your bounding volume data quickly is also a viable path to doing real-time RT.
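The locality point can be made concrete with a toy model (all sizes and access patterns here are illustrative assumptions, not engine code): count how many distinct cache lines a batch of SSAO-style local samples touches versus a batch of ray-style scattered reads.

```python
import random

CACHE_LINE = 64          # bytes per cache line (a typical size)
TEXEL_SIZE = 4           # bytes per texel / BVH node entry (assumed)

def cache_lines_touched(addresses):
    """Count the distinct cache lines a list of byte addresses falls into."""
    return len({addr // CACHE_LINE for addr in addresses})

random.seed(0)
MEMORY_TEXELS = 1_000_000

# SSAO-style access: 64 samples clustered in one small local window.
base = random.randrange(MEMORY_TEXELS - 64)
ssao = [(base + i) * TEXEL_SIZE for i in range(64)]

# Ray-style access: 64 samples scattered anywhere in the dataset.
rays = [random.randrange(MEMORY_TEXELS) * TEXEL_SIZE for _ in range(64)]

print(cache_lines_touched(ssao))   # a handful: neighbouring texels share lines
print(cache_lines_touched(rays))   # nearly 64: almost every access is a miss
```

The clustered pattern hits a few cache lines over and over; the scattered pattern touches a fresh line almost every read, which is exactly the "nasty thing" described above.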

We’ll likely know more about the feasibility of ray tracing via compute when AMD reveals more details about the Navi GPU architecture, which is believed to have been chosen by Sony and Microsoft to power their next consoles.

Read the full interview here
 
Jun 8, 2003
14,190
205
126
#2
Funny, I was just talking about this in another thread.
Good stuff, thanks.
 

Mopetar

Diamond Member
Jan 31, 2011
4,525
458
126
#3
It's technically possible, but no one will do it. We've already seen how much NVidia cards with dedicated hardware struggle to produce acceptable frame rates even with the most powerful hardware. It's unlikely that the next generation consoles will have anything much more powerful than a 2060, if they even reach that level of performance.

The real goal for this generation of consoles will be to provide a smooth and reliable frame rate at 4K because those have become so cheap that almost anyone can afford to purchase a 4K TV now that retailers are carrying sets that are less than $300.
 

Guru

Senior member
May 5, 2017
647
235
86
#4
Next gen consoles are going to come in 2020, probably featuring a custom Navi GPU. I don't really think MS or Sony will be too keen on ray tracing. Since we saw a lot of complaints this gen about games barely even running at 30fps, I think they are more likely to focus on performance and target 60fps across a bigger array of games; at least that's what they should do if they're smart.

Barely 30fps gaming isn't cutting it anymore.
 

maddie

Platinum Member
Jul 18, 2010
2,736
697
136
#5
Next gen consoles are going to come in 2020, probably featuring a custom Navi GPU. I don't really think MS or Sony will be too keen on ray tracing. Since we saw a lot of complaints this gen about games barely even running at 30fps, I think they are more likely to focus on performance and target 60fps across a bigger array of games; at least that's what they should do if they're smart.

Barely 30fps gaming isn't cutting it anymore.
This leads to claiming that there will be no RT in consoles till 2025-2026, when the next gen arrives. Are you saying this?
 

NTMBK

Diamond Member
Nov 14, 2011
8,366
359
126
#6
This leads to claiming that there will be no RT in consoles till 2025-2026, when the next gen arrives. Are you saying this?
It's a very computationally expensive technique, and you can get much more bang-for-your-buck with raster effects... And consoles are all about bang-for-your-buck.
 

maddie

Platinum Member
Jul 18, 2010
2,736
697
136
#7
It's a very computationally expensive technique, and you can get much more bang-for-your-buck with raster effects... And consoles are all about bang-for-your-buck.
I agree that there are many computations involved, but Microsoft appears to disagree with the present popular opinions in this thread as to how this will be accomplished. They are almost openly telling everyone that their implementation of RT (XBox) will use compute units and not a specialized engine.

https://blogs.msdn.microsoft.com/directx/2018/03/19/announcing-microsoft-directx-raytracing/

Quote:
"You may have noticed that DXR does not introduce a new GPU engine to go alongside DX12’s existing Graphics and Compute engines. This is intentional – DXR workloads can be run on either of DX12’s existing engines. The primary reason for this is that, fundamentally, DXR is a compute-like workload. It does not require complex state such as output merger blend modes or input assembler vertex layouts. A secondary reason, however, is that representing DXR as a compute-like workload is aligned to what we see as the future of graphics, namely that hardware will be increasingly general-purpose, and eventually most fixed-function units will be replaced by HLSL code. The design of the raytracing pipeline state exemplifies this shift through its name and design in the API. With DX12, the traditional approach would have been to create a new CreateRaytracingPipelineState method. Instead, we decided to go with a much more generic and flexible CreateStateObject method. It is designed to be adaptable so that in addition to Raytracing, it can eventually be used to create Graphics and Compute pipeline states, as well as any future pipeline designs."
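Microsoft's "compute-like workload" framing is easy to see in what a DXR-style traversal actually does: it is a loop over node data with a small stack, with no rasteriser state involved. A toy sketch (hypothetical node layout and callback names, not the D3D12 API):

```python
def traverse_bvh(nodes, ray_hits_box, ray_hits_triangle):
    """Iterative BVH traversal: the kind of per-ray loop that RT cores,
    or a compute shader, execute. `nodes` is a flat list of dicts; leaf
    nodes carry triangle ids, inner nodes carry child indices.
    """
    hits = []
    stack = [0]                      # start at the root node
    while stack:
        node = nodes[stack.pop()]
        if not ray_hits_box(node["bounds"]):
            continue                 # bounding box miss: prune the subtree
        if "tris" in node:           # leaf: test the triangles it holds
            hits.extend(t for t in node["tris"] if ray_hits_triangle(t))
        else:                        # inner node: descend into children
            stack.extend(node["children"])
    return hits
```

Everything in this loop is ALU work and memory reads, with none of the output-merger or input-assembler state the blog post mentions, which is why DXR could be slotted onto the existing Graphics and Compute engines rather than needing a new one.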


edit:
DXR has obviously been in the pipeline for a long time now, with all the players involved in the process. My belief is that Nvidia attempted to lock in proprietary tech as the established means of doing RT. They felt rushed, which is why a lot of people are now saying the tech is too ambitious for 12nm and to wait for 7nm. The relatively slow sales of this new generation are sabotaging those efforts.
 
Last edited:

Mopetar

Diamond Member
Jan 31, 2011
4,525
458
126
#8
I agree that there are many computations involved, but Microsoft appears to disagree with the present popular opinions in this thread as to how this will be accomplished. They are almost openly telling everyone that their implementation of RT (XBox) will use compute units and not a specialized engine.
Which means that it will be slower than implementations that use dedicated hardware, and unlikely to see widespread utilization in many titles as a result. Maybe it gets used for a small number of effects so that they can claim that they're using it, but consoles tend to use mid-range GPUs at best as a result of needing to keep costs and power use low. We're talking something around a 2060 at best.

Ray Tracing is going to be little more than a "Blast Processing"-esque marketing buzzword as far as consumers are concerned. We've already seen how loath developers have been to use DX12 and build engines from the ground up to take advantage of existing features, so I'm not expecting that they'll suddenly rush to embrace ray tracing either.
 

Stuka87

Diamond Member
Dec 10, 2010
4,265
196
126
#9
I do not see this next gen of consoles using ray tracing at all. MAYBE it will be used in very specific areas, like seeing your characters reflection in a mirror or something. The costs are just too prohibitive for such a small improvement in quality.
 

maddie

Platinum Member
Jul 18, 2010
2,736
697
136
#10
Which means that it will be slower than implementations that use dedicated hardware, and unlikely to see widespread utilization in many titles as a result. Maybe it gets used for a small number of effects so that they can claim that they're using it, but consoles tend to use mid-range GPUs at best as a result of needing to keep costs and power use low. We're talking something around a 2060 at best.

Ray Tracing is going to be little more than a "Blast Processing"-esque marketing buzzword as far as consumers are concerned. We've already seen how loath developers have been to use DX12 and build engines from the ground up to take advantage of existing features, so I'm not expecting that they'll suddenly rush to embrace ray tracing either.
To be honest, I don't know if this is true as a complete statement.
"Which means that it will be slower than implementations that use dedicated hardware, and unlikely to see widespread utilization in many titles as a result"

Is it possible to introduce a modified compute pipeline with new instructions that accelerate RT? I see no reason why this can't be true. With 7nm we already have 2X improvement in efficiency and together they might make RT on general purpose units achievable. To outright claim that without specialized hardware, it will be minimal at best, is like claiming to know that all aircraft had to be pusher canard biplanes after the Wrights first flew.
 

Mopetar

Diamond Member
Jan 31, 2011
4,525
458
126
#11
To be honest, I don't know if this is true as a complete statement.
"Which means that it will be slower than implementations that use dedicated hardware, and unlikely to see widespread utilization in many titles as a result"

Is it possible to introduce a modified compute pipeline with new instructions that accelerate RT? I see no reason why this can't be true. With 7nm we already have 2X improvement in efficiency and together they might make RT on general purpose units achievable. To outright claim that without specialized hardware, it will be minimal at best, is like claiming to know that all aircraft had to be pusher canard biplanes after the Wrights first flew.
We see something like this every console cycle where manufacturers trot out something that's going to be the big next thing. We've had motion controls and VR in recent memory. There's a lot of hype surrounding these technologies, but they don't see widespread adoption or support and are mostly forgotten about by the time the next consoles come around. Seems like Ray Tracing fits into this mold. It'll get talked up a lot, a few games might introduce it sparingly, but it will largely be passed over.

It's not difficult to look at historical trends for what consoles used as far as PC-GPU equivalents, especially since they've all essentially moved to using them since the original Xbox. It's also not a stretch to claim that dedicated hardware is much faster than a software solution, even if that software solution is running on powerful hardware. I recall that at one point (this was several years ago, so I forget the exact models) iMovie on an iPad could render movies faster than one of Apple's top-end desktops could, because the SoC in the iPad had dedicated hardware for encoding and decoding h.264 video.

I don't think they will use specialized hardware either. If you look at the transistor budget NVidia spent with Turing to enable ray tracing and the so far lackluster results, neither Sony nor Microsoft can afford to implement that kind of hardware solution when the competition would just leave it out and dedicate more transistors to additional shaders. Most developers aren't going to make extensive use of it because it will hurt performance and it comes at the cost of using those shaders for something else.

If there were a silver bullet that gave us magical and massive improvements in ray tracing performance, NVidia would already be using it or something similar to it. We already know how well AMD has been able to deliver on magic drivers to enable supposed big performance improvements. Sure anything is possible, but no one is going to bet even money on something like this.
 
Mar 11, 2004
18,913
1,108
126
#12
Are people really surprised by this? Even though some people tried to claim that Microsoft released DXR because of Nvidia's RTX, I don't think that's true. In fact, I think it highlights that RTX is not nearly as specialized as Nvidia has acted like it is, and that AMD's strong compute capability could be well suited for the same thing. After seeing that RTX and Tensor Cores are integrated into the normal GPU pipeline, I think it's probably a lot like AMD's compute units. But Microsoft was clear from the outset that DXR would be capable of running on hardware that was already out from multiple vendors (even Intel).

The ray-tracing using RTX that we've seen so far is just a GameWorks type of lock-in: the only reason it's not using AMD stuff is because Nvidia black-boxed their RTX like they were doing with GameWorks, then wrote a lot of the code themselves and paid devs to use it.

It's technically possible, but no one will do it. We've already seen how much NVidia cards with dedicated hardware struggle to produce acceptable frame rates even with the most powerful hardware. It's unlikely that the next generation consoles will have anything much more powerful than a 2060, if they even reach that level of performance.

The real goal for this generation of consoles will be to provide a smooth and reliable frame rate at 4K because those have become so cheap that almost anyone can afford to purchase a 4K TV now that retailers are carrying sets that are less than $300.
I'm very skeptical how "dedicated" the RTX bits actually are. I think Nvidia is pulling the wool over average peoples' eyes by acting like they're highly specialized. I personally believe they were likely implemented by request of some big HPC customers, and Nvidia is touting them for ray-tracing when I think they're probably more just another compute unit, similar to what AMD is offering in their GPUs (in Vega). Same with the tensor units: AMD is implementing those features fairly similarly to how Nvidia is (meaning they're integrated into the traditional GPU pipeline), but Nvidia is acting like they're highly specialized, and I don't believe that to actually be true; I think it's just a further extension of the GPU compute that was already happening. And I think that's why Microsoft felt the time was ripe to do this hybrid ray-tracing: the compute capability has gotten strong enough to make it feasible (and they expect it to continue to grow, so as GPUs continue on that path they expect it'll improve).

We'll see, but I think developers will have freedom. Frankly, the non-native rendering and upscaling on the PS4 Pro and One X are quite good, so I'm not sure native resolution matters that much. I think ray-tracing could shine for some uses. I actually think it'll be especially good for VR, though it will require tricks to be worth it, or very simplified games: think something like Geometry Wars, or those visually impressive but simple puzzle games that have been on Sony's platforms (there was a fireworks one, and another on the PS4 that rotated around a cylinder-shaped thing). There have been others that are simple-ish too, but mixing ray-tracing and VR would be really impressive.

And I think you actually hit on another aspect, consoles are typically the base point, so more powerful consoles means game engines evolving and we've seen them consistently improve capabilities.

I do not see this next gen of consoles using ray tracing at all. MAYBE it will be used in very specific areas, like seeing your characters reflection in a mirror or something. The costs are just too prohibitive for such a small improvement in quality.
I think it'll depend on the developer, and I expect we'll see some indie games even that use ray-tracing. I think smaller stuff like puzzle games that are pretty simplified compared to other genres can make use of ray-tracing, and I think VR (I'm assuming we'll be seeing an upgraded PSVR for instance, and I have a hunch we might finally see a Microsoft VR headset, possibly coinciding with updated Mixed Reality PC headsets) would do well with ray-tracing. A lot of VR stuff is simplified already, and I think the ray-tracing might even potentially factor into gameplay (think games that use laser beams, where you could make a puzzle game out of it, where it won't need high framerates or lots of complexity, but that ray-tracing could make be visually very impressive).
 

Mopetar

Diamond Member
Jan 31, 2011
4,525
458
126
#13
I'm very skeptical how "dedicated" the RTX bits actually are. I think Nvidia is pulling wool over average peoples' eyes by acting like they're highly specialized. I personally believe they were likely implemented by request of some big HPC customers, and Nvidia is touting them for ray-tracing when I think they're probably more just another compute unit, that AMD is offering similarly in their GPUs (in Vega).
Why would NVidia bother including them in their mainstream consumer GPUs then? They'd be better off limiting that hardware to professional cards, which sell at much higher prices, and not including it in their consumer cards, which would lower the cost of manufacturing them.

You could perhaps argue that NVidia isn't getting good enough results from dedicated hardware in order to justify its inclusion, and I might be inclined to agree, but the idea that AMD will be able to get similar results with generalized compute hardware is just wishful thinking. If AMD were that good and NVidia that incompetent, the GPU market would look completely different than it does right now.
 
Oct 27, 2006
19,794
302
126
#14
Why would NVidia bother including them in their mainstream consumer GPUs then? They'd be better off limiting that hardware to professional cards, which sell at much higher prices, and not including it in their consumer cards, which would lower the cost of manufacturing them.

You could perhaps argue that NVidia isn't getting good enough results from dedicated hardware in order to justify its inclusion, and I might be inclined to agree, but the idea that AMD will be able to get similar results with generalized compute hardware is just wishful thinking. If AMD were that good and NVidia that incompetent, the GPU market would look completely different than it does right now.
This is VERY true. I'm a massive critic of the RTX generation, but Nvidia spent big on Tensor and dedicated RT semiconductor tech, and fitting it into a desktop GPU while keeping enough legacy performance to not actually regress in raster work meant going big. As a result, the dies are absolutely enormous, which greatly increases the costs of production, lowers yields, and means more expensive PCBs and cooling designs.

If it weren't important in the push for RT, they could have skipped it and had much more profitable, higher-selling, less cumbersome GPUs on offer. Something with the die size of the 2060 yet beyond 2080 performance, and hugely profitable even at $499.

You can look up material on the Turing ray tracing design. It's very impressive, and would be pretty nice in a 1080p/60fps world. It just seems a poor match for 1440p/144Hz/4K in 2019 and beyond, unless you're willing to accept a lot of compromises.

Tensor, on the other hand, seems entirely like a new compute optimization for the emerging business side, ham-handedly pushed onto consumer dies. Whereas ray tracing has very limited uses in professional work outside of extremely specific circumstances, machine learning and projects like OpenAI represent a great opportunity for expanding their footprint in the market.
 
Mar 11, 2004
18,913
1,108
126
#15
Why would NVidia bother including them in their mainstream consumer GPUs then? They'd be better off limiting that hardware to professional cards which sell at much higher prices and not including it in their consumer cards which lowers the cost of manufacturing them.

You could perhaps argue that NVidia isn't getting good enough results from dedicated hardware in order to justify its inclusion, and I might be inclined to agree, but the idea that AMD will be able to get similar results with generalized compute hardware is just wishful thinking. If AMD were that good and NVidia that incompetent, the GPU market would look completely different than it does right now.
I think you're missing my point. They put that stuff in there to begin with for those higher end customers, but are trying to sell those same GPUs to normal consumers, and so they're trying to find uses for that stuff to justify the extra costs it brings (and that's why we're getting this half-baked RTX and DLSS stuff, because Nvidia rushed to make a case for that stuff in their gaming GPUs in order to try and justify these large, hot, and expensive GPU chips they're slapping on gaming cards).

Seemingly they think they need to sell the same GPUs in both markets versus making special ones for each and just disabling that stuff. Which, have fun trying to justify a more expensive and power hungry chip that doesn't bring performance improvements worth the extra costs. Heck, look at the backlash even with them finding uses for that stuff. Imagine if we had the same RTX GPUs but Nvidia wasn't also marketing a holy grail feature like ray-tracing.

We'll just have to disagree. Again, you're ignoring that I'm saying RTX cores are likely not nearly as specialized as Nvidia would have you believe. I didn't say AMD would get the same results (I said it's possible for them to use it for the same thing, not get the same results; although I wouldn't be surprised if they could outdo Nvidia on it if they had the software development, but that's probably the single biggest area where they're lacking compared to Nvidia). I mean, AMD and Nvidia have similar support for graphics, yet they don't get the same results. I don't believe things are really much different in other areas; likely the difference simply comes down to the software.

I disagree that their compute is actually much different than Nvidia's (no clue where you're getting this "generalized compute hardware" that AMD has compared to Nvidia), and I think we'll find RTX is just more compute hardware (that was there for things other than ray-tracing, but Nvidia is touting it for ray-tracing because it can be used for that) and is not nearly as specialized as you'd think. It's telling that, I believe, Nvidia has still not given any real detail about what an RTX core is, and I think that's because they're trying to hide that it isn't some super special secret magic ray-tracing bit.

AMD has often had substantially higher theoretical performance (and often that has heavily favored their compute side since around GCN 3). I think Fiji and Vega both had something like 30% higher theoretical performance compared to similar Nvidia products (Fiji compared to the 980 Ti/Titan, and Vega compared to the Pascal Titan Xp; Titan V was not much lower, but it was still lower than Vega 10, and even Titan RTX has lower theoretical performance than Vega 10). If you look at how strong Vega is in compute workloads, you see that. But because of AMD's substantially worse software development/support, they rarely ever realize that. We've seen AMD struggling on the software side with their GPUs across the board. That was why they tried to push Mantle: it had the potential to better utilize their hardware, but they couldn't get the traction. Heck, even with Microsoft (DX12) and the open source community (Vulkan) moving that way, it's still lacking in traction (because it requires more work, likely work that is going towards implementing it in next gen engines).

Most of the rest is marketing, like JHH trying to claim that AMD can't do ray-tracing or AI (he literally said of Radeon VII "there's no ray-tracing, no AI", even though that's straight up false). They've been doing both on their GPUs (just not in consumer space). And Vega does well in tensor loads, for instance (Vega 10, I think, was only topped by that $3000 Volta chip), but Nvidia would have you believe that isn't the case.

I think Microsoft's vision of ray-tracing was always a slow transition. Nvidia is just simply trying to market it as though they're already there. They clearly are not at all. But they want to make people think they're the only ones capable of ray-tracing right now, but Microsoft has straight up said DXR can run on hardware that already existed.
 

ozzy702

Senior member
Nov 1, 2011
986
203
136
#16
The next cycle of consoles won't use ray tracing in anything but extremely limited scenarios if at all. They aren't going to be powerful workhorses and AMD doesn't have dedicated ray tracing hardware like NVIDIA going into the consoles. They'll target true 4k @ 30fps as cheap as possible, maybe 4k 60fps with some fancy to the metal programming. For AMD to pull off ray tracing anywhere near as "good" as the current RTX cards they'll either need dedicated hardware or a massive increase in power, way way beyond Radeon VII. It's not going to happen on the desktop for AMD anytime soon, let alone consoles.
 
Last edited:

Mopetar

Diamond Member
Jan 31, 2011
4,525
458
126
#17
I think you're missing my point. They put that stuff in there to begin with for those higher end customers, but are trying to sell those same GPUs to normal consumers, and so they're trying to find uses for that stuff to justify the extra costs it brings (and that's why we're getting this half-baked RTX and DLSS stuff, because Nvidia rushed to make a case for that stuff in their gaming GPUs in order to try and justify these large, hot, and expensive GPU chips they're slapping on gaming cards).
No, I think I understood what you're saying, but I just believe that for NVidia to act that way would be foolish. They already have a separate card for compute (Tesla) and if their professional cards (Quadro) really needed this technology they'd not only be extolling the virtues of it for those workloads, but could also afford to create a separate card specifically for that.

Their mainline GeForce cards sell in volumes that are orders of magnitude higher than these specialty cards. It makes no business sense to develop for your more niche markets and then try to generalize. If NVidia wanted to charge higher prices to consumers, they could have spent the additional transistors on more CUDA cores and gotten even higher performance, which is what consumers care about the most.

If AMD were able to execute properly with their own graphics right now, they could absolutely smash NVidia, but unless Navi has some serious changes, it's unlikely that AMD will be able to punish NVidia in any significant way. I could certainly accept that NVidia failed to execute on their plan and their results have fallen well short of where they would have liked to be, but for them to intentionally do what you're suggesting doesn't make sense. You don't almost double the number of transistors for all of your chips in order to produce something that's only really needed by a tiny percent of your customers and try to flog it off as useful to everyone else. That simply doesn't happen.

The better explanation is that NVidia thought they were in a position to define what GPUs needed to be in the future (i.e. ray tracing capabilities) but couldn't get the performance they needed in order to make the feature compelling. If AMD comes in with a leaner architecture, expect NVidia to drop the specialized hardware in favor of more CUDA cores that will make them more competitive.
 

ozzy702

Senior member
Nov 1, 2011
986
203
136
#18
The better explanation is that NVidia thought they were in a position to define what GPUs needed to be in the future (i.e. ray tracing capabilities) but couldn't get the performance they needed in order to make the feature compelling. If AMD comes in with a leaner architecture, expect NVidia to drop the specialized hardware in favor of more CUDA cores that will make them more competitive.
Unless AMD works some kind of magic and makes a three-generation leap forward, I don't see them catching up with NVIDIA in efficiency, which means NVIDIA can still run RTX. I fully expect the percentage of GPU die dedicated to RTX to drop in the 3000 series, but I'll be surprised if NVIDIA ever drops RTX. They're going to move forward with it and refine it, and I think we'll see the 3000 series roll-out go a LOT smoother. The 2000 series was NVIDIA's first flub in a long time; I don't expect to see them make another big mistake anytime soon.
 
Sep 9, 2017
79
17
41
#19
Unless AMD works some kind of magic and makes a three-generation leap forward, I don't see them catching up with NVIDIA in efficiency, which means NVIDIA can still run RTX. I fully expect the percentage of GPU die dedicated to RTX to drop in the 3000 series, but I'll be surprised if NVIDIA ever drops RTX. They're going to move forward with it and refine it, and I think we'll see the 3000 series roll-out go a LOT smoother. The 2000 series was NVIDIA's first flub in a long time; I don't expect to see them make another big mistake anytime soon.
Exactly, you can't expect Next-gen consoles to run games at native 4K, with improved graphics over current-gen's and rendering ray tracing all while being efficient and cheap.

Unless they resort to checkerboard rendering and implement ray tracing only in very specific, highly optimized situations, like what Polyphony is doing.

If it isn't offering the level of visual improvement we're seeing in Metro Exodus, it's not going to be worth it.
 

maddie

Platinum Member
Jul 18, 2010
2,736
697
136
#20
No, I think I understood what you're saying, but I just believe that for NVidia to act that way would be foolish. They already have a separate card for compute (Tesla) and if their professional cards (Quadro) really needed this technology they'd not only be extolling the virtues of it for those workloads, but could also afford to create a separate card specifically for that.

Their mainline GeForce cards sell in volumes that are orders of magnitude higher than these specialty cards. It makes no business sense to develop for your more niche markets and then try to generalize. If NVidia wanted to charge higher prices to consumers, they could have spent the additional transistors on more CUDA cores and gotten even higher performance, which is what consumers care about the most.

If AMD were able to execute properly with their own graphics right now, they could absolutely smash NVidia, but unless Navi has some serious changes, it's unlikely that AMD will be able to punish NVidia in any significant way. I could certainly accept that NVidia failed to execute on their plan and their results have fallen well short of where they would have liked to be, but for them to intentionally do what you're suggesting doesn't make sense. You don't almost double the number of transistors for all of your chips in order to produce something that's only really needed by a tiny percent of your customers and try to flog it off as useful to everyone else. That simply doesn't happen.

The better explanation is that NVidia thought they were in a position to define what GPUs needed to be in the future (i.e. ray tracing capabilities) but couldn't get the performance they needed in order to make the feature compelling. If AMD comes in with a leaner architecture, expect NVidia to drop the specialized hardware in favor of more CUDA cores that will make them more competitive.
You said.
"The better explanation is that NVidia thought they were in a position to define what GPUs needed to be in the future (i.e. ray tracing capabilities) but couldn't get the performance they needed in order to make the feature compelling."

If this is true, then why has Microsoft worked with the industry to formalize an RT pathway for DX12?

I suggest that everyone in the industry agreed that RT would be the future, but Nvidia thought they could control said future by being first out of the box with proprietary hardware and software. If you change the word define to control in the statement, then I agree.
 
Oct 27, 2006
19,794
302
126
#21
Clearing the path for API standards and feature implementation is not an indication that old designs can reasonably run the features. Semiconductor development on the level of complexity involved in modern GPUs is almost ludicrously complex and subject to increasingly unpredictable obstacles during die shrinks and new process tech with their outsourced fabs such as TSMC and GF. In the real world here, Navi is long since 'done' in terms of fundamental design and feature set. AMD is almost obnoxiously vocal about their successes and features, be they great (Ryzen) or not so great (Polaris/Vega/Vega7).

If AMD knew they could do competitive raytracing with Navi, they'd be only too happy to add to the rain on Nvidia's parade at present, but they're not. Not even for the huge Navi 20 idea. Let alone the little Navi 10 that will be the foundation of PS5 and XBXX.

It's not impossible, but every bit of my intuition and experience watching GPU and pre-GPU semiconductor tech since the very early 8088/8086, Z80, and 6502 days tells me that Navi raytracing would be even less impressive than RTX 20xx raytracing, and doubly ineffective on the weaker console APU, since the inferred method would necessarily trade precious GPU cores and bandwidth away from a design that will already be borderline for 4K AAA gaming in the 9th gen.

The math doesn't work.
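To put a rough shape on what "trading GPU cores" means here: the per-node BVH box test that RT cores run in fixed function is simple arithmetic, it just has to repeat enormously often per ray, per frame, and that is the work compute shaders would have to absorb. A minimal CPU-side sketch of the standard slab test in Python (illustrative only; names are my own, not any engine's actual code):

```python
import math

def ray_aabb_hit(origin, direction, box_min, box_max):
    """Slab test: does a ray hit an axis-aligned bounding box?
    This is the per-node test a BVH traversal repeats for every ray;
    RT cores do it in fixed function, compute shaders in plain ALU code."""
    t_near, t_far = 0.0, math.inf
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            # Ray parallel to this slab: it must already lie inside it.
            if o < lo or o > hi:
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

# A ray aimed at the unit box hits; one offset to the side misses.
box = ((-1.0, -1.0, -1.0), (1.0, 1.0, 1.0))
print(ray_aabb_hit((0.0, 0.0, -5.0), (0.0, 0.0, 1.0), *box))  # True
print(ray_aabb_hit((5.0, 0.0, -5.0), (0.0, 0.0, 1.0), *box))  # False
```

One hit requires walking dozens of these nodes, and a 4K frame casts millions of incoherent rays, so even though each test is a handful of operations, doing it all in the shader units directly competes with the rest of the frame for ALU and bandwidth.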
 

Mopetar

Diamond Member
Jan 31, 2011
4,525
458
126
#22
Unless AMD works some kind of magic and makes a 3 generation leap forward, I don't see them catching up with NVIDIA in efficiency which means NVIDIA can still run RTX. I fully expect the % of GPU die dedicated to RTX to drop in the 3000 series, but I'll be surprised if NVIDIA drops RTX ever. They're going to move forward with it, refine it and I think we'll see the 3000 series roll out go a LOT smoother. The 2000 series was NVIDIA's first flub in a long time, I don't expect to see them make another big mistake anytime soon.
It’s not impossible. Zen showed that AMD could catch up to Intel pretty damned well. With their third generation parts AMD might even surpass Intel.

The problem AMD has with their GPUs is that they zigged when NVidia zagged, and they’ve lacked the budget to overhaul their architecture as rapidly as NVidia.

GCN was very much a response to NVidia going compute-heavy with their GPUs, and early on AMD was doing better competitively. However, NVidia had the resources to make a separate line of cards for the professional market and focused on making a better pure gaming card. As a result they fared better in the market than AMD, whose cards could theoretically have been better if anyone had bothered to make games that tapped into that compute power, but most companies didn’t care enough to bother.

NVidia already has their 3000 series in the pipe so it’s possible they’re similarly locked in to a bad strategy. They’ll be able to correct course more quickly than AMD could, but a 7 nm Turing doesn’t magically make the RT and Tensor core silicon more efficient. Sure they can pack more of it into a chip, but it will be horrendously expensive and a lot less competitive than an AMD GPU if they focus on making something that’s more of a pure gaming GPU.

When AMD does launch their next generation architecture, they will be much closer to parity with NVidia. They’ve clearly got talented engineers, but you can only take GCN so far. AMD will make an eventual return to glory, it’s just a matter of how long it will be until we see it.
 

ozzy702

Senior member
Nov 1, 2011
986
203
136
#23
It’s not impossible. Zen showed that AMD could catch up to Intel pretty damned well. With their third generation parts AMD might even surpass Intel.

When AMD does launch their next generation architecture, they will be much closer to parity with NVidia. They’ve clearly got talented engineers, but you can only take GCN so far. AMD will make an eventual return to glory, it’s just a matter of how long it will be until we see it.
Intel had horrendous delays on top of getting complacent. NVIDIA has been more competent in their execution and forward thinking than Intel, so I don't think it's an apt comparison. AMD "may" catch or slightly surpass Intel's four-year-old architecture while using a state-of-the-art, best available process.

In addition to 7nm I'd expect refined RT and Tensor cores, and likely a different balance of CUDA to RTX silicon. Sure, they'll be expensive, but supposedly there's a ton of capacity, and both AMD and NVIDIA know that consumers will pay big $$$ for GPUs. I expect plenty of low and mid range 3000 series offerings to be GTX cards as well, a la the GTX 1660.

We'll see. I wouldn't hold my breath on anything spectacular coming from AMD on the GPU front anytime soon. Navi may close the gap, but they're way behind NVIDIA in every metric for consumer gaming GPUs. I hope I'm wrong, I loved my 7970s and would love to own another AMD GPU for something other than mining.
 
Last edited:

Mopetar

Diamond Member
Jan 31, 2011
4,525
458
126
#24
If this is to be true, then why has Microsoft worked with the industry to formalize a RT pathway for DX12?

I suggest that everyone in the industry agreed that RT would be the future but Nvidia thought they could control said future by being 1st out the box with propriety hardware and software. If you change the word define to control in the statement, then I agree.
Because Microsoft wants new features so that they can push people to new versions of Windows. Companies do this all the time, but it's pretty clear that there isn't a lot of interest in doing RT unless NVidia drops a fat sack of cash in front of you to add it to your game.

You don't have to look too far back into NVidia's history to find other instances of the behavior that you describe. I can easily believe that they would try to get out in front of an industry standard. However, I think they've flubbed it here.

Intel had horrendous delays on top of getting complacent. NVIDIA has been more competent in their execution and forward thinking than Intel, so I don't think it's an apt comparison.
Turing is in some ways complacency on the part of NVidia. They obviously intended for performance to be much better, but they're clearly off their usual game here, and I don't think this would have happened if AMD were more competitive. Regardless of performance, the prices certainly indicate NVidia being complacent.

I don't think NVidia is quite screwed, because it seems like even if they wanted to stay the course with their dedicated RT hardware, they could probably drop the tensor cores to save a lot of space. DLSS clearly didn't pan out and makes me think the SS stands for super smearing. Jettison that for additional RT cores or more CUDA cores and I think they'll have something workable.
 

ozzy702

Senior member
Nov 1, 2011
986
203
136
#25
Because Microsoft wants new features so that they can push people to new versions of Windows. Companies do this all the time, but it's pretty clear that there isn't a lot of interest in doing RT unless NVidia drops a fat sack of cash in front of you to add it to your game.

You don't have to look too far back into NVidia's history to find other instances of the behavior that you describe. I can easily believe that they would try to get out in front of an industry standard. However, I think they've flubbed it here.



Turing is in some ways complacency on the part of NVidia. They obviously intended for performance to be much better, but they're clearly off their usual game here, and I don't think this would have happened if AMD were more competitive. Regardless of performance, the prices certainly indicate NVidia being complacent.

I don't think NVidia is quite screwed, because it seems like even if they wanted to stay the course with their dedicated RT hardware, they could probably drop the tensor cores to save a lot of space. DLSS clearly didn't pan out and makes me think the SS stands for super smearing. Jettison that for additional RT cores or more CUDA cores and I think they'll have something workable.
I disagree that NVIDIA has become complacent in regards to hardware development. Turing has advances outside of RTX, such as the ability to process floating point and integer instructions simultaneously, which in many cases is a huge efficiency uplift; in titles like Wolfenstein II the 2080 destroys the 1080 Ti. Yeah, NVIDIA took a huge risk and introduced unproven technology with RTX, but that's not complacency, and only time will tell if that risk was wise or not. At the moment it sure doesn't look like it was, but the tech is promising and I'll reserve judgement for the future. The fact of the matter is AMD is VERY far behind, and we have little information regarding Navi, so everything is speculation. We know more or less what NVIDIA's 3000 series will bring, and it should be great from a performance standpoint, albeit likely with worse performance/price than any of us would like to see.
 
Last edited:
