Question Ray Tracing is in all next gen consoles


Muhammed

Senior member
Jul 8, 2009
453
199
116
PS5 is now confirmed to have hardware RT, meaning ray tracing will now be the standard in making games. RTX will spread into even more games. I am interested to know what those who thought RT would never be mainstream think now?



The straw man in your question is trolling.

AT Moderator ElFenix
 
Last edited by a moderator:
  • Haha
Reactions: JPB

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Probably not. It likely just means it can run games at double the framerate of the XBox One X, so it'll probably be something similar to a 5700 XT.

At this point it is safe to assume that we are looking at at least 12 TFLOPS. It needs to be quite a bit faster than the 5700 XT, plus it'll have HW raytracing acceleration. Personally I would have liked NVidia providing the GPU, but alas they are going with AMD.
 

DeathReborn

Platinum Member
Oct 11, 2005
2,746
741
136
The GPU in the Series X does seem like a step above the 5700 XT, but unless it has a memory bus wider than 256-bit I fear it will be bandwidth limited, especially with RT active.
 

SteveGrabowski

Diamond Member
Oct 20, 2014
6,893
5,827
136
At this point it is safe to assume that we are looking at at least 12 TFLOPS. It needs to be quite a bit faster than the 5700 XT, plus it'll have HW raytracing acceleration. Personally I would have liked NVidia providing the GPU, but alas they are going with AMD.

12 TF FP32 was the rumor that I had heard a long time ago. It works out since the XBX is 6.

I think Spencer would have been screaming from the rooftops if it were a 12 TFLOPS GPU, instead of giving a kind of vague "double the GPU power" answer. I don't buy for a second that the Xbox would have a GPU with 20% more computational power than an RTX 2080 (which is roughly 10 TFLOPS FP32).
 
  • Like
Reactions: psolord

Ottonomous

Senior member
May 15, 2014
559
292
136
AMD isn't exactly cheap either these days. Not like the glory days of $200 R9 290.
Interestingly, a Velka 3/5-based 2060/5700 build would be a great investment against the new Xbox. Same form factor and probably a massive premium upfront, but the upgradeable components would make up for it.

I would be wary about buying the first iteration of the new console anyway, knowing I might buy an updated version 3-4 years on for the better gaming experience.
 

soresu

Platinum Member
Dec 19, 2014
2,662
1,862
136
Personally I would have liked NVidia providing the GPU
I don't know what it was like with MS with the original Xbox, but from what I've read Sony's relationship with nVidia over the RSX was not a positive one over the PS3's lifetime.

That, and the BW compat bonus of staying AMD to benefit from RDNA uArch compatibility with legacy Wave64 code means it was a done deal - their software catalogs will span everything now.
It needs to be quite a bit faster than the 5700 XT
Considering PS4 Pro is slaughtering XB1X in sales despite a 1.8 TFLOP disparity in FP32 perf - I'd say absolute perf is less important than good exclusives with great design and artistry (Substance Painter/Designer use has helped hugely on the artistry side recently IMHO).

Yes I know about PS4P's 8.4 TFLOPS FP16, but the gains are not that big going by the few big devs that announced figures on FP16 RPM benefits.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
I don't know what it was like with MS with the original Xbox, but from what I've read Sony's relationship with nVidia over the RSX was not a positive one over the PS3's lifetime.

That, and the BW compat bonus of staying AMD to benefit from RDNA uArch compatibility with legacy Wave64 code means it was a done deal - their software catalogs will span everything now.

I'd argue that backwards compatibility has more advantages on the development side than for the end users. With projects getting bigger, developers can't keep affording to constantly refactor the lowest-level parts of their codebase. Otherwise you end up with toxic platforms such as Apple's, where OpenGL gets deprecated and 32-bit applications stop running altogether until the authors behind them transition to Apple's newest shiny low-level frameworks; most will simply opt out of that hell ...

With backwards compatibility, developers won't have to worry about when, or even whether, their applications will keep working in the foreseeable future, because platform vendors promise to protect their previous investments ...

We can't have the holy trinity of performance, binary portability, and low-maintenance code. By tying themselves down to AMD hardware, the console vendors are making a reasonable tradeoff: sacrificing binary portability for high-performance, low-maintenance code. Sure, AMD may not be perfect, but what they're offering is arguably more sound for their customers than what the competition offers. Nvidia? Who on earth would want to maintain their monstrosity of a driver stack full of hacks, when AMD already has a far bigger driver team than any of the main console vendors? (Not even Nintendo wants to maintain Nvidia's drivers, so they opted to create NVN instead, which means they aren't all that concerned about backwards compatibility, since it's clearly not binary compatible with Nvidia's newer hardware designs.)

I see backwards compatibility as more of a way to enhance developer productivity, since the feature is highly conducive to long-term investment. PC-centric developers won't have any more excuses not to release their applications on consoles, since the vendors behind them can guarantee that their work won't be as easily scrapped ...
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
AMD isn't exactly cheap either these days. Not like the glory days of $200 R9 290.

I think the console manufacturers are getting a deal for less expensive parts since they're in part subsidizing R&D for AMD's graphics division. A lot of people are quick to point out that AMD doesn't spend nearly as much as either Intel or NVidia even though they're making both CPUs and GPUs (though to be fair, both Intel and NVidia have other product categories), and that they manage to be competitive at all is something of a marvel. I suspect that the dirty secret is they made a deal with the console manufacturers that allowed them to get by on that kind of shoestring budget.

Couple that with the hardware security fracas with the Tegra X1 on the Switch, and they really seem to suck as a hardware partner for console makers.

Apple quit working with them as well. NVidia makes great hardware, but they definitely get marked down in the "plays well with others" part of their report card.
 

SteveGrabowski

Diamond Member
Oct 20, 2014
6,893
5,827
136
Then you need to do better searching; there are great deals from AMD, from the RX 570 up to the 5700 XT:
Top 5 Best GPUs of 2019, RX 5500 Update
spoiler: Polaris lives ;)

Also, the card that you mentioned, the R9 290, was $399, not $200...

The R9 290 regularly sold for $230 from a few weeks after the GTX 970 came out in 2014 until the R9 390 released in the summer of 2015. In November 2014 you could even find high-end ones, like the Sapphire triple-fan R9 290 (I forget the model name now), for $200. It also came with 4 games from the Never Settle bundle, and then the new Civ game was thrown in too around Christmas. It's easily the best GPU deal I have ever seen; it would be like getting a 2070 Super for $250 today.
 
Last edited:
  • Like
Reactions: beginner99

RetroZombie

Senior member
Nov 5, 2019
464
386
96
The R9 290 regularly sold for $230 from a few weeks after the GTX 970 came out in 2014 until the R9 390 released in the summer of 2015. It's easily the best GPU deal I have ever seen; it would be like getting a 2070 Super for $250 today.
You have just summed up AMD and Nvidia in your post.
AMD can cut the price of 'one' card in half at a later date; Nvidia doesn't do that. For example, I was expecting Nvidia to cut the prices of their non-Super cards when the Super line launched, and that didn't happen.
 

ShookKnight

Senior member
Dec 12, 2019
646
658
96
I think my eyes are broken.

I can't see what ray tracing actually is. I have looked at a few videos, but I do not notice anything when comparing RTX on or off.

Nonetheless, I upgraded my gaming PC about 2 years ago... with a card that was already 2 years old. In about a year I will upgrade/rebuild, but it will have to be a micro/compact PC. I am done with bulky PCs... but there is no way in hell I am going to lean on gimped consoles that get 'upgraded' 3 to 4 times within their life cycle.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,351
1,537
136
So you are saying that PS5 will have a better RT performance than an RTX 2080Ti?
This also doesn't change much, as I'm very skeptical that AMD would massively eclipse say a 2080 in ray tracing capability in a console

Yes, the consoles will likely have much more peak RT throughput than a 2080. There is an asterisk, though -- based on patents and leaked statements, the nVidia and AMD implementations are fundamentally different: nVidia has entirely separate "RT" cores, which do fixed-function RT and nothing else. Instead of doing this, the AMD solution appears to be to add intersection test hardware to their TMUs, and then run the outer loop of the raytracer in shaders.

Using only the existing DXR interface, the difference between these approaches is that nV has a small fixed portion of each GPU dedicated to RT and the rest doing normal graphics, and how much you use of either one has little impact on how much of the other you have available. In contrast, on the AMD side it's possible to directly trade off between running more RT or more normal shaders. So the peak RT throughput when doing nothing else would likely be much higher for the AMD GPU; but on the other hand, if you are running a mixed RT pipeline and the split between RT and traditional computing power works for you, the nV approach is probably more efficient ( = more throughput for the same power).

The AMD approach is also particularly interesting in consoles in that they will probably allow us lowly programmers to mess with the shader that does the RT loop. And there are all kinds of interesting opportunities this opens up...

I can't see what ray tracing actually is. I have looked at a few videos, but I do not notice anything when comparing RTX on or off.

Ray tracing is a fundamentally different approach to rendering graphics. To grossly simplify it, rasterization (the traditional way that almost everything uses) works by going through the list of all polygons in the scene, figuring out if a polygon is visible, and if yes then transforming it to fit the screen and drawing it there. Ray tracing works by starting at each pixel in your screen and "shooting a ray" into the scene, figuring out which polygon you hit first, and then drawing that pixel.
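To make that concrete, here is a deliberately tiny C++ sketch of the ray tracing loop described above - a toy written for this post, not anyone's real renderer or API. It uses a couple of spheres instead of polygons and a made-up one-light shading rule, but the structure is exactly the second approach: for every pixel, shoot a ray into the scene, find the nearest thing it hits, and color the pixel from that hit.

```cpp
// Toy ray tracer illustrating "shoot a ray per pixel". Scene and shading are invented.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec { double x, y, z; };
Vec operator-(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec operator*(Vec a, double s) { return {a.x * s, a.y * s, a.z * s}; }
double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec normalize(Vec a) { return a * (1.0 / std::sqrt(dot(a, a))); }

struct Sphere { Vec center; double radius, brightness; };

// Distance along the ray to the nearest hit, or negative if the ray misses.
double hitDistance(const Sphere& s, Vec origin, Vec dir) {
    Vec oc = origin - s.center;
    double b = dot(oc, dir);
    double disc = b * b - (dot(oc, oc) - s.radius * s.radius);
    return disc < 0 ? -1.0 : -b - std::sqrt(disc);
}

int main() {
    const int W = 160, H = 120;
    std::vector<Sphere> scene = {
        {{0.0, 0.0, -3.0}, 1.0, 1.0},    // sphere straight ahead of the camera
        {{1.5, 0.3, -4.0}, 0.7, 0.6},    // smaller, dimmer sphere behind it
    };
    Vec light = normalize({-1, 1, 1});   // direction toward a single light

    std::printf("P2\n%d %d\n255\n", W, H);          // plain grayscale PGM on stdout
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // "Shoot a ray" from the eye at the origin through this pixel.
            Vec dir = normalize({(x - W / 2.0) / W, (H / 2.0 - y) / H, -1.0});
            double nearest = 1e30, shade = 0.0;     // background stays black
            for (const Sphere& s : scene) {         // which primitive does the ray hit first?
                double t = hitDistance(s, {0, 0, 0}, dir);
                if (t > 0 && t < nearest) {
                    nearest = t;
                    Vec n = normalize(dir * t - s.center);                 // surface normal at the hit
                    shade = s.brightness * std::fmax(0.0, dot(n, light));  // toy diffuse shading
                }
            }
            std::printf("%d ", static_cast<int>(shade * 255));
        }
        std::printf("\n");
    }
}
```

A rasterizer would effectively flip the loops: walk the list of primitives and scatter them onto the screen, instead of walking the pixels and gathering from the scene.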

Both approaches have their strengths and weaknesses. The big strength of rasterization is that it is more amenable to hardware acceleration. Doing linear passes in memory is just fundamentally more efficient than picking things here or there. This is why it's the traditional approach. The weakness of it is that there is no way to do proper physically realistic lighting -- all lighting systems for rasterization are hacks, some look better than others. Also many things that light does easily in reality are really hard/expensive to implement in rasterization -- have you ever wondered why there are so few mirrors in games?

The big strength of RT is that you are essentially simulating the path of the photon, only backwards, so what you are doing is much closer to physical reality. This means that doing lighting "right" is almost the easiest way to do it, and it's fairly straightforward to implement any kind of manipulation of light that happens in the real world. This is why raytracing is used a lot for movie effects -- it can provide scenes that are actually photorealistic, in the literal sense of being indistinguishable from photographs, or reality.

Why are all the existing RT effects so lame then? Because right now practically no-one owns RT-capable hardware, and as always, no-one makes games for just the high end. Support from nV and technical curiosity are enough to justify adding a few interesting RT effects into games, but little else. Actual broad use of RT will follow the consoles, because they will be the first platform where every customer can be counted on to have access to it. How pervasive it actually becomes will of course depend on exactly how good the consoles are at it.
 

RetroZombie

Senior member
Nov 5, 2019
464
386
96
Yes, the consoles will likely have much more peak RT throughput than a 2080.
Thank you Tuna-Fish for the excellent detailed explanation and information.

I was wondering about what darkswordsman17 said, and also what you said, about the AMD implementation.
The Nvidia RTX implementation puts a huge cost on game performance: a game like Battlefield V (ultra settings) at 1440p on an Nvidia 2080 Super does 111 fps without RTX and 57 fps with RTX enabled.

If AMD's RT implementation has a much lower overhead (a much lower % of performance lost), they could claim 2080 Super performance in the consoles when ray tracing is enabled and do it with a much weaker GPU.

If you are wondering which Nvidia card does 57 fps at 1440p on Battlefield V (high quality) without RTX, it's something like the GTX 1060 or the GTX 1650 Super.
AMD only needs something a little more powerful than those, plus a much lower performance hit than Nvidia's. I think it's not impossible...
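To put rough numbers on that (using the BF V figures above and a purely hypothetical 20% hit for AMD): 57 fps out of 111 fps means the 2080 Super loses about 49% of its performance with RTX on in that test. A console GPU that hypothetically lost only 20% with RT enabled would only need around 71 fps of raw rasterization performance, since 71 × 0.8 ≈ 57, to post the same with-RT number.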
 
Last edited:

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Yes, the consoles will likely have much more peak RT throughput than a 2080. There is an asterisk, though -- based on patents and leaked statements, the nVidia and AMD implementations are fundamentally different: nVidia has entirely separate "RT" cores, which do fixed-function RT and nothing else. Instead of doing this, the AMD solution appears to be to add intersection test hardware to their TMUs, and then run the outer loop of the raytracer in shaders.

Not sure where you got the idea that with the NVidia implementation you don't run the outer loop in shaders - but this is wrong. In fact you can only cast rays from within a shader - which rays you cast is totally determined by the shader program. Likewise, what happens when a triangle is hit/intersected by a ray is again defined in a shader. This hit shader can, for instance, cast additional rays towards the light sources in order to enable shadows.
In essence, NVidia gave their shaders the ability to cast rays as an additional instruction/function; the RT HW does the intersection test and can dynamically trigger additional shaders based on the intersected primitive. All the color/per-fragment calculations, including texture lookups, are done as usual - just with the difference that the shader could have been triggered by a ray which was cast by another shader.
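A rough way to picture that flow is the C++ mock below. To be clear, this is not real DXR or HLSL code (in reality these would be HLSL ray-generation and hit shaders calling TraceRay into the DXR runtime), and the names and the toy one-plane "scene" are invented here. The only thing it tries to mirror is the call structure described above: a shader decides which rays to cast, the RT hardware finds the intersection and triggers a hit shader, and that hit shader can itself cast further rays, e.g. a shadow ray towards the light.

```cpp
// Conceptual mock of the DXR-style control flow described above, in plain C++.
// Not real DXR/HLSL; it only mimics the shape of the call graph.
#include <cstdio>
#include <optional>

struct Ray { float height; float slope; };   // toy 1D "scene": only height matters
struct Hit { float distance; };

// Stand-in for the RT hardware: intersect the ray against a ground plane at
// height 0 and report the closest hit, if any.
std::optional<Hit> traceRay(const Ray& ray) {
    if (ray.slope >= 0.0f) return std::nullopt;   // pointing up or level: no hit
    return Hit{ -ray.height / ray.slope };        // distance to the plane
}

// "Hit shader": runs for whatever the ray intersected. It may cast additional
// rays - here, one from the hit point toward the light to test for shadowing.
float hitShader(const Hit& hit, const Ray& incoming) {
    float hitHeight = incoming.height + incoming.slope * hit.distance;  // ~0 on the plane
    Ray shadowRay{ hitHeight, +1.0f };            // straight up toward the light
    bool blocked = traceRay(shadowRay).has_value();
    return blocked ? 0.1f : 1.0f;                 // darker if something is in the way
}

// "Ray generation shader": decides which rays to cast (one per pixel here)
// and combines the results, just like any other shader program would.
int main() {
    for (int px = 0; px < 8; ++px) {
        Ray primary{ 1.0f, -0.1f * static_cast<float>(px) };   // tilt further down each pixel
        if (auto hit = traceRay(primary))
            std::printf("pixel %d: lit %.1f (hit at t = %.1f)\n", px, hitShader(*hit, primary), hit->distance);
        else
            std::printf("pixel %d: background\n", px);
    }
}
```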
 
  • Like
Reactions: Muhammed

Tuna-Fish

Golden Member
Mar 4, 2011
1,351
1,537
136
Not sure where you got the idea that with the NVidia implementation you don't run the outer loop in shaders - but this is wrong. In fact you can only cast rays from within a shader - which rays you cast is totally determined by the shader program. Likewise, what happens when a triangle is hit/intersected by a ray is again defined in a shader. This hit shader can, for instance, cast additional rays towards the light sources in order to enable shadows.
In essence, NVidia gave their shaders the ability to cast rays as an additional instruction/function; the RT HW does the intersection test and can dynamically trigger additional shaders based on the intersected primitive. All the color/per-fragment calculations, including texture lookups, are done as usual - just with the difference that the shader could have been triggered by a ray which was cast by another shader.

Sorry, I wasn't clear enough. In the nV implementation, the outer loop of the raytracer is in a shader, but the "cast ray" operation is a single operation. In the AMD implementation, a single intersection test against the acceleration structure is a single operation (implemented in the TMUs); there needs to be a shader program that loops over such tests until it has a result.
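In other words, the difference is where the traversal loop lives. The pseudo-C++ below is invented for illustration (the node layout, the function names and the tiny hand-built BVH are all made up, and real hardware obviously doesn't work on std::vector); it only tries to show the granularity contrast read out of the patents and statements above: one opaque "cast ray" operation on the nV side, versus a shader-visible loop on the AMD side in which each box/triangle test is the single hardware (TMU) operation.

```cpp
// Illustration of the granularity difference described above; nothing here is a
// real instruction set or driver API. The point is only *where* the loop lives.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <utility>
#include <vector>

struct Ray { float origin[3]; float dir[3]; };

struct BvhNode {                 // hypothetical acceleration-structure node
    float bounds[6];             // AABB: min x/y/z, max x/y/z
    int32_t firstChild;          // index of first of two children, or -1 for a leaf
    int32_t triangleId;          // payload, only meaningful for leaves
};

// The single hardware operation in the "AMD style" description: one ray vs one
// node box test (a plain slab test here).
bool intersectNodeOp(const BvhNode& n, const Ray& r) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / r.dir[a];
        float t0 = (n.bounds[a] - r.origin[a]) * inv;
        float t1 = (n.bounds[a + 3] - r.origin[a]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// "AMD style": the outer traversal loop is an ordinary shader program; only the
// per-node test above is fixed-function.
int castRayShaderLoop(const std::vector<BvhNode>& bvh, const Ray& ray) {
    std::vector<int> stack = {0};                       // start at the root
    int hit = -1;
    while (!stack.empty()) {
        int i = stack.back(); stack.pop_back();
        if (!intersectNodeOp(bvh[i], ray)) continue;    // one "hardware" test per node
        if (bvh[i].firstChild < 0) { hit = bvh[i].triangleId; continue; }   // leaf
        stack.push_back(bvh[i].firstChild);             // interior: visit both children
        stack.push_back(bvh[i].firstChild + 1);
    }
    return hit;   // a real traversal would also track the nearest hit distance
}

// "nV style": the whole traversal is one opaque operation from the shader's point
// of view. Here it just reuses the loop above so the example actually runs.
int castRayFixedFunction(const std::vector<BvhNode>& bvh, const Ray& ray) {
    return castRayShaderLoop(bvh, ray);
}

int main() {
    // Tiny hand-built BVH: a root box with two leaf children.
    std::vector<BvhNode> bvh = {
        {{-2, -2, -2,  2, 2, 2},  1, -1},   // root, children at indices 1 and 2
        {{-2, -2, -2,  0, 2, 2}, -1,  7},   // left leaf, holds "triangle" 7
        {{ 0, -2, -2,  2, 2, 2}, -1,  8},   // right leaf, holds "triangle" 8
    };
    Ray ray{{-1.0f, 0.0f, -5.0f}, {0.0f, 0.0f, 1.0f}};
    std::printf("shader-loop traversal hit: %d\n", castRayShaderLoop(bvh, ray));
    std::printf("opaque cast-ray call hit:  %d\n", castRayFixedFunction(bvh, ray));
}
```

The practical upshot, as noted earlier, is that on the AMD side that loop is ordinary shader code that console developers could potentially be allowed to modify.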
 
  • Like
Reactions: Thala