So you are saying that the PS5 will have better RT performance than an RTX 2080Ti?
This also doesn't change much, as I'm very skeptical that AMD would massively eclipse, say, a 2080 in ray tracing capability in a console
Yes, the consoles will likely have much more peak RT throughput than a 2080. There is an asterisk, though -- based on patents and leaked statements, the nVidia and AMD implementations are fundamentally different. nVidia has entirely separate "RT" cores, which do fixed-function RT and nothing else. AMD's solution, by contrast, appears to be to add intersection-test hardware to their TMUs and then run the outer loop of the raytracer in shaders.
Using only the existing DXR interface, the difference between these approaches is that nV has a small fixed part of each GPU dedicated to RT with the rest doing normal graphics, and how much you use one has little impact on how much of the other you have available. In contrast, on the AMD side you can directly trade off between running more RT and more normal shaders. So the peak RT throughput when doing nothing else would likely be much higher on the AMD GPU; on the other hand, if you are running a mixed RT pipeline and the split between RT and traditional compute power happens to suit your workload, the nV approach is probably more efficient ( = more throughput for the same power).
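To make that tradeoff concrete, here is a toy model in C++. The unit numbers are completely made up for illustration (they are not real hardware figures): a dedicated-unit design keeps a constant RT budget no matter what the shaders are doing, while a shared-pool design trades shading work for RT work.

```cpp
// Toy model of the two resource strategies. The numbers are invented purely
// to show the shape of the tradeoff -- they are not real hardware specs.
#include <cstdio>

int main() {
    const double dedicated_rt = 20.0;  // "nV-style": fixed RT-core budget
    const double shared_pool  = 100.0; // "AMD-style": shaders usable for either

    for (int pct = 0; pct <= 100; pct += 25) {
        double shading = pct / 100.0;                   // fraction busy shading
        double nv_rt   = dedicated_rt;                  // unaffected by shading load
        double amd_rt  = shared_pool * (1.0 - shading); // whatever is left over
        std::printf("shading %3d%% | nV RT: %5.1f | AMD RT: %5.1f\n",
                    pct, nv_rt, amd_rt);
    }
    return 0;
}
```

At zero shading load the shared pool has far more peak RT, but once most of the pool is busy with ordinary shading the fixed budget pulls ahead -- which is the efficiency point above.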
The AMD approach is also particularly interesting on consoles, in that they will probably allow us lowly programmers to mess with the shader that does the RT loop. And that opens up all kinds of interesting opportunities...
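To illustrate what "the outer loop in shaders" could mean, here is a rough C++ sketch. All names, the node layout, and the division of work are guesses based on the patent description -- nothing here is AMD's actual interface. The intersection test is a fixed-function black box; the traversal loop around it is ordinary code, which is the part a programmer could customize.

```cpp
// Conceptual sketch of shader-driven BVH traversal. intersect_box() stands in
// for the fixed-function intersection hardware in the TMUs; everything around
// it is plain (shader) code.
#include <algorithm>
#include <vector>

struct Ray { float o[3], d[3]; };                  // origin, direction

struct BvhNode {
    float lo[3], hi[3];                            // axis-aligned bounding box
    int   child[2];                                // internal node: child indices
    int   prim_id;                                 // leaf: polygon id, else -1
};

// Stand-in for the hardware box test: classic slab test, returns entry distance.
// (Real hardware would also do ray/triangle tests at the leaves.)
bool intersect_box(const BvhNode& n, const Ray& r, float& t_near) {
    float t0 = 0.0f, t1 = 1e30f;
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / r.d[a];
        float ta = (n.lo[a] - r.o[a]) * inv;
        float tb = (n.hi[a] - r.o[a]) * inv;
        if (ta > tb) std::swap(ta, tb);
        t0 = std::max(t0, ta);
        t1 = std::min(t1, tb);
        if (t0 > t1) return false;                 // slabs don't overlap: miss
    }
    t_near = t0;
    return true;
}

// The "outer loop": since this is just code, you could reorder traversal,
// terminate rays early, do LOD tricks, etc. -- the opportunities hinted at.
int trace(const std::vector<BvhNode>& bvh, const Ray& ray) {
    if (bvh.empty()) return -1;
    int nearest = -1;
    float nearest_t = 1e30f;
    std::vector<int> stack{0};                     // start at the root
    while (!stack.empty()) {
        const BvhNode& n = bvh[stack.back()];
        stack.pop_back();
        float t;
        if (!intersect_box(n, ray, t) || t >= nearest_t)
            continue;                              // missed, or already beaten
        if (n.prim_id >= 0) {                      // leaf: record the hit (a real
            nearest = n.prim_id;                   // tracer would do a triangle
            nearest_t = t;                         // test here, not reuse box t)
        } else {                                   // internal: visit children
            stack.push_back(n.child[0]);
            stack.push_back(n.child[1]);
        }
    }
    return nearest;                                // id of first polygon hit
}
```

Because the loop is yours, things like custom early-outs, traversal reordering, or per-ray statistics become possible in a way a fixed-function black box does not allow.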
I can't see what ray tracing actually is. I have looked at a few videos, but I do not notice anything when comparing RTX on and off.
Ray tracing is a fundamentally different approach to rendering graphics. To grossly simplify it, rasterization (the traditional way that almost everything uses) works by going through the list of all polygons in the scene, figuring out whether each polygon is visible, and if so, transforming it to fit the screen and drawing it there. Ray tracing works by starting at each pixel in your screen and "shooting a ray" into the scene, figuring out which polygon you hit first, and then drawing that pixel.
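A minimal C++ sketch of that structural difference, with hypothetical helper functions (visible, project, first_hit, shade, ...) declared but not implemented -- this shows the shape of the two algorithms, not a real renderer:

```cpp
#include <vector>

struct Camera {}; struct Polygon {}; struct Polygon2D {};
struct Ray {};    struct Hit {};     struct Color {};

struct Scene  { std::vector<Polygon> polygons; Camera camera; };
struct Screen { int width = 0, height = 0; void set(int, int, Color) {} };

// Hypothetical helpers, left as declarations.
bool      visible(const Polygon&, const Camera&);
Polygon2D project(const Polygon&, const Camera&);
void      draw(const Polygon2D&, Screen&);
Ray       camera_ray(const Camera&, int x, int y);
Hit       first_hit(const Scene&, const Ray&);
Color     shade(const Hit&, const Scene&);

// Rasterization: outer loop over the scene's polygons.
void rasterize(const Scene& scene, Screen& screen) {
    for (const Polygon& poly : scene.polygons) {
        if (!visible(poly, scene.camera)) continue;   // cull what can't be seen
        draw(project(poly, scene.camera), screen);    // map to screen and fill
    }
}

// Ray tracing: outer loop over the screen's pixels.
void raytrace(const Scene& scene, Screen& screen) {
    for (int y = 0; y < screen.height; ++y)
        for (int x = 0; x < screen.width; ++x) {
            Ray ray = camera_ray(scene.camera, x, y); // "shoot a ray"
            Hit hit = first_hit(scene, ray);          // nearest polygon hit
            screen.set(x, y, shade(hit, scene));      // draw that pixel
        }
}
```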
Both approaches have their strengths and weaknesses. The big strength of rasterization is that it is more amenable to hardware acceleration -- doing linear passes over memory is just fundamentally more efficient than jumping around it at random. This is why it's the traditional approach. The weakness is that there is no way to do properly physically realistic lighting: all lighting systems for rasterization are hacks, some of which look better than others. Also, many things that light does easily in reality are really hard/expensive to implement in rasterization -- have you ever wondered why there are so few mirrors in games?
The big strength of RT is that you are essentially simulating the path of a photon, only backwards, so what you are doing is much closer to physical reality. This means that doing lighting "right" is almost the easiest option, and it's fairly straightforward to implement any kind of manipulation of light that happens in the real world. This is why raytracing is used so much for movie effects -- it can produce scenes that are actually photorealistic, in the literal sense of being indistinguishable from photographs, or reality.
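The mirror question from above is a good example: in a raytracer, a mirror is just one more bounce. A rough sketch, reusing hypothetical placeholder types and helpers in the spirit of the earlier snippet (none of this is a real API):

```cpp
// Mirrors in a raytracer: when a ray hits a reflective surface, reflect it
// and trace again -- following the photon's path in reverse.
struct Scene;                                     // opaque here
struct Ray   { float o[3], d[3]; };
struct Color { float r, g, b; };
struct Hit   { bool valid; bool is_mirror; /* position, normal, ... */ };

// Hypothetical helpers, declared but not implemented.
Hit   first_hit(const Scene&, const Ray&);
Ray   reflect(const Ray&, const Hit&);            // bounce off the surface normal
Color background(const Ray&);
Color direct_lighting(const Scene&, const Hit&);

Color shade(const Scene& scene, const Ray& ray, int depth = 0) {
    Hit hit = first_hit(scene, ray);
    if (!hit.valid)
        return background(ray);                   // ray escaped the scene
    if (hit.is_mirror && depth < 8) {             // cap recursion to bound cost
        Ray bounced = reflect(ray, hit);
        return shade(scene, bounced, depth + 1);  // a mirror is just one more trace
    }
    return direct_lighting(scene, hit);           // ordinary (non-mirror) surface
}
```

Compare that with rasterization, where a decent mirror typically means re-rendering the whole scene from the mirrored viewpoint, or faking it with a screen-space approximation.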
Why are all the existing RT effects so lame, then? Because right now practically no-one owns RT-capable hardware, and as always, no-one makes games for just the high end. Sponsorship from nV and technical curiosity are enough to get a few interesting RT effects added to games, but not much more. Actual broad use of RT will follow the consoles, because they will be the first platform where every customer can be counted on to have access to it. How pervasive it actually becomes will of course depend on exactly how good the consoles turn out to be at it.