"NVIDIA RTX demo is also using path tracing."
Yes it is, but:
1. I highly doubt they managed to recode all the demo's shaders into proper DXR 1.1, with an optimal inline-RT implementation, in two weeks. It probably ran on DXR 1.0 (which is the more straightforward port) and therefore ran sub-optimally on RDNA hardware. And yes, I'm aware that Turing supports DXR 1.1, but I have a hunch it doesn't benefit from the new programming model as much as the new consoles do (while I'm just as certain that Ampere does). There is a big difference between the two models; see the first sketch after this list.
2. The Xbox Series X is overall surprisingly competitive with high-end Turing (compare Gears of War, for instance). It may well turn out that it can't beat the 2080 Ti in ray tracing (the best Turing has to offer this generation), but we certainly don't know that yet from this apples-to-oranges comparison. Optimizing code for a specific GPU architecture takes time (just consider how long RT Minecraft has been in development); it's too early to throw around 100% certainties. Furthermore, if you want to extrapolate AMD desktop RDNA performance from the Xbox, bear in mind that it won't be a 52-CU fixed-clock GPU. Big Navi is rumored to have 80 CUs and will certainly have more bandwidth (which is a very big necessity for RT).
3. Compare to "your own size". Nvidia's RTX 2080 Ti is a 754 mm² GPU. Even shrunk to 7nm, that monster would be larger than the entire Xbox Series X chip (360.45 mm²), let alone its GPU portion. In a similar shrink from the 980 Ti (28nm, 601 mm²) to the 1080 (16nm, 314 mm²), Nvidia managed to almost halve the die size, but it also cut the memory bus from 384 bit to 256 bit and dropped 512 shader cores from the die in the process (the 980 Ti ships with 2816 enabled; the chip itself has 3072, some disabled). The second sketch after this list spells out the arithmetic.
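On point 1, here's a minimal host-side C++ sketch of the difference between the two DXR programming models, assuming all the heavy lifting (device, pipeline objects, shader tables) was created elsewhere; the function names and the 8x8 thread-group size are my own illustration, not anything from the demo:

```cpp
#include <d3d12.h>

// DXR 1.0: a dedicated raytracing state object plus shader tables.
// The GPU schedules raygen/hit/miss shaders dynamically, which is the
// part that can map poorly onto some architectures.
void DispatchDxr10(ID3D12GraphicsCommandList4* cl,
                   ID3D12StateObject* rtPipeline,
                   const D3D12_DISPATCH_RAYS_DESC& rays)
{
    cl->SetPipelineState1(rtPipeline);  // raytracing-specific pipeline
    cl->DispatchRays(&rays);            // raygen shader spawns the rays
}

// DXR 1.1: inline raytracing. A plain compute shader calls
// RayQuery::TraceRayInline in HLSL, so there is no separate raytracing
// pipeline and no shader tables -- just an ordinary Dispatch.
void DispatchDxr11Inline(ID3D12GraphicsCommandList* cl,
                         ID3D12PipelineState* computePso,
                         UINT width, UINT height)
{
    cl->SetPipelineState(computePso);                    // normal compute PSO
    cl->Dispatch((width + 7) / 8, (height + 7) / 8, 1);  // assumed 8x8 groups
}
```

A straight DXR 1.0 port keeps the first model, which is exactly the scenario I suspect the demo was in.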
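And for point 3, the back-of-the-envelope numbers (the ~52% shrink factor is just the GM200-to-GP104 precedent applied to TU102 as an assumption, not a real product):

```cpp
#include <cstdio>

int main()
{
    const double tu102_mm2   = 754.0;   // RTX 2080 Ti die, 12nm
    const double xsx_soc_mm2 = 360.45;  // entire Series X SoC (CPU + GPU + I/O), 7nm
    const double gm200_mm2   = 601.0;   // 980 Ti die, 28nm
    const double gp104_mm2   = 314.0;   // 1080 die, 16nm

    // The 28nm -> 16nm precedent kept ~52% of the area, and even that
    // required cutting the bus (384 -> 256 bit) and 512 shader cores.
    const double shrink = gp104_mm2 / gm200_mm2;
    std::printf("GM200 -> GP104 area ratio: %.0f%%\n", 100.0 * shrink);

    // A hypothetical 7nm TU102 at the same ratio lands around 394 mm^2:
    // still bigger than the whole Xbox SoC, let alone its GPU portion.
    std::printf("Hypothetical 7nm TU102: ~%.0f mm^2 vs XSX SoC %.2f mm^2\n",
                tu102_mm2 * shrink, xsx_soc_mm2);
    return 0;
}
```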
My point being: considering the cost and power constraints, AMD did really well here (especially considering where they are coming from). Poo-pooing this arch as the worst BS ever, simply because it doesn't trounce the highest-end GPUs, is ridiculous.
I have no doubt Nvidia could achieve similar (probably slightly better) results from a ~300 mm² 7nm Ampere, but that doesn't lessen AMD's achievement.
TL;DR:
I guess the glass is always half-empty to some. "Oh noes, the new console doesn't (possibly) beat the socks off the current $1,100+ GPU in absolutely everything, merely coming close. What a heap of utter garbage!" I wonder how you'd rate the PS4/Xbox One at launch, then? Those were miles worse compared to the desktop GPUs of their time.