Turing really seems like Nvidia shoehorning in a justification for consumers to buy these big chips. I'm not declaring anything (Nvidia is still in a solid position, AMD has yet to offer anything that should really concern them, and Intel will take time to get up and running well; I even wonder if we might see a big IP battle once Intel releases their GPU). But the fact that Nvidia felt they needed to push these cards to gamers (cards that, to me, look like pro cards, render cards, with RTX intended for those markets rather than for consumers until 7nm) makes me wonder if there's trouble with their 7nm plans, or their plans in general.

I don't know if Volta was ever supposed to make it to consumers, but we never got Volta gaming cards. Honestly, I think Pascal was still more than enough for gamers to hold things over until 7nm, perhaps with some price drops, or a port to 12nm, especially if they'd used the shrink to add more cores. So this makes me wonder whether we'll see 7nm from Nvidia anytime soon outside of maybe some high-end enterprise parts (like we got with Volta), or whether demand for Turing in the pro market turned out to be so weak that the only way to recoup the development cost and reach decent economies of scale was to push it to gamers too.
The implication is that we could see a modest amount of RT from the other side, aka AMD, a lot sooner than was thought possible. The argument that Nvidia's lead in RT tech is multi-generational falls flat.
Considering how half-baked this ray-tracing API stuff is, I had a hunch it would be best dealt with by implementing it in the traditional raster pipeline, initially in software, and then figuring out how the hardware needs to change to improve performance. So basically you'd be better off putting those transistors to work adding traditional raster cores and brute-forcing as much as you can. On top of that, the DLSS stuff seems like it would work the same way: you have a supercomputer come up with an algorithm that offers the best perceived quality for a few targets (for instance the native resolution of the display; viewing distance could factor in as well), and then you adjust the game settings to match, with no need for specialized fixed-function hardware other than maybe on the cloud/server side doing the actual deep-learning analysis.
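Just to make the "software first" point concrete, here's a toy sketch (entirely my own example in Python, nothing to do with DXR or Nvidia's actual implementation): the inner loop of ray tracing is plain intersection arithmetic that any general-purpose core or compute shader can brute-force. Dedicated RT cores accelerate exactly this kind of math plus BVH traversal, but nothing about it requires special silicon to function:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance t to the nearest hit, or None on a miss.
    direction is assumed normalized, so the quadratic's a == 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

# One primary ray per "pixel": camera at the origin looking down -z,
# a unit sphere three units away. Prints an ASCII silhouette.
WIDTH, HEIGHT = 32, 16
CENTER, RADIUS = (0.0, 0.0, -3.0), 1.0
for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        # Map the pixel onto the z = -1 view plane; the 0.5 factor
        # roughly corrects for terminal characters being ~2x taller
        # than they are wide.
        px = (2.0 * (x + 0.5) / WIDTH - 1.0) * (WIDTH / HEIGHT) * 0.5
        py = 1.0 - 2.0 * (y + 0.5) / HEIGHT
        d = (px, py, -1.0)
        n = math.sqrt(sum(v * v for v in d))
        d = tuple(v / n for v in d)
        hit = ray_sphere((0.0, 0.0, 0.0), d, CENTER, RADIUS)
        row += "#" if hit is not None else "."
    print(row)
```

Scale that loop up to millions of rays, bounces, and a BVH over real geometry and you get the actual workload; the question is just whether brute-forcing it on more general-purpose cores beats spending the die area on fixed-function units.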
And the bonus is that traditional games would also see a boost (thanks to the extra grunt of the added cores). Plus you could run DLSS on older games and tailor the settings to deliver that image quality as well. But as it stands, it's basically relegated to RTX cards, and they seem to have emphasized it as a forward-facing feature. It makes me wonder if this isn't their "fix" for the claims that Nvidia's cards lose performance over time as the company focuses driver resources on the more recent architectures and lets the older ones languish; some even claim it's outright intentional, and/or that they deliberately sabotage older cards' performance to push people toward newer ones. (I'm not saying I agree with those claims, but they do exist, and this way, instead of having people run GeForce Experience to figure out what settings to use, they'd just turn the DLSS algorithm on.)
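To show what I mean about the inference side being hardware-agnostic, here's another toy sketch (again my own made-up example, not the real DLSS network; the bilinear upscale and fixed sharpening weight just stand in for whatever the offline training would actually produce). The per-frame work is an ordinary post-process pass any GPU or CPU could run:

```python
import numpy as np

def upscale_2x_bilinear(img):
    """Double both dimensions of a 2-D grayscale image, bilinear-style."""
    h, w = img.shape
    p = np.pad(img, ((0, 1), (0, 1)), mode="edge")  # replicate last row/col
    out = np.empty((2 * h, 2 * w), dtype=float)
    out[0::2, 0::2] = p[:h, :w]
    out[0::2, 1::2] = 0.5 * (p[:h, :w] + p[:h, 1:w + 1])
    out[1::2, 0::2] = 0.5 * (p[:h, :w] + p[1:h + 1, :w])
    out[1::2, 1::2] = 0.25 * (p[:h, :w] + p[:h, 1:w + 1]
                              + p[1:h + 1, :w] + p[1:h + 1, 1:w + 1])
    return out

def unsharp(img, amount=0.5):
    """Sharpen by adding back the difference from a 3x3 box blur."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    blur = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

low = np.linspace(0.0, 1.0, 16).reshape(4, 4)  # stand-in low-res "frame"
high = unsharp(upscale_2x_bilinear(low), amount=0.5)
print(low.shape, "->", high.shape)  # (4, 4) -> (8, 8)
```

The expensive deep-learning part would happen once, offline, on the server farm; what ships to the player is just a cheap per-frame function like this with tuned weights, which is why I don't see why it should need specialized hardware on the consumer card.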