Right, though it doesn't really mean they have nothing to release, just nothing that's Turing-competitive. Does anyone expect AMD to have a Turing-competitive card based on Polaris, Vega, or Navi? We already know Navi is more than a year away.
Something could be released... maybe... but probably quietly, because it isn't going to compete with Turing on performance. It could offer a price/performance option compared to two-year-old Pascal, which is currently Turing's only competitor.
We do?
Too much is being bet on Navi here; there's an assumption that AMD can just add all the stuff Nvidia has added and then release a competitive card in late 2019 or early 2020. That seems very unlikely. DLSS alone will be very hard to replicate: it's not enough to just put some AI cores on your card, you also need the AI supercomputers to generate the code to run, and that in turn requires all the software that runs on the supercomputers. The AI cores on the card, the design of the AI supercomputers themselves, and all the code are Nvidia proprietary. Nvidia even owns the supercomputers and provides all the time on them required to generate DLSS algorithms. AMD would need all of those things to duplicate it.
In addition, by the time AMD has Navi, you can assume Nvidia will have the 3000 series with top-to-bottom DLSS support, most games will support DLSS, and it gives something like a free 50% performance boost over not using it. So without a DLSS competitor, how do you compete?
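For what it's worth, the "free 50%" figure is the post's speculation, but the arithmetic behind that kind of claim is simple: if DLSS-style upscaling renders at a lower internal resolution and reconstructs the final image, the shading saving is roughly the pixel-count ratio. A back-of-envelope sketch (the 1440p internal resolution here is an illustrative assumption, not Nvidia's published figure):

```python
# Rough back-of-envelope for why AI upscaling can look like a big
# "free" performance win: shading cost scales roughly linearly with
# pixel count. Numbers are illustrative assumptions, not Nvidia's.

def pixels(w, h):
    return w * h

native = pixels(3840, 2160)    # shade every pixel at 4K
internal = pixels(2560, 1440)  # DLSS-style: shade at 1440p, upscale to 4K

print(f"pixels shaded: {internal / native:.0%} of native")
print(f"theoretical shading speedup: {native / internal:.2f}x")  # 2.25x
```

The real-world gain would be smaller, since the upscaling pass itself costs time and not all frame cost is resolution-dependent, but it shows why a figure in that ballpark isn't crazy.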
That's not even thinking about ray tracing, which might be an MS standard, but that doesn't make it easy to do, and Nvidia is also using its AI hardware to denoise the image, so you really need to have your DLSS equivalent first.
Obviously it's not impossible for AMD to achieve this, but given that they haven't really released a fundamentally new architecture since GCN...
I'm not sure where you're getting that from. Maybe other people are expecting more, but most of what I read clearly indicates that Navi is a mainstream-class GPU, so I'm not expecting a big ray-tracing/AI aspect to it.
That's assuming DLSS actually lives up to the hype. AMD could release their own thing and leave it up to the game developers. Plus, I'd imagine Microsoft would probably play a role in that (seeing how it would benefit them: it'd benefit Xbox and it'd benefit PC), so it's not like there's nothing for AMD to leverage there. In fact, because of their work with Microsoft, I have a hunch that AMD will have plenty of insight into some of this (ray tracing especially), since Microsoft can dictate it more than anyone. And I'd expect Microsoft would want it to be fairly standard (meaning they want anyone making a GPU to be able to meet compatibility).
Why do you say that? We have no idea what Nvidia's plan is. We don't know when they'll be on 7nm, and we don't know what chips they'll make when they are. Where are you getting that from (that DLSS is a free 50% performance boost)?
You're assuming that Nvidia's ray-tracing + DLSS will be the best option. Personally, I'm not at all sold on it. There's potential, but that doesn't mean it'll be the end-all be-all. Honestly, right now DLSS just looks like another kind of cheat, and there are almost always alternatives for achieving the same thing. To me it also means it will inherently be flawed versus just rendering natively. Which isn't to say Navi is going to brute-force past the RTX cards' rendering capabilities. In fact, I don't even know why people are comparing them: the RTX cards are large pro-level GPUs where Nvidia is leveraging Microsoft's new ray-tracing API and their own AI image analysis so they can attempt to sell them to gamers, even though they didn't do that much to bolster their traditional rendering capabilities. Navi has been touted as a mainstream-class GPU; it doesn't need to chase all those features, it just needs to be a solid card for rendering games as they are now, at an affordable price.
What? GCN has changed quite a bit over time, and people keep refusing to see that AMD had to approach multiple markets with basically a single GPU design, so while they boosted the rendering capabilities, more of the GPU seems tailored for pro-level markets on their higher-end products. And the fundamental stuff has little to nothing to do with what you're talking about, so I'm not even entirely sure why you're commenting on that aspect. If they left Vega's traditional graphics pipeline intact, that would be a big boost for a mainstream GPU, and there are likely some tweaks they can make to improve it further (without drastically overhauling it). So just maximize the traditional rendering capability, limit bottlenecks (memory bandwidth), push clock speeds (but keep them out of the inefficient range), and they could make a GPU that matches if not outdoes Vega at much reduced size, power, and price. Gamers would buy it. Just like they bought Polaris.
Not that I understand all of it, but Vega remains only ~60-70% "functional" considering "all of the stuff" that remains inactive on those (1st gen?) chips. Is it not plausible that a more competent team can work this design into whatever Navi is, refine it, put together better software and actually score some developer interest in making it work? It would be interesting to see what a supposed full-performance Vega would actually look like, if such a unicorn could materialize.
Though I'm not sure if that matters much or is in any way relevant to whatever hard limits are determined by GCN design.
At the same time, they pretty much pulled that miracle "from nowhere" with Zen, so I dunno, we'll see. I still maintain that Zen is their real focus right now because it remains a far greater market than the one Nvidia is determined to dominate.
I'm not sure where you're getting that. Now, some of the big features (NGG fastpath, or whatever their advanced discard thing was called) aren't active, but I don't think that was really hardware so much as software. The Vega pipeline was very programmable: it could discard geometry early enough that it wouldn't stall the pipeline, so it would discard a good amount, cutting down the geometry, then process the culled geometry before final shading, all in a single clock cycle, where before they'd cull geometry and then have to process the new geometry, taking two clock cycles. There might have been another feature or two, but I got the impression a lot of that was more about how AMD's driver handles things (and Raja was building it to be more like Nvidia, where they add features through software first, then work to implement them better in hardware).
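The core idea being described, throwing away triangles that can't contribute before they hit the expensive shading stages, can be sketched in a few lines. This is a simplified software model of early primitive discard, not AMD's actual NGG hardware path:

```python
# Simplified model of early primitive discard: cull back-facing and
# zero-area triangles *before* the expensive shading stage, so the
# pipeline only spends work on geometry that can produce pixels.
# Illustrative sketch only, not AMD's NGG fastpath implementation.

def signed_area_2d(tri):
    # Twice the signed area of a screen-space triangle; the sign
    # encodes winding order (positive = counter-clockwise here).
    (x0, y0), (x1, y1), (x2, y2) = tri
    return (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)

def early_discard(triangles):
    # Keep only front-facing, non-degenerate triangles.
    return [t for t in triangles if signed_area_2d(t) > 0]

tris = [
    [(0, 0), (1, 0), (0, 1)],  # front-facing: survives
    [(0, 0), (0, 1), (1, 0)],  # back-facing (clockwise): culled
    [(0, 0), (1, 1), (2, 2)],  # degenerate (zero area): culled
]
print(len(early_discard(tris)))  # 1 triangle reaches shading
```

The win the post describes is doing this filtering inline rather than as a separate pass, so the culled set never occupies the later pipeline stages at all.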
Glo (?) has linked to patents that make me think of something like that, where they can process more geometry than the base GPU would seem to indicate. So no need for the complex software setup (although if they could get that working, along with the hardware geometry stuff, it could be a very substantial improvement, and the hardware aspect might make it easier to implement a version of the other as well), or potentially even major changes to the layout of the GPU.
Which, I get people going "yeah, well AMD has to prove it before you should take that seriously", and that's very true. I mean, they talked up the NGG fastpath stuff and then ditched it. As always, it comes down to how the product performs and at what price. Navi doesn't have to be some RTX-killing GPU; it just needs to be solid. AMD could make a 4096-stream-processor (or whatever they call theirs) GPU that's quite similar to Vega, push the clocks to around 2.0GHz, and aim for about 500GB/s (or higher) of memory bandwidth, and it should be a solid mainstream-level card without any particularly fantastic work necessary. Couple that with improvements to stuff like color compression, and Navi should easily be a solid gamer's GPU.
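Those hypothetical numbers actually pencil out above Vega 64. A quick sanity check, using the speculative spec from the post plus Vega 64's published figures (4096 shaders at ~1.55GHz boost) as a reference point:

```python
# Back-of-envelope peak-compute check for the hypothetical Navi config
# described above: 4096 shaders at ~2.0 GHz, ~500 GB/s of bandwidth.
# The Navi numbers are forum speculation, not a real spec.

shaders = 4096
clock_ghz = 2.0
flops_per_clock = 2  # one fused multiply-add (2 FLOPs) per shader per clock

tflops = shaders * clock_ghz * flops_per_clock / 1000
print(f"hypothetical Navi peak FP32: {tflops:.1f} TFLOPS")  # ~16.4

# Vega 64 reference: 4096 shaders at ~1.546 GHz boost, 484 GB/s HBM2.
vega64_tflops = 4096 * 1.546 * 2 / 1000
print(f"Vega 64 peak FP32: {vega64_tflops:.1f} TFLOPS")     # ~12.7
```

So on paper a ~2.0GHz, 4096-shader part would sit roughly 30% above Vega 64 in peak compute, which is why "Vega-like but higher-clocked and cheaper" isn't an unreasonable target for a mainstream card.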
The thing is, pro-market uses actually do tend to scale up (meaning they could address that by adding GPUs, making dual-GPU cards, and/or adding cards, whereas CrossFire/mGPU for games requires a lot more work for less return). And AMD's GPUs (Vega, for instance) are already pretty great at some of those tasks (so no heavily specialized hardware needed), so they don't necessarily need to be doing what Nvidia is. Even for stuff like ray tracing and AI inferencing, there are other options: maybe AMD does CrossFire/multi-GPU there as a way of sort of brute-forcing it, where even though graphical rendering might not scale up, other aspects might, so if you want ray tracing or to utilize some inferencing, you add a second card. That might bring a renewed push for improving multi-GPU rendering too (there are rumors about per-eye rendering for VR, so there are some potential use cases that might call for it).
There was an article I read (I think it was when I was checking some console-related thing), I believe on TweakTown or ExtremeTech (I'd have to see if I can find it again), where their source said AMD was aiming for a Q1 Navi release, but that 1H (so Q2) would be more realistic. Just like I think the PS5 is closer than people think, I think Navi is as well. AMD is looking to leverage their 7nm jump.