This is aimed at railven.
https://www.techpowerup.com/reviews/AMD/Vega_Microarchitecture_Technical_Overview/4.html
AMD actually added a feature that could help DX11 performance too (conservative rasterization). I'm not sure if it'll actually be backported to the DX11 implementation or if it's just about DX12. Especially noteworthy is that multi-view rendering is one of the things Nvidia talked up with Pascal, and it's related to what that video was talking about with the discarding. From what I gather, they basically check rendering from multiple angles and discard as much as they can up front, which keeps them from having to process and throw away a lot of the same stuff over and over. That specifically targets some of the tessellation stuff Nvidia pulled with Gameworks, where they'd jack the tessellation rate up to a ridiculous level for no visual benefit (there's a reason the video mentions culling geometry that's too small to even show up). And AMD has some Vega-specific discard path that is faster or discards a lot more, which apparently isn't implemented yet (although in general Vega has more discard capability than Fiji).
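To illustrate the "too small to even show up" culling idea: a sliver triangle whose screen-space bounding box misses every pixel sample center can never produce a covered pixel, so the rasterizer can throw it away early. Here's a toy sketch of that test (my own illustration, not AMD's or Nvidia's actual hardware path; sample centers assumed at x+0.5, y+0.5):

```python
import math

def covers_any_sample(tri):
    """Conservative small-primitive test: return False only when the
    triangle's screen-space bounding box contains no pixel sample
    center (centers sit at integer + 0.5), so it can be culled.
    tri is a list of three (x, y) screen-space vertices."""
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]

    def axis_hits_sample(lo, hi):
        # Is there an integer i with lo <= i + 0.5 <= hi ?
        return math.ceil(lo - 0.5) <= math.floor(hi - 0.5)

    return (axis_hits_sample(min(xs), max(xs)) and
            axis_hits_sample(min(ys), max(ys)))

# A sliver sitting between sample centers is cullable; a triangle
# spanning a sample center survives.
tiny = [(10.1, 10.1), (10.3, 10.2), (10.2, 10.4)]
big  = [(10.1, 10.1), (11.0, 10.2), (10.5, 11.2)]
print(covers_any_sample(tiny))  # False -> cull before rasterizing
print(covers_any_sample(big))   # True  -> keep
```

With insane tessellation rates you get huge numbers of triangles like `tiny`, which is exactly why early discard of them pays off.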
That article also strongly supports the idea that GCN5/Vega gives developers extra flexibility to manage issues in the pipeline (see slide 20). It will obviously be up to developers to really make good use of it, though. But it means that if they optimize, they can mitigate what would otherwise be stalls or work they'd be stuck waiting on (i.e. discarding instead of fully processing), and they can also adjust to get more work done (and it seems like they can do this at multiple stages, giving them more flexibility to manage things).
And that is Glo's point (even if I do think he's being pretty optimistic about how much it will improve things; from what I've seen, he's talking about the objective maximum potential there, not saying it will absolutely happen). There are a lot of features that could make Vega much, much better, but they require developers putting in the effort, and that will take time. The reason he's probably so optimistic is that, from what I'm seeing, this is all stuff that is part of the DX12 feature set moving forward, meaning Nvidia will almost certainly be doing the same (in some instances it already is), so it's not like developers are likely to ignore it. Unless Nvidia really starts blackboxing stuff and doing Nvidia-specific paths (while also putting in the work in the driver), developers putting in the work will benefit both vendors. If I'm not mistaken, developers have even spoken out about how blackboxed middleware just ended up causing their games to be broken on release (not even just on AMD hardware), and generally when we saw a game with Gameworks (especially the broken ones), the sequel had the same feature implemented by the developer instead, and performance vastly improved while quality didn't seem to suffer much, if at all.
As for the consoles, the PS4 Pro and Scorpio are really the first ones to offer even some of the new Vega features (but they don't offer the full set that Vega does, meaning it'll be the consoles after those that probably will; and I strongly disagree with your dismissal of how important being in the consoles is for AMD, as it is absolutely responsible for the gains we saw with just some of the DX12/Vulkan features being implemented in games like Doom). And that has been my point: Vega is another forward-looking design. This implementation of the Vega architecture might be a dud (for various reasons, which could include hardware issues that can't be fixed or improved with software updates and newer software) and never fully show its potential. But it might also be like the 7970, where software that utilizes its feature set could see large gains some time later (meaning, yes, years). Yes, that really doesn't help now (once again: always, always buy based on perf/$ at the time of purchase, not on future performance), but this stuff is the groundwork for where graphics processing is going (Nvidia is already saying it will support plenty of this in the future, RPM being one thing it specifically said Volta will bring). So dismissing the architecture as a failure seems premature when a lot of its features aren't really being utilized yet (I'd guess that even in games/engines that support some of this stuff, the use is quite limited right now).
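On RPM specifically: rapid packed math means keeping two FP16 values in one 32-bit register lane so a single instruction produces two results instead of one (Vega exposes this through packed instructions, v_pk_add_f16 and friends, if I recall correctly). A minimal software sketch of the idea (my own illustration, not actual GPU code; it relies on Python's struct 'e' format for IEEE half-precision):

```python
import struct

def f16_bits(x):
    # FP16 bit pattern of a float ('e' = IEEE half-precision).
    return struct.unpack('<H', struct.pack('<e', x))[0]

def bits_f16(b):
    # Float value of an FP16 bit pattern.
    return struct.unpack('<e', struct.pack('<H', b))[0]

def pack2_f16(a, b):
    # Two FP16 values in one 32-bit word, the way packed math keeps
    # a pair of half-precision operands in a single 32-bit lane.
    return (f16_bits(b) << 16) | f16_bits(a)

def unpack2_f16(w):
    return bits_f16(w & 0xFFFF), bits_f16((w >> 16) & 0xFFFF)

def packed_add_f16(x, y):
    # Emulates a packed FP16 add: one "instruction" yields two sums,
    # doubling throughput versus a single FP32 add.
    x0, x1 = unpack2_f16(x)
    y0, y1 = unpack2_f16(y)
    # pack2_f16 rounds each sum back to FP16, as hardware would.
    return pack2_f16(x0 + y0, x1 + y1)

w = packed_add_f16(pack2_f16(1.5, 2.0), pack2_f16(0.5, 3.0))
print(unpack2_f16(w))  # (2.0, 5.0)
```

The catch, and why developer effort matters, is that shaders have to be written to use FP16 where the precision is acceptable before any of that doubled rate shows up.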
That's why both Sony and Microsoft have been saying the Pro and One X are not really next-gen consoles: they're more about extending the current generation with a few extra features that won't make a big difference right now but will likely play a bigger part in the future. So that doesn't actually help Vega much either, since it'll still be years before this stuff is second nature to developers, but that's how AMD has had to approach its GPU design: be forward-looking and hope developers make good use of it. If not, they look worse (which is what has been happening; although plenty of that was down to things unrelated to their GPU performance, which they've been working to recognize and improve across the board; time will tell how things work out).
I feel this is one of the issues that has led to the console contracts not working out as some expected. Sony is notorious for not sharing its techniques with third-party devs. That's why PhyreEngine was such a surprise to many, but that was Sony trying to get devs to work with PS3 hardware that was already a nightmare to develop for.
The PS4 is now the leading console, and Sony's first-party games are most likely using all these fancy features (I mean, Sony was one of the primary driving forces behind how GCN came to be, if my memory serves me right), but Sony's games never see the light of day outside Sony consoles. MSFT was tasked with carrying DX12 into everyone's household, and I'd wager its abysmal performance this generation took a lot of wind out of AMD's sails.
Would AMD be doing better if Xbox was selling in line with PS4? Possibly. But the thing is, that almost certainly would have come at the expense of PS4 sales, so unless they get a higher margin from Microsoft, it likely wouldn't make a big difference.
From what I've seen, Microsoft functions a lot like AMD in that it tries to provide documentation but leaves the end results up to the developers (I'd say Microsoft has been much more developer-friendly than AMD, though AMD has had more of a focus on the hardware for obvious reasons; Microsoft also often has competing software, so it's not going to just give away its unique advantages either; and plenty of game developers just use middleware anyway). They do offer tools, but most of that isn't about eking out maximum performance and is more about heading off outright problems. In the case of the Xboxes, it's likely mostly about CPU threading and managing the memory setup (the 360 with its eDRAM and the One with its eSRAM), and then baseline GPU stuff (didn't they only start talking seriously about DX12 on the One like two years ago? Meaning it very clearly was not designed with DX12 in mind, even though it can support a fair amount of its features). Scorpio definitely has a DX12 focus (it has hardware specifically for scheduling DX12 calls to lessen the CPU load), and Microsoft has been talking up its analysis tool, which both helped it figure out what it wanted in Scorpio (hardware-wise) and also showed the performance of games already on the Xbox to get an idea of where they're bottlenecked. I would almost guarantee that will pay dividends.
Sony is definitely more interested in helping its own ecosystem. I believe a few of its star developers have helped create tools and offer insight (although that often seems aimed at companies willing to do PlayStation exclusives). Sony thinks its own low-level API on the PS4 is better than even Vulkan, but it supports Vulkan as well. I don't know whether a developer could license DX12 for games on the PS4 or not.