There are a variety of reasons, but most come down to AMD not putting the resources into making their software backend competitive with Nvidia's. I don't think even the 4 vs 6 geometry issue would be a big deal were it not for that. But on top of that, Nvidia has much better geometry culling (which is why AMD takes a big hit from tessellation): they cut out more of the geometry that isn't visible, and therefore serves no purpose at the time it's rendered, before it even hits their geometry processors, so AMD is bogged down even more than the simple 4 vs 6 would indicate. The ROP situation is also an issue for AMD. They routinely have about 2/3 the number of ROPs of the Nvidia cards they're competing with, or it hinders them so they end up competing against cards with similar ROP counts; see Vega 64 vs the GTX 1080, where the former has substantial advantages in a lot of areas but ends up competing with the 1080, which has the same ROP count. And depending on how those are clocked it can be an even bigger difference, since Nvidia has had a relative clock speed advantage for a while now, on top of wider geometry and higher ROP counts.
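To make "culling geometry that isn't visible" concrete, here's a minimal sketch of one common culling test, backface culling. This is purely illustrative of the idea, not how any particular GPU's fixed-function hardware actually implements it; the winding convention and coordinates are made up for the example.

```python
# Minimal sketch of backface culling in 2D screen space (illustrative only,
# not any GPU's actual implementation). With a counter-clockwise winding
# convention, a triangle wound clockwise faces away from the camera and
# can be discarded before it ever reaches the rasterizer.

def signed_area(a, b, c):
    """Twice the signed area of triangle (a, b, c); positive means CCW."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def cull_backfaces(triangles):
    """Keep only front-facing (counter-clockwise) triangles."""
    return [t for t in triangles if signed_area(*t) > 0]

tris = [
    ((0, 0), (1, 0), (0, 1)),   # CCW -> front-facing, kept
    ((0, 0), (0, 1), (1, 0)),   # CW  -> back-facing, culled
]
print(len(cull_backfaces(tris)))  # 1
```

The more triangles a pipeline can reject with cheap tests like this, the fewer its geometry processors and rasterizers have to touch, which is the advantage being described above.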
AMD has claimed that ROPs aren't a bottleneck for them, but few people are buying it. They also claimed there isn't a hard limit to the ratio with GCN, but then admitted it would take a lot of engineering to change from the ratio they had settled on. And big surprise, it kept being an issue once Nvidia started regularly offering more. They were allegedly working on things that would get around some of these limitations: an advanced culling system called the NGG Fastpath was supposed to show up with Vega, but got punted to Navi at the earliest. Navi also likely has basic changes to the geometry hardware; there are patents that indicate going from 4 base geometry units to 6, for instance, so per SP they should be getting 50% more geometry throughput. We'll see if they can improve things beyond that.
Another area where Nvidia objectively has an advantage is color compression. I'd guess texture compression as well, but I'm not sure on that. And while AMD claims the "Draw Stream Binning Rasterizer" and other features are already working on Vega, I'm skeptical. If they are, then it's another area where AMD's backend is behind Nvidia's. There's been some debate about tile-based rendering, with Nvidia likely implementing it with Maxwell (which is why Maxwell saw big gains even though it was on the same process as Kepler). AMD has claimed they support it too, but it doesn't seem to be nearly as good as Nvidia's implementation.
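For readers unfamiliar with tile-based rendering, the core idea is a binning pass: the screen is split into small tiles and each triangle is assigned to the tiles it touches, so each tile can later be rasterized and shaded out of fast on-chip memory instead of hammering DRAM. The sketch below shows that binning step in the simplest possible form; the tile size and the bounding-box test are hypothetical simplifications, not a description of Maxwell's or Vega's actual hardware.

```python
# Rough sketch of the binning step in tiled rasterization (illustrative,
# not any GPU's real implementation). Each triangle is assigned to every
# screen tile its bounding box overlaps; real hardware uses tighter
# triangle/tile intersection tests than a bounding box.

TILE = 16  # tile size in pixels (hypothetical)

def bin_triangles(triangles, width, height):
    """Map (tile_x, tile_y) -> indices of triangles overlapping that tile."""
    bins = {}
    for i, tri in enumerate(triangles):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        # Clamp the bounding box to the screen, then convert to tile coords.
        x0 = max(0, min(xs)) // TILE
        y0 = max(0, min(ys)) // TILE
        x1 = min(width - 1, max(xs)) // TILE
        y1 = min(height - 1, max(ys)) // TILE
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                bins.setdefault((tx, ty), []).append(i)
    return bins

tris = [((2, 2), (10, 3), (4, 12)),   # fits entirely in tile (0, 0)
        ((5, 5), (30, 6), (6, 30))]   # bounding box spans four tiles
bins = bin_triangles(tris, 64, 64)
print(sorted(bins))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Because all the geometry touching a tile is known up front, the tile's framebuffer traffic stays on-chip, which is where the big bandwidth and power savings of a good tiled implementation come from.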
I think their hardware would've been OK, but AMD struggled to get developers to properly utilize it outside of consoles, and the console work rarely translated to the PC space for whatever reason (although we did get some glimpses here and there; look at Doom's Vulkan renderer, for instance). And Vega was, I believe, extra compute heavy, so it consumed more power for graphics use than it would likely ever be able to realize, and it was going up against an exceptional rasterization architecture from Nvidia in Pascal.
Supposedly AMD recognizes (or recognized) this issue and has been pumping resources into it. It's also why AMD pivoted towards an open source focused software development model. That's good and bad: good in that it opens things up and is what developers want, bad in that it takes time for that development to provide tangible benefits (it's a long term thing). But AMD has been working to bolster it some themselves. Their Linux (open source) support seems to have massively improved starting with the Radeon VII/Vega 20.
Raja Koduri, the former head of AMD's graphics division, was pushing for overhauling the software backend and working to improve it (it sounds like he was trying to make AMD more like Nvidia, which has long had robust software support), but then Vega came out and underperformed, and he left. AMD had also been deliberately limiting the resources of their GPU division for years while trying to get their CPU division back on track. So the few resources the GPU division had often got pushed to the semi-custom business (which was doing the game consoles), and allegedly they chose to push resources towards developing Navi instead of getting Vega working like it was supposed to. I personally think that was a good idea: Vega was already out, so it'd be better to come out swinging harder with your next products than to try to fix older ones. The instances where AMD got improvements out of past hardware, sometimes quite significant ones, didn't really help them in sales, and Vega was considered a failure by most people, at least in the dGPU gaming space, so it'd be better to make future products look good from the outset than to salvage Vega. We'll see how that turns out. I'm expecting improvements, but not miracles. AMD needs sustained development, and needs to execute products better; even when they have good products they often do something to shoot themselves in the foot, like the PCIe power draw issue with Polaris at launch.
That said, Vega was a good chip for some markets (it really isn't a horrible gaming chip, just hot and power hungry), and it's an absolute monster for some compute tasks. But I think it's inherently flawed, and it was made worse by being so tied to expensive HBM memory, which limited AMD's ability to price it competitively. There were other blunders too. AMD seems to do little to no binning/testing of their GPUs, so they tend to ship far too high a voltage, which causes extra power draw and extra waste heat for no reason, and it even lowers performance, especially in sustained loads. So their products use more power and put out more heat, which makes them noisier or forces all-in-one liquid coolers that add even more cost, and their performance ends up worse than it should be. You can often lower the voltage by 100mV and get significantly less power draw, less waste heat, and better sustained performance, because the GPU doesn't throttle as much.
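A quick back-of-the-envelope calculation shows why a 100mV undervolt matters so much. Dynamic switching power scales roughly with the square of voltage (P ≈ C·V²·f), so even a modest voltage drop at the same clocks cuts power noticeably. The voltages below are made-up examples, and this ignores leakage and other real-world effects:

```python
# Back-of-the-envelope illustration of why a 100 mV undervolt matters.
# Dynamic switching power scales roughly as P = C * V^2 * f; leakage is
# ignored here, and the voltages are hypothetical examples, not actual
# Vega operating points.

def relative_dynamic_power(v_new, v_old):
    """Dynamic power at v_new relative to v_old, at the same frequency."""
    return (v_new / v_old) ** 2

stock, undervolted = 1.15, 1.05  # volts (hypothetical stock vs tuned)
ratio = relative_dynamic_power(undervolted, stock)
print(f"{(1 - ratio) * 100:.0f}% less dynamic power")  # 17% less dynamic power
```

And since a cooler chip throttles less, that power saving often shows up as *higher* sustained clocks, which is why undervolted AMD cards can outperform stock ones.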
So the simplest answer: AMD, largely due to the disparity in software/backend, doesn't utilize their hardware as well as Nvidia does.