Why do I even bother? So all those times in the past when ATI cards or NV cards ran faster in DX7/8/9.1/10/11, it was all just a random fluke and had nothing to do with architecture-specific optimizations made by ATI/NV's driver teams?
No, you're right in this instance. The driver team CAN optimize their drivers for specific games, for the hardware, and even for the API. Drivers are the lowest level of software that interacts directly with the hardware, so the brunt of the optimization takes place there.
This is a direct contradiction of what you implied earlier, when you were talking about developers optimizing for architectures. That is only true for low-level APIs.
Just think: all of these recent PC ports like Dying Light, Dragon Age: Inquisition, AC Unity, etc. were all developed on consoles, both of which have AMD GPUs. So if your theory were correct, the optimizations the developers made for the GCN-based consoles should transfer to the PC, right?
Wrong. As I said, DX11 is specifically designed as an abstraction layer that spares developers from having to program for several architectures at once.
All those GE/GW titles where AMD/NV tried to get better performance for specific features like global illumination, SSAA via DirectCompute, HBAO+, and PCSS+, all done in DX11, had nothing to do with catering to those brands' specific architectures? The R9 290X pulling away from Kepler and closing in on Maxwell, while Kepler simultaneously falls further behind Maxwell: all of that is just a random fluke and has nothing to do with drivers from AMD/NV?
Again, you're contradicting the argument you yourself presented earlier :whistle:
Your CPU argument is going nowhere, because HardOCP has already tested 290X CF against both 970 SLI and 980 SLI for real-world gameplay smoothness at 1440p and above.
I don't need to rely on HardOCP's subjective smoothness tests. I've used SLI for years, so I have my own impressions of it.
When SLI works as intended, it is buttery smooth, and that's the case for 95% of games. Only a few odd games like Watch Dogs have broken SLI implementations.
Like I said, even in AC Unity you had to run FXAA on 970 SLI. So the last thing anyone needs to worry about with a 295X2, 970 SLI, or faster is CPU bottlenecks in modern games at 1440p and above on a 4.4GHz+ Core i5/i7 system. We even have users here with max-overclocked 980 SLI who report almost 0% increase in performance going from a 4.0GHz 5960X to a 4.4GHz 5960X in games. This is not some coincidence; it's because most games are GPU limited today.
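The "GPU limited" claim is easy to see with a toy model: a frame is only done when the slower of the CPU and GPU finishes its share of the work, so speeding up the CPU does nothing once the GPU is the longer pole. All the millisecond figures below are invented purely for illustration, not measurements of any real card or chip.

```python
# Toy frame-time model (illustrative only, made-up numbers):
# frame time is bounded by whichever of CPU or GPU takes longer.

def fps(cpu_ms, gpu_ms):
    """Frames per second when per-frame CPU and GPU work overlap."""
    return 1000.0 / max(cpu_ms, gpu_ms)

gpu_ms = 16.0                            # hypothetical GPU cost per frame at 1440p
cpu_ms_40 = 8.0                          # hypothetical CPU cost at 4.0GHz
cpu_ms_44 = cpu_ms_40 * 4.0 / 4.4        # same work, ~10% higher clock

print(fps(cpu_ms_40, gpu_ms))            # 62.5 fps, GPU limited
print(fps(cpu_ms_44, gpu_ms))            # still 62.5 fps: the overclock gains nothing
```

The overclock only starts to matter once the hypothetical CPU time exceeds the GPU time, which is exactly the "almost 0% increase" those 5960X users are reporting.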
Not necessarily. Clock speed is only one factor; the others are core scaling and driver overhead. AMD's driver apparently has issues scaling above two cores, as several reviewers have found.
That's why a GTX 980 with a 2600K is able to beat an R9 290X with a 5960X in Call of Duty: Advanced Warfare.
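The core-scaling point can be sketched with a simple Amdahl-style toy model: if a driver serializes a chunk of per-frame CPU work on one thread, extra cores only shrink the parallel part, and the serial chunk eventually dominates. The millisecond values and the "high/low overhead" split are invented assumptions for illustration, not measured driver numbers.

```python
# Amdahl-style toy model (illustrative only, invented numbers):
# per-frame CPU time = serial driver work + game work split across cores.

def cpu_frame_ms(serial_driver_ms, parallel_ms, cores):
    """Hypothetical CPU frame cost with a single-threaded driver portion."""
    return serial_driver_ms + parallel_ms / cores

for cores in (2, 4, 8):
    high = cpu_frame_ms(serial_driver_ms=10.0, parallel_ms=8.0, cores=cores)
    low = cpu_frame_ms(serial_driver_ms=4.0, parallel_ms=8.0, cores=cores)
    print(f"{cores} cores: high-overhead {high:.1f} ms, low-overhead {low:.1f} ms")
```

With the high-overhead numbers, going from 2 to 8 cores only moves the frame cost from 14ms to 11ms, because the serial 10ms floor never shrinks; that's the shape of "doesn't scale above two cores".
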
It's remarkable that you keep claiming CPU bottlenecks and horrible AMD DX11 driver overhead, yet after countless games tested, the 290X is now faster than the GTX 970 at higher resolutions, and the same goes for the 295X2 vs. 970 SLI.
Not really. Perhaps if you test the GTX 970 at reference clocks, but who's running a GTX 970 at such low clock speeds? Aftermarket GTX 970s are either equal to or faster than the R9 290X while drawing a lot less power.
As many have repeated: what is the purpose of your thread? Do you want people to help you pick the best 970/980 cards, or do you intend to keep stoking NV/AMD flame wars in your own thread? It doesn't seem like you are genuinely asking people to help you pick fairly between 970 SLI, 980 SLI, or a single 980 as a stop-gap. There should be NO discussion about AMD at all, since everyone on this forum knows you are not interested in AMD products. So why do you keep bringing them up in your own thread!?
Where did I mention AMD in my OP? Other people brought up AMD, not me. I'm merely replying.
The only AMD part I would consider buying is a 390X. But since that's not due out till summer, I can't wait that long. The Witcher 3 will be out this May, and there's no way I'm missing that game.