That's all you've been doing with regard to AC for the last week, isn't it?
What I have been doing is trying to get to the truth. Despite what the Oxide dev stated, I had a hard time believing that Maxwell 2 lacked the capability to do asynchronous compute.
Looks like my skepticism may have paid off.
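For anyone following along, it's worth being concrete about what "asynchronous compute" even means at the API level: in DX12 a game creates a separate compute command queue alongside the normal graphics (direct) queue, and whether work on the two queues actually overlaps on the GPU is up to the hardware and driver. Here's a minimal, illustrative D3D12 sketch (the helper name CreateQueues is just mine, and it assumes an ID3D12Device created elsewhere):

```cpp
// Illustrative sketch (not from the thread): create a normal graphics (direct)
// queue plus a separate compute queue. The API only expresses the *opportunity*
// for overlap; concurrent execution is up to the GPU and driver.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Hypothetical helper; assumes 'device' came from D3D12CreateDevice elsewhere.
HRESULT CreateQueues(ID3D12Device* device,
                     ComPtr<ID3D12CommandQueue>& graphicsQueue,
                     ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Direct queue: accepts graphics, compute, and copy command lists.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    HRESULT hr = device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));
    if (FAILED(hr)) return hr;

    // Compute queue: command lists submitted here may run alongside work on the
    // direct queue, if the hardware actually schedules them concurrently.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    return device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}
```

The whole Maxwell 2 argument is about what happens after submission: whether the compute queue's work genuinely executes concurrently with graphics, or whether the driver ends up serializing it.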
Ya, so more evidence that you think GPUs should focus on maximizing short-term performance and not worry about next-generation games or next-generation VRAM requirements. Then again, why do you care so much about how Maxwell will perform in DX12 or about its ACE functionality? It contradicts your statements that you don't think GPUs should be forward-thinking in their design.
There's nothing wrong with being forward-thinking. It becomes a problem when you develop features intended solely for next-gen use four or five years down the road, features that take up die space but sit essentially inert in the meantime.
That's too big a time gap for those features to be useless, if you ask me.
This was covered years ago -- AMD cannot afford to spend billions of dollars redesigning brand-new GPU architectures the way NV can, given AMD's financial position and the fact that its R&D budget also has to finance CPUs and APUs. NV can literally funnel 90%+ of its R&D into graphics ONLY.
And I understand that, but it still doesn't change the fact that this played a major part in AMD losing a lot of market share to NVidia. Although many people buy GPUs with the intent to keep them for 3 years or more, that long-term advantage DOES NOT MATTER if NVidia keeps beating AMD in the initial phase, which is the most essential phase because it's what shapes public perception the most.
After Kepler was released, nobody but industry insiders could have known that its performance would diminish so rapidly because of its meager compute capabilities, while GCN's would rise as developers started to use compute shaders more and more in their engines.
But by then it was too late anyway.
Therefore, AMD needed to design a GPU architecture that was flexible and forward-looking when they were replacing VLIW, which is why GCN was designed that way from the start. When the HD7970 launched, all of that was covered in great detail. Back then, I still remember, you had GTX580 SLI and you upgraded to GTX770 4GB SLI. In the same period, HD7970 CF destroyed the 580s and kept up with the 770s, but NV had to spend a lot of $ on Kepler. Then NV moved to Maxwell and you got 970 SLI and then 980 SLI, while AMD simply enlarged the HD7970 with key changes into the R9 290X.
I already addressed this above. As long as NVidia is able to dominate AMD in the initial phase of any new product release, AMD has no chance. It's the initial phase that creates the lasting impression.
By the way, are you keeping tabs on my hardware changes? :sneaky:
But that's why I keep asking, why do you in particular care about DX12 and AC? It's not as if you'll buy an AMD GPU and it's not as if you won't upgrade to 8GB+ HBM2 Pascal cards when they are out. Therefore, for you specifically, I am not seeing how it even matters and yet you seem to have a lot of interest in defending Maxwell's AC implementation, much like to this day you defend Fermi's and Kepler's poor DirectCompute performance. That's why it somewhat comes off like PR damage control for NV or something along those lines. Since you will have upgraded your 980s to Pascal anyway, who cares if 980 hypothetically loses to a 390X/Fury in DX12? Doesn't matter to you.
Well I'm partial to NVidia, but I also care about truth. NVidia has gotten an unfair shake on the internet lately because of irresponsible remarks by a certain developer, and a certain ex ATi employee with an agenda.
But I don't know what you're talking about when you say I defend Fermi's and Kepler's poor DirectCompute performance. Fermi had very good compute performance, and Kepler's was probably below average for GK104 and above average for GK110.
But by the time compute shaders really became an industry trend, NVidia had released Maxwell which has very strong compute performance. So once again, NVidia was on time when it comes to anticipating industry trends.
Even when AMD had the HD4000-7000 series and massive leads in nearly every metric vs. NV, AMD's GPU division was hardly gaining market share, and in the rare cases where they did gain market share (the HD5850/5870 6-month period), it was a long-term loss-leader strategy with low prices, and frankly by the end of the Fermi generation NV gained market share. In other words, NONE of AMD's previous price/performance strategies worked to make $. Having 50-60% market share and making $0 or losing $ is akin to having 50-60% of "empty market share." In business terms, that's basically worthless market share. It's like Android having almost 90% market share worldwide while Apple makes 90% of the profits.
Yep, AMD has historically been subjected to bad management year after year.
1) AMD implemented an optimization in the drivers to vary the tessellation factor, since the performance hit was much greater on AMD's hardware, which cannot handle excessive tessellation factors;
Well, this might have been enabled in the HardOCP review then, if it's on by default in the drivers, which would have inflated the frame rates for AMD.
2) Actual user experience. I trust that far more than any review HardOCP does.
I've heard people accuse HardOCP of being biased towards NVidia or AMD, usually depending on the results of a particular review :sneaky:
Good thing there are objective professional sites we can rely upon to tell us the truth:
Techspot tested the Witcher 3 shortly after release. The HardOCP review that I posted was done months after release, with many more optimizations for GW in place, so the two aren't really comparable.
TressFX seems far more efficient than HairWorks as well (or, alternatively, it just doesn't use pointlessly high tessellation factors that kill performance).
TressFX is terrible compared to HairWorks. In Tomb Raider, Lara looks like she's underwater or in outer space, as her hair seems to defy gravity, and she's the only entity that uses it.
In the Witcher 3, on the other hand, there are multiple entities with HW enabled that look far superior to how they look with it disabled. Geralt himself is probably the worst example of HW in the game, if you ask me.
But other creatures like wolves, horses, and monsters look way better with HW enabled.