That may be true, but AC Unity and Far Cry 4 will be a way bigger draw than any of the games AMD has bundled right now. The 970/980 are winning on features, performance/watt, and now game bundles. AMD needs to drop the R9 290 to $249 MSRP and the R9 290X to $279, which is what I said they should have priced them at when the 970/980 launched. Now NV has all the momentum while AMD is just reactionary. AMD better have the most epic R9 300 series in the wings and get their mobile dGPU strategy in order. Maxwell is firing on all cylinders, and NV hasn't even brought out any GM200 derivatives or the GTX 960/960 Ti.
With the HD 7000 series, AMD at least beat NV to launch by 6-9 months, and there was a lot of demand due to mining that carried over to the R9 200 series. Now that mining is dead, there is no secondary demand for AMD GPUs for non-gaming purposes. The biggest worry is that if the R9 390/390X flop, the GM200 flagship will go way past the 780 Ti's $699 price. 🙁
It was 2 months. The HD 7970 didn't hit retail until January 9; it wasn't available until then. The GTX 680 launched March 22. That is 2 months and 13 days, nowhere near 9 months. That's a huge stretch.
As for the actual chip, I'm not even sure how late they really were. GK104 was ready much earlier than that. The issue was that Nvidia was scared to jump first. They knew for some time that they would have to compete with GK104 but had no idea where it could be slotted; it all depended on how well AMD's new architecture would perform. Nvidia knew GK104 turned out to be a fairly potent chip; they even claimed it surpassed their expectations. But even so, their original hope was to at least be able to release it as a GTX 670 Ti. They weren't confident at all in this and really wanted to see where AMD's lineup would land. Once Tahiti benchmarks started coming out, you can bet folks at Nvidia really did start to smile, because they knew they could not only match that performance but actually edge it out. That's when they decided to finalize the clocks and ship the chip as the GTX 680. They had to move pretty quickly to make that happen in the 2 months after Tahiti launched.
It's interesting that people are bringing up efficiency lately, suggesting Nvidia was the one that pushed it into an important metric with the launch of Kepler. While it's true that Nvidia started marketing and pushing efficiency really hard with the Kepler generation, that's not where it all started. Not at all. It actually started a few years earlier, when AMD was the king of performance per watt. Nvidia's ambitious designs were power hogs, and they caught a lot of heat for it (pun intended). When Fermi launched, there wasn't a thread anywhere discussing it without someone bringing up how much more efficient the 5800 series was. Almost every review touched on how much power Nvidia's flagship sucked down.
This wasn't just a hot topic; it also cost them big time. They lost a huge chunk of mobile dGPU market share, which was probably the most concerning part for Nvidia. That is a very important market, arguably the most important. The power-hog image was something Nvidia desperately wanted to break. It was a major gripe with one of their most important architectures, and I think Nvidia takes these things very, very seriously.
Nvidia got the GTX 580's power consumption down somewhat, but much of the damage was already done. You can bet a lot of effort went into their next architectures to maximize performance per watt as much as possible.
So when Kepler launched, performance per watt was something they took very seriously. It was also an achievement they were very proud of. Nvidia went to great lengths to ensure their GPUs wouldn't have out-of-control power consumption. The boost feature and the voltage lock were all countermeasures to keep tight control. They were determined to change that image. Most people were angry about the voltage lock and thought it was meant to spite overclockers, when it was really aimed at the vendors. These vendors try to one-up one another and were letting things really get out of control. With Fermi, some vendor models managed to make an already bad situation look absolutely terrible, and Nvidia took the bashing for it. The voltage lock was a countermeasure in their quest to improve efficiency.
What's even more interesting is that with Maxwell, we can see it happening all over again: vendor-overclocked cards trying to one-up one another by manipulating the power limits. And just as before, Nvidia is catching all the heat for it. This time, Nvidia was vocal about what was going on. But some people now have these few articles to point to when claiming Maxwell isn't efficient. There's a good chance Nvidia will start to lock things down once again, all in the name of efficiency.
Their drive to improve efficiency started once things really got out of control. I don't blame them; it really was an area they needed to focus on heavily. Their achievement was something Nvidia was very proud of, and I think rightfully so, since it has paid off rather well for them. Especially in mobile, a market where they were significantly behind just a few generations ago. With GK104, Nvidia took some extreme measures to keep consumption down, and Maxwell took some of those measures down to the transistor level. This new focus isn't going away any time soon. As we can see with Intel, it's hard to stop once you start chasing that rabbit. The payoff has been substantial, probably a lot more than some people might think.