I love AMD and support them, but it's painful to watch them blunder into walls.
Bulldozer was a mess. Piledriver is not much better.
http://techreport.com/r.x/amd-fx-8350/x264-power-task-energy.gif
http://media.bestofmicro.com/Y/0/357624/original/energy used.png
We are barely better than a Phenom X6 in terms of efficiency here. Going from 45 nm to 32 nm (and a second revision of the core) should have brought a substantial efficiency gain. Instead we got only a little more efficient, and the A-series is less efficient than the Phenom X6 or the Phenom X4 (non-IGP). AMD would probably have gotten better perf/watt by just shrinking the old chips and making a few targeted improvements. So much R&D for this. Bulldozer should have been canned from the beginning and AMD should have started from scratch on something that would work. It's like they deliberately launched a P4. And at the price Bulldozer launched at?
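For anyone skimming those charts: what they measure is the energy used to finish the job, not peak power draw, and that is the right metric. A quick sketch of the arithmetic in Python (every number is invented purely for illustration, not pulled from the linked reviews):

```python
# Task energy = average power draw * time to finish the workload.
# All numbers below are invented for illustration; they are NOT from the
# techreport / Tom's charts linked above.

def task_energy_wh(avg_power_w, task_seconds):
    """Energy (in watt-hours) used to complete a fixed workload, e.g. an x264 encode."""
    return avg_power_w * task_seconds / 3600.0

chips = {
    "hypothetical 45 nm quad": (95.0, 1000),   # average watts, seconds to finish
    "hypothetical 32 nm octo": (125.0, 700),
}

for name, (power_w, secs) in chips.items():
    print(f"{name}: {task_energy_wh(power_w, secs):.1f} Wh")

# A chip that draws more power can still use less total energy if it finishes
# sooner -- which is why a node shrink plus a new core should show up clearly here.
```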
GCN is great and I would have bought a laptop with a 7870M in a heartbeat, but they could not be found. The only two GCN chips you really see (disregarding Solar System) are the 7730M (fairly weak, weaker than the 640M) and the 7970M (which is decidedly not mainstream). Now AMD is launching Solar System a year late, and even the high-end cards (the 384-shader models, not the 88xx series) are generally weaker than or equal to the 650M (which is itself being replaced by the much more powerful 750M). The 8870M or 7870M was perfect for laptops, using less power for impressive performance, and AMD again didn't execute well: the notebookcheck reviews of the Samsung Series 7 Chronos (15 and 17 inch) show the 8870M doing generally worse than the 660M despite much better synthetic scores (3DMark, Unigine). Only now are we really seeing GCN in laptops. And where is the mid-high end? We jump from 640 shaders straight to 1280?
AMD somehow lost momentum with the 7000 series. The 6770M was the best mainstream GPU (and the most common of their chips) last generation, getting much better performance than the 540M (which most mainstream Nvidia laptops had) in a similar thermal envelope. This generation Nvidia launched the 650M and AMD has nothing that really competes in that segment of the market, losing tons of market share. The 7670M is WORSE than the 6770M, running at a lower core clock with DDR3 instead of GDDR5. WHY, AMD? What happened?
AMD failed to put their money where it needed to go. The 7970M got a tremendously bad rap among gamers because of Enduro issues that took almost a year to fix (initially the problem was so bad that in certain games the 7970M performed worse than the 6990M). Enduro still has problems deciding which GPU to use; that's not really a problem for anyone who knows how to open Catalyst Control Center and set the game exe to the 'high performance' option, but it is a problem for the average Joe who just wants to play games. AMD should have no trouble getting this right. They have substantial experience with both GPUs AND CPUs (which Nvidia does not) and should be able to do more with graphics switching than Nvidia. Having bought ATI years ago, they should have seen this coming and been first to market with the technology. The driver failure continues with Hybrid CrossFire, which is even worse than regular CrossFire (and regular CrossFire isn't as bad as all the review sites make it out to be: stick vsync on or use a frame limiter and half the problems are gone, as the sketch below illustrates). Hybrid CrossFire is often WORSE than just using the discrete card. WORSE THAN THE DISCRETE card sometimes. Even when it's better the gain is marginal and the stuttering is horrendous. Asymmetrical CrossFire between the VLIW4 7660G and the VLIW5 7670M with different shader counts is difficult to pull off; hopefully Richland and Solar System, with matching shader counts, can fix this.
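On the vsync/frame-limiter point: CrossFire microstutter is largely a frame-pacing problem, and capping the frame rate evens out when frames actually get presented. A toy sketch of the concept in Python (this is not AMD's driver code, just the basic idea of a frame limiter):

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # seconds allotted per frame at the cap

def render_frame():
    # Stand-in for real rendering work; real CrossFire frames arrive at
    # uneven intervals, which is what players perceive as microstutter.
    time.sleep(0.005)

def run_capped(num_frames=120):
    for _ in range(num_frames):
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        # Sleep off whatever is left of the frame budget so frames are
        # delivered on a steady cadence instead of in irregular bursts.
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)

if __name__ == "__main__":
    run_capped()
```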
Hybrid CrossFire was a great idea, but they did not throw enough resources at the problem to get a good result. The other problem for AMD is pricing. When you can get an i5 + 730M system (the 730M is almost as good as a 650M) with 8 GB of RAM and a 750 GB HDD for $630, a lot of Trinity becomes irrelevant. That makes it *really* hard to recommend a Trinity system to someone on a $600 budget; for $30 more they get a tremendous upgrade.
http://www.amazon.com/Acer-Aspire-V3...cer+v3+i5+730m
I don't hate AMD, but it's really painful to watch them. They are actually doing quite a good job against a giant like Intel; the problem is that they need to do an even better job. Asking for that, though, is asking a lot given their size and resources.
Intel, however, is guilty of nickel-and-diming their customers (socket changes, selectively disabled features, etc.), but they can get away with it simply because they hold the performance crown. You can't deny that Intel has been relentless in improving efficiency and their IGP (which is quite impressive considering they have had to develop all the IP on their own). Besides price (and the nickel-and-diming and naming games), it's harder to criticize Intel. They haven't made any real blunders that seriously hurt them in the past couple of years (though they get -10 points for having a Haswell chipset issue two years after the Sandy Bridge SATA issue). Their execution has been much better and much more consistent over the past 5 years or so. They have been giving AMD the death of a thousand cuts (improving mobile IGP, power efficiency), slowly and relentlessly overpowering AMD.
People are going to complain that Intel is not pushing performance (and that they suck because Haswell isn't delivering a 15% improvement over Ivy Bridge). That's true, and while I agree it's bad, you have to look at it from a non-consumer perspective. Intel simply can't improve performance by leaps and bounds over AMD, because then AMD would go out of business and Intel would become a monopoly. What Intel has to do is simply wait for AMD to catch up. From an economic perspective it's also not worth it for Intel to spend a ton of money pushing performance and taking another 5% market share from AMD if the cost of that R&D is more than the profit from the increased market share. Intel is playing it smart right now. They are pushing efficiency because their biggest threat is ARM (the Cortex-A15 is uncomfortably close to Intel's low end in performance and much better in efficiency; see the 3DMark for Android results on this website), and mobile is the future, smaller profit margins or not. This also leaves them room, if AMD pulls a Core 2 Duo miracle, to simply add more cores and retake the crown.
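To put that cost-benefit argument in concrete terms, here is a back-of-envelope version in Python. Every figure below is invented purely for illustration; none of them are real Intel numbers.

```python
# Hypothetical figures only -- the point is the comparison, not the values.
extra_rd_cost = 2.0e9     # extra R&D spend to push performance hard
market_size = 30.0e9      # annual revenue pool being fought over
share_gained = 0.05       # the "another 5% market share" from the paragraph above
profit_margin = 0.25      # margin earned on that extra revenue

extra_profit = market_size * share_gained * profit_margin
print(f"Extra profit from +5% share: ${extra_profit / 1e9:.2f}B")
print(f"Extra R&D cost:              ${extra_rd_cost / 1e9:.2f}B")
print("Worth it" if extra_profit > extra_rd_cost else "Not worth it")
```

With these made-up numbers the extra share earns far less than the R&D costs, which is the whole reason a dominant player coasts instead of sprinting.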