Yes, they did. Check out
this May 2005 article from Tom's Hardware. Note the last paragraph.
I disagree; it really
is a steaming pile of crap. For the vast majority of desktop applications, a simple die-shrink of Thuban to 32nm would have outperformed the FX-8150. Heck, the FX-8150 fell behind the already-shipping Phenom II X6 1100T in many benchmarks. When you can't even beat your own previous product, that's hard to describe as anything but a failure.
Incremental improvements to the K10 architecture would have been far, far cheaper than Bulldozer to develop, and would almost certainly have had better results. AMD bet everything on high integer throughput, hoping to win the server market, and lost big.
Not if it's just a CPU. But for the total system TDP of a PC with good graphics capabilities, 300W is not at all unreasonable. Look at the Mac Pro, with a big Intel Xeon (4 to 12 cores) and
two Tahiti GPUs - all cooled by a single large heatsink and fan. That has to be over 400W when fully loaded. And OEMs had no compunction about shipping GK110-based cards with TDPs of 200-250W; add in the CPU and you're easily up to 300W or more. The fact that all the heat in a 300W APU would come from one chip makes the design of the cooling system easier, not harder.
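To put rough numbers on that, here's a quick back-of-the-envelope sum in Python; the component figures are my own assumed values for a typical gaming-class desktop, not official specs.

```python
# Back-of-the-envelope full-load power budget for a desktop with a
# discrete high-end GPU. All figures are assumed/typical values,
# not measured or official TDPs.
component_tdp_watts = {
    "GK110-class graphics card": 250,   # OEM cards were commonly rated 200-250W
    "quad-core desktop CPU": 95,
    "motherboard, RAM, drives, fans": 40,
}

total = sum(component_tdp_watts.values())
print(f"Estimated full-load budget: ~{total}W")  # ~385W with these assumptions
```

Even with a more modest GPU you land around 300W, so a single 300W APU asks the cooler to handle roughly the same heat, just concentrated in one package.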
Again, this will only be palatable if the performance is really competitive, which will require 16nm FinFET, strong performance from the Zen CPU architecture, and HBM shared memory.