I understand you guys think I'm drinking the Kool-Aid here, that I'm hopped up on Uncle Bob's krazy juice, but I'll keep trying to make my point, because based on the replies I don't think you are really seeing it yet.
(Though I admit that even if/when you do see my point, you may still deride me as hitting the bottle too much.)
Because the amount of chips from GF that could withstand the normal validation of a 5GHz 220W chip was minuscule?
That's possible, but I see you are getting hung up on the clockspeed binning rather than the specific TDP tier.
It is true, of course, that power consumption is a factor in binning.
But ignore the clockspeed and just look at the power consumption bin.
What kept AMD from opening up the binning window for the existing 8150 Bulldozer chips from day one to include all chips that consumed enough power to require a TDP not of 140W but of 220W?
It is true that with the 9590, AMD used the extra TDP headroom to also boost clockspeeds in their binning process.
But something prevented them from opening up that TDP tier earlier, even with existing clockspeed bins, even though it would have meant higher yields for them.
Because they realized the "world's first 8-core" ploy failed, and now they're trying the old GHz race?
Actually, if that were their conclusion, then they would not have pushed the 9590 and the 9370 as 8-cores.
They could have cut the core count to six or four cores, kept the 4.7/5GHz bins, and kept the TDP at a more conventional status-quo upper limit.
Allowing for higher power consumption while binning has always been an option; there is a reason the bins turn out to exist at the values they do.
95W instead of 100W or 90W for the 2600K, for example.
Creating a 220W bin has always been technically possible, but there is a reason no one pushed the envelope that far until now. The same goes for GPUs.
Go back as far as you like prior to the first 330W GPU, and any earlier GPU product you care to point at could have been a 330W GPU as well, if only the supply chain could have supported it in an economically viable fashion at that earlier point in time.
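To make the binning point concrete, here's a toy sketch of what I mean by "opening up the TDP window." All the numbers in it are invented for illustration (they are not real GlobalFoundries data): each die has a measured power draw at a fixed clockspeed bin, and widening the acceptable TDP window directly grows the pool of sellable parts.

```python
# Toy model of TDP binning. Each die has a measured power draw (watts)
# at the target clockspeed; a die "bins" into a tier if its draw fits
# under that tier's TDP limit. All figures are hypothetical.

def binnable(dies_watts, tdp_limit):
    """Return the dies whose measured draw fits under a TDP limit."""
    return [w for w in dies_watts if w <= tdp_limit]

# Hypothetical per-die power draw at a fixed 4.7GHz bin:
dies = [132, 151, 168, 190, 205, 214, 238]

within_140w = binnable(dies, 140)  # a conventional upper limit
within_220w = binnable(dies, 220)  # the opened-up 9590-style tier

print(len(within_140w), "of", len(dies), "dies fit a 140W TDP")
print(len(within_220w), "of", len(dies), "dies fit a 220W TDP")
```

That's the yield argument in a nutshell: even at the existing clockspeed bins, the wider TDP tier turns dies that previously failed validation into sellable product.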
It heralds progress - but we're not arguing that, IDC.
If the progress is only power consumption within a given market, well, that's a fail?
We're discussing the viability of the 9590 as a product - and it fails on every metric except "well, it consumes a lot of power!".
I don't understand your logic, and I'll tell you why: it isn't self-consistent.
Your stated position on why a 220W TDP 9590 SKU is "fail" could equally be applied to literally every other CPU that AMD and Intel offer, simply by ratcheting down the threshold of what you personally consider "unacceptably high power consumption".
That 150W hex-core $1020 extreme 3970X CPU from Intel? Total fail. What does it offer that a $300 77W quad-core 3770K doesn't offer?
That $300 77W quad-core 3770K from Intel? Total fail. What does it offer that a $200 77W quad-core 3570K doesn't offer?
That $200 77W quad-core 3570K from Intel? Total fail. What does it offer that a $130 55W dual-core i3-3220 from Intel doesn't offer?
You see how this works: pick any arbitrary TDP value and you'll find a SKU below it - lower in price, lower in TDP, and lower in performance - and yet all those products exist because there is a spectrum to the demand of the market.
So how is going the other direction suddenly a total fail? How is a 130W or 150W Intel chip a fail compared to the lower-performing, lower-cost, lower-TDP SKUs offered by Intel?
And by extension, how is the 9590 a total fail compared to the lower-performing, lower-cost, lower-TDP SKUs offered by AMD?
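The SKU ladder I just walked through can be written down directly (using only the prices and TDPs quoted above). The point is that price, TDP, and performance move together, so any "too much power" cutoff is arbitrary:

```python
# Intel SKU ladder from the examples above: (name, price in USD, TDP in W).
# The argument: pick any TDP cutoff and there is always a cheaper,
# lower-TDP, lower-performing SKU beneath it - so "fail" by that logic
# condemns every chip except the very bottom one.

skus = [
    ("i3-3220", 130, 55),
    ("3570K",   200, 77),
    ("3770K",   300, 77),
    ("3970X",  1020, 150),
]

# Sorting by price never decreases TDP: the tiers form a spectrum,
# not a single "correct" power level.
by_price = sorted(skus, key=lambda s: s[1])
tdps = [t for _, _, t in by_price]
assert tdps == sorted(tdps)

for name, price, tdp in by_price:
    print(f"{name}: ${price}, {tdp}W")
```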
I think I'm like "swooooosh" to what you're saying.
Because AMD did NOT (and I'll gladly be quoted guesstimating this) release a SKU just to PUSH the POWER envelope.
Of course they didn't. Nor did Intel when they released their 150W TDP 3970X, which was literally nothing more than a 0.2GHz speed bump over the existing 130W TDP 3960X.
I remember when Intel was about to release that chip; a little bird told me about it in advance, along with the ground-breaking (for their industry) efforts they were going through to make sure turn-key third-party cooling solutions on the market would be able to reliably and economically handle the heat-dissipation requirements.
Internally it took a LOT of engineering time and effort to validate the viability of a product that merely pushed the existing envelope from 130W to 150W.
AMD made no less an effort to ensure their 220W chips would be viable, and that the infrastructure was in place to let such products work correctly, without serious failure (a la the 1.13GHz P3 situation) should the cooling solutions prove inadequate (considering the cost-cutting measures AMD knows cooling suppliers are under constant pressure to take).
You can't seriously be saying that?
And you can't seriously be saying "it's kewl for pushing the power envelope, but it doesn't really offer more performance than other SKUs with a LOWER TDP on the market"?
I do think it is cool. I think the entire effort and outcome is cool.
While all you see at this time is a single product with two SKUs, I see the initial footsteps, the required first steps, toward what will likely become a whole new high-performance, high-power tier of products. It is no different from what happened in the GPU space once the AIBs saw that the world did not implode into a black hole just because they violated the PCI-E spec and created a whole new tier of >300W TDP GPU products.
AMD has laid the groundwork; the first steps have been taken. I am not excited by the 9590 itself, but I am excited to see what round 2 and round 3 bring.
I am excited for what the future may bring thanks to this new direction.