Originally posted by: BenSkywalker
I don't see how die-shrunk current hardware would be a downgrade from 4 year old hardware?
Talking strictly about gaming, Cell still smacks around the i7 when fully utilized. Some games shipping now (UC2 as an example) already wouldn't be able to run on the i7 (although if completely reworked you could get the same results, you just would have to place a lot more on the GPU).
You won't get perfect visuals with DirectX 11, 12, or 13, and the next console will only be a step closer to realistic visuals.
In terms of visual improvements, DX11 doesn't give us anything over DX9 except the ability to do some things in a simpler fashion (IQ-wise, nothing). Ignoring that altogether though, the Xbox is the only platform that is going to be running DX; the rest of the systems will have a custom API developed explicitly for their particular GPU.
Tessellation is going to be the next big step in graphics quality. No longer do organic or highly curved models have to look polygonal. It won't be as big a jump as DX8 to DX9 or DX7 to DX8, but it does make things look noticeably nicer.
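If it helps to picture what tessellation actually buys you: the GPU takes a coarse patch and keeps splitting it against the true curved surface, so silhouettes stop looking faceted. Here's a toy CPU-side sketch of just the idea in plain C (subdividing a patch onto a sphere, not actual DX11 hull/domain shader code):

/* Toy illustration of tessellation: start with one coarse triangle on a
 * unit sphere, split each edge at its midpoint, and push the new vertices
 * back out onto the sphere. Each pass turns one flat facet into four that
 * hug the curve more closely. */
#include <stdio.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 normalize(Vec3 v)
{
    double len = sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    Vec3 out = { v.x / len, v.y / len, v.z / len };
    return out;
}

static Vec3 midpoint_on_sphere(Vec3 a, Vec3 b)
{
    Vec3 m = { (a.x + b.x) / 2, (a.y + b.y) / 2, (a.z + b.z) / 2 };
    return normalize(m);  /* project the midpoint back onto the sphere */
}

/* Recursively subdivide; at depth 0, count the finished facet. */
static long subdivide(Vec3 a, Vec3 b, Vec3 c, int depth)
{
    if (depth == 0)
        return 1;
    Vec3 ab = midpoint_on_sphere(a, b);
    Vec3 bc = midpoint_on_sphere(b, c);
    Vec3 ca = midpoint_on_sphere(c, a);
    return subdivide(a, ab, ca, depth - 1)
         + subdivide(ab, b, bc, depth - 1)
         + subdivide(ca, bc, c, depth - 1)
         + subdivide(ab, bc, ca, depth - 1);
}

int main(void)
{
    Vec3 a = { 1, 0, 0 }, b = { 0, 1, 0 }, c = { 0, 0, 1 };
    for (int level = 0; level <= 4; level++)
        printf("tess level %d: %ld triangles from 1 coarse patch\n",
               level, subdivide(a, b, c, level));
    return 0;
}

Each level quadruples the triangle count of a single coarse patch, and the hardware tessellator does that amplification per frame instead of the artist having to bake millions of polygons into the model just to avoid angular silhouettes.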
As for Cell, I think you give it way too much credit. It isn't robust enough for general-purpose use: no out-of-order execution, SPEs optimized for single precision only (and the SPEs do just about all the work anyway, the PPE existing mainly to feed work to them), no branch prediction, etc.
It gives up way too much to do the one thing it is optimized for, which is by any measure a niche market: highly parallel, single-precision floating-point calculations.
When I fiddled around with Cell back when it first came out, it was not a good system to work with. You are forced into manual control of the SPEs no matter what; high-level languages (Perl, Python, Ruby) would simply ignore the SPEs and run only on the single PPE. When I used it, the only language that gave you any control of the SPEs was C, and even that was all manual.
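For anyone who never touched it, here's roughly what the PPE-side boilerplate looked like with libspe2, going from memory so the details may be off; the embedded spu_worker handle and the thread-per-SPE layout are just illustrative:

/* PPE-side host code: every SPE has to be created, loaded and fed by hand.
 * Rough sketch using libspe2; spu_worker is a placeholder for the embedded
 * SPE binary. */
#include <stdio.h>
#include <pthread.h>
#include <libspe2.h>

#define NUM_SPES 6  /* the PS3 exposes six SPEs to applications */

/* The SPE-side program is a separate ELF built with its own toolchain
 * (spu-gcc) and embedded into the PPE executable. */
extern spe_program_handle_t spu_worker;  /* hypothetical embedded handle */

static void *run_spe(void *arg)
{
    spe_context_ptr_t ctx = (spe_context_ptr_t)arg;
    unsigned int entry = SPE_DEFAULT_ENTRY;
    spe_stop_info_t stop_info;

    /* Blocks until the SPE program stops, so each SPE gets its own
     * PPE thread just to babysit it. */
    if (spe_context_run(ctx, &entry, 0, NULL, NULL, &stop_info) < 0)
        perror("spe_context_run");
    return NULL;
}

int main(void)
{
    spe_context_ptr_t ctx[NUM_SPES];
    pthread_t thread[NUM_SPES];

    for (int i = 0; i < NUM_SPES; i++) {
        ctx[i] = spe_context_create(0, NULL);   /* create an SPE context */
        spe_program_load(ctx[i], &spu_worker);  /* load the SPE binary   */
        pthread_create(&thread[i], NULL, run_spe, ctx[i]);
    }
    for (int i = 0; i < NUM_SPES; i++) {
        pthread_join(thread[i], NULL);
        spe_context_destroy(ctx[i]);
    }
    /* Not shown: the DMA transfers in and out of each SPE's 256 KB local
     * store, which you also have to orchestrate yourself. */
    return 0;
}

Compare that to just spinning up threads on a normal SMP chip and you can see why almost nobody outside the PS3 bothered.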
In the end, the closest analogy I could come up with for Cell was using an F1 car as a daily driver. Great in a dick-wagging contest, but truly, utterly shit from a general-use, practical perspective. Yes, fully optimized it's great; too bad you can't fully optimize it when trying to do general compute.
It's too big an architecture change with too few benefits to win any large share of the general-compute market, and that is why it is ignored almost everywhere except the PS3 and specialty applications.
I personally believe that if you're going to go down that road, you might as well go all the way and just do GPU compute. You get bigger performance gains for those specialty applications, and you're learning a new way to program anyway.