Sony has already stated they are going to push 4K with the PS4. 1080p/3D is likely to be their baseline target; 'normal' 1080p will be the low end.
Just like last gen was HD? Nearly all top-end games of last gen rendered significantly below 720p. What marketing says before release and what game devs end up shipping are very different things. Pixel quality is generally more important than pixel quantity.
No one had 1080p 3D TVs at the start of this console generation; the consoles still supported it, and now it gets used.
Well, technically the PS2 supported 1080p. Just like back then, no game dev is going to actually use the high resolutions.
7850 has 153.6GB/sec bandwidth, without a CPU to share it with.
A portion of which has to be spent copying buffers back and forth to the CPU. Sharing a memory bus with a CPU is generally either slightly advantageous or a wash.
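Back-of-the-envelope, just for scale (my numbers, assuming one 32bpp 1080p buffer copied each way per frame):

    1920 * 1080 * 4 bytes  ~= 8.3 MB per buffer
    8.3 MB * 60 fps * 2    ~= 1 GB/s per ping-ponged buffer

Multiply by however many buffers you shuttle per frame, then add the sync stalls and the PCIe latency -- that's the part that actually hurts, and the part unified memory avoids.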
Xenon- 77 GFLOPS (233 GFLOPS for Cell)
XBox P3- 2.9 GFLOPS (6.2 GFLOPS for the EE in PS2)
Jaguar 4-core @ 4GHz ~= 128 GFLOPS
Even if Jaguar had ten times the computational power it actually will, it still wouldn't be a big upgrade for the 720; as it stands, it's a downgrade in computational throughput compared to Cell.
GFLOPS is not, and never has been, a measure of the speed of a CPU. Also, Jaguar doesn't clock to 4GHz -- realistic projections are in the ~2GHz range, +- a little.
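For what it's worth, those peak figures are just cores * clock * FLOPs per cycle. Assuming Jaguar does 8 single-precision FLOPs per core per cycle (a 4-wide add plus a 4-wide multiply):

    4 cores * 4.0 GHz * 8 = 128 GFLOPS   <- where the quoted number comes from
    4 cores * 2.0 GHz * 8 =  64 GFLOPS   <- at a realistic ~2GHz clock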
Talking about peak throughput on CPUs that even in the best optimized real cases get tiny fractions of that is pretty misleading. Jaguar is a freakishly efficient chip per clock to people who are used to programming on last gen consoles. On last gen, things like a single small if clause took a dozen cycles on average -- Jaguar has a 1-cycle CMOV.
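To make that concrete, a made-up snippet of the kind of code in question:

    // Branchy clamp: on the in-order last-gen PPC cores, a mispredicted
    // branch costs ~20+ cycles, and data-dependent conditions mispredict a lot.
    int clamp_branchy(int x, int lo, int hi) {
        if (x < lo) return lo;
        if (x > hi) return hi;
        return x;
    }

    // Branchless clamp: an optimizing compiler typically turns these
    // ternaries into CMOVs on Jaguar, so it runs in a handful of cycles
    // no matter what the data looks like.
    int clamp_branchless(int x, int lo, int hi) {
        x = (x < lo) ? lo : x;   // cmov
        x = (x > hi) ? hi : x;   // cmov
        return x;
    }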
When comparing Jaguar throughput to last gen, it's fair to divide the last-gen numbers by 5 or so for the difference in efficiency (e.g. Xenon's 77 GFLOPS peak behaves more like ~15 GFLOPS of real throughput).
(although clearly it has large advantages in extracting real world performance).
It's not simply about how easy it is to extract performance. If you had a lot of branchy, data-dependent code (and in games, you would), the best you could do on last gen was pitifully slow -- no matter how good you were or how much time you spent optimizing.
NV2A- 932 MPixels/s / 1.9 GTexels/s, 29.1M triangles/sec, 80 GFLOPS
Xenos- 4 GPixels/s / 8 GTexels/s, 500M triangles/sec, 240 GFLOPS
7770- 4.3 GPixels/s / 37 GTexels/s, 990M triangles/sec, 1.28 TFLOPS
7850- 27.52 GPixels/s / 55 GTexels/s, 1.6B triangles/sec, 1.75 TFLOPS
Almost everyone I have heard from says the goal is to put a 7770 in the APU for the 720.
Again, raw numbers don't tell the full story. Those Xenos FLOPS are (like the NV2A FLOPS) Vec4+1 FLOPS. In other words, you only get actual full throughput on loads where you have 4 identical ops per pixel per cycle. For geometry, this is often close enough. For anything else, we are talking about ~30% typical per-pixel utilization. In GCN, the FLOPS are scalar. Typical per-pixel utilization? 100%.
(There are also drops in utilization because both Xenos and GCN have to handle pixels 16 at a time, but that should be equal for both of them.)
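A made-up example of where the utilization gap comes from:

    #include <math.h>

    // One pixel's worth of shader math with no natural vec4 shape:
    float shade(float d) {
        return 1.0f / sqrtf(d * d + 1.0f);   // a chain of dependent scalar ops
    }

    // Vec4+1 (Xenos): each of those ops still occupies a full 4+1-wide issue
    // slot, but only 1 of the 5 lanes does useful work -> ~20-30% of peak.
    // Scalar GCN: the SIMD runs 16 pixels in lockstep, one scalar op per lane
    // per cycle -> every lane does useful work, ~100% of peak.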
What's worse, that doesn't take into account the rather large *reduction* in FB bandwidth from dropping the eDRAM.
It's not like anything could actually use the full FB bandwidth. It was mostly a marketing number; the real usable bandwidth was roughly double the GDDR3 bus.
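From memory, the 360 numbers worked out to something like this (worth double-checking):

    ROPs <-> eDRAM, internal:        256 GB/s    <- the marketing figure
    GPU <-> eDRAM daughter die link:  32 GB/s    <- what shader output actually crosses
    GDDR3 main bus:                 22.4 GB/s

    usable ~= 32 + 22.4 ~= 54 GB/s, i.e. roughly double the GDDR3 bus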
Do take into account that the GCN series is a lot more bandwidth-efficient than Xenos. It has a significant amount of cache at every level, and uses it efficiently (it's the first AMD GPU that stores texels compressed in its caches).
If you want to use the 7850 numbers, then we are at comparable overall upgrades (roughly equal on wins and losses), but pushing Fermi die sizes
I don't think you understand how small Jaguar is. The whole point is that you can bolt 4 Jaguar cores onto a 7850 and *not* be anywhere near Fermi die sizes.
and requiring insanely expensive bandwidth.
The big advantage of DDR4 is that it will be really cheap. To date, consoles have always used expensive boutique RAM. DDR4 is exciting because it will be significantly cheaper per chip than what consoles are used to. This gen, the only real limit on the width of the RAM bus is that it places a lower bound on the possible size of the chip, not cost.
Cell is 115mm² at 45nm; not sure where you got your information from, but if an eight-core processor at 115mm² on a 45nm process is what you consider bad, I would *love* to see what you consider good.
Cell is not even close to what I'd call an 8-core processor. If you try to treat it as one, the single DMA system will grind it to a halt. If you want to compare Cell with its peers, go have a look at the Tilera thingies -- they are roughly comparable, and you can now get 8 of their cores in something like a quarter of that space.
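For anyone who hasn't written SPE code, a minimal sketch of what "just read some memory" turns into on an SPE (using the spu_mfcio.h intrinsics from IBM's SDK; the buffer name and sizes are made up):

    #include <stdint.h>
    #include <spu_mfcio.h>

    // DMA target in the SPE's 256KB local store; must be suitably aligned.
    volatile char buf[16384] __attribute__((aligned(128)));

    void fetch(uint64_t ea, uint32_t size) {
        const uint32_t tag = 0;
        mfc_get(buf, ea, size, tag, 0, 0);   // queue DMA: main memory -> local store
        mfc_write_tag_mask(1 << tag);        // select which DMA tag(s) to wait on
        mfc_read_tag_status_all();           // stall until the transfer completes
    }

A conventional core just dereferences a pointer and lets the caches handle it; on Cell, every SPE touch of main memory goes through this machinery, and all the SPEs contend for the same bus.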
Because PC parts hit a power wall, the overarching theme of the past few generations has been utilization of resources as opposed to having as much of them as possible. The FLOPS per CU have dropped in the last two new Radeon generations -- while the real-world performance per CU has gone up.
The last-gen consoles were in basically all ways the opposite of this -- they packed a lot of raw punch, but no matter how much you tried, there was no way you could ever use most of it. This makes the new numbers look bleak on paper, but to make accurate comparisons you have to take into account that a next-gen FLOP, whether it's in the GPU or the CPU, will be worth several times what a last-gen FLOP was.