You guys keep repeating that. Could someone give some actual comparisons? It's hard to get anything good from reviews because no one is using the same resolutions or settings that Anand did, but I'm sure something can be approximated. I'm curious about working out theoretical capabilities here, more than making statements about who has good graphics and who doesn't.
Or if you want to compare 3DMark 11 against games, look at it this way: in AT's review the Iris Pro 5200 part does about the same as the 650M in 3DMark 11, and the game results below show how the two compare at the 55W point. Each number is GT650M performance divided by Iris Pro 5200 performance, one per quality setting tested (lowest quality first):
Metro Last Light: 1.05x, 1.21x
BioShock Infinite: 1.32x, 1.38x
Sleeping Dogs: 1.07x, 1.36x
Tomb Raider: 1.15x, 1.32x
Battlefield 3: 1.11x, 1.54x, 1.47x
Crysis 3: 1.30x, 1.34x, 1.33x
Crysis Warhead: 0.89x, 0.99x, 1.09x
GRID 2: 1.18x, 1.53x, 0.92x
Things are really all over the place depending on quality; Intel looks held back by AA and AF performance. That could be down to worse effective bandwidth (even with Crystalwell its peak bandwidth is behind the GT650M tested, and that's assuming it can saturate the L4 and main memory simultaneously, which is not how something that's supposed to operate as a cache normally behaves). That isn't necessarily an intrinsic problem of the uarch, and it would face different constraints in a console design.
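To put rough numbers on that bandwidth point, here's a quick back-of-envelope sketch. The figures are my assumptions rather than anything pulled from the review (roughly 50 GB/s per direction for the Crystalwell eDRAM, dual-channel DDR3-1600 system memory, and a 128-bit GDDR5 bus at 5 GT/s effective for the GT650M configuration), so treat it as illustrative only:

```python
# Back-of-envelope peak bandwidth comparison.
# All figures below are assumptions, not numbers from the review being discussed.

def bus_bw_gbs(width_bits, effective_rate_gts):
    """Peak bandwidth in GB/s: bytes per transfer * effective transfers per second."""
    return (width_bits / 8) * effective_rate_gts

# Assumed GT650M config: 128-bit GDDR5 at 5.0 GT/s effective.
gt650m_bw = bus_bw_gbs(128, 5.0)                   # ~80 GB/s

# Assumed Iris Pro 5200 (Crystalwell) config.
edram_bw_per_direction = 50.0                      # ~50 GB/s each way (assumed)
ddr3_bw = bus_bw_gbs(128, 1.6)                     # dual-channel DDR3-1600, ~25.6 GB/s

# Optimistic case: eDRAM and main memory saturated at the same time,
# which is not how a cache normally behaves.
iris_best_case = edram_bw_per_direction + ddr3_bw  # ~75.6 GB/s

print(f"GT650M peak:             {gt650m_bw:.1f} GB/s")
print(f"Iris Pro 5200 best case: {iris_best_case:.1f} GB/s")
```

Even granting the unrealistically optimistic "saturate both at once" case, the assumed numbers leave the Iris part slightly behind on peak bandwidth, which is the point above.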
Averaging the lower-quality results across these games gives 1.13x, while averaging the higher-quality results (or whichever setting shows the 650M at its best) gives about 1.35x.
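Those averages fall straight out of the ratios listed above; here's the quick check (ratios re-typed from the list, with "best case for the 650M" read as the largest ratio per game):

```python
# GT650M : Iris Pro 5200 performance ratios from the list above.
# First entry per game = lower-quality setting.
ratios = {
    "Metro Last Light":  [1.05, 1.21],
    "BioShock Infinite": [1.32, 1.38],
    "Sleeping Dogs":     [1.07, 1.36],
    "Tomb Raider":       [1.15, 1.32],
    "Battlefield 3":     [1.11, 1.54, 1.47],
    "Crysis 3":          [1.30, 1.34, 1.33],
    "Crysis Warhead":    [0.89, 0.99, 1.09],
    "GRID 2":            [1.18, 1.53, 0.92],
}

low_quality   = [r[0] for r in ratios.values()]
best_for_650m = [max(r) for r in ratios.values()]

print(f"Lower-quality average:  {sum(low_quality) / len(low_quality):.3f}x")      # ~1.13x
print(f"650M best-case average: {sum(best_for_650m) / len(best_for_650m):.3f}x")  # ~1.35x
```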
If the TDP is over 100W you'd probably have some extra GPU clock headroom even after doubling the GPU resources on GT3e, particularly if we're talking about a strictly dual-core CPU here. So they could probably reach higher than 2x GT3e performance with a doubled GPU.
So yeah, I don't think the estimate I gave of roughly 75% of a 7790 for a >2x GT3e part is really that crazy. Whether the bigger performance deltas at higher quality settings would push it much below that, or turn out to be an intractable problem, I really don't know.
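To make the scaling arithmetic explicit, here's the rough version expressed in GT650M multiples so it stays tied to the ratios above. The 2.2x factor for doubled GPU resources plus some clock headroom is my own illustrative assumption, and I've left the 7790 out since that comparison isn't in the numbers above:

```python
# Rough scaling arithmetic for a hypothetical console-style part with
# double the GT3e GPU resources plus some clock headroom.

gt3e_vs_650m_low  = 1 / 1.13   # GT3e ~= 650M / 1.13 at lower quality settings
gt3e_vs_650m_high = 1 / 1.35   # GT3e ~= 650M / 1.35 at higher quality settings

scale = 2.2  # assumed gain from 2x GPU resources + clock headroom under a >100W TDP

print(f"Hypothetical part vs GT650M (low quality):  {scale * gt3e_vs_650m_low:.2f}x")   # ~1.95x
print(f"Hypothetical part vs GT650M (high quality): {scale * gt3e_vs_650m_high:.2f}x")  # ~1.63x
```

In other words, the higher-quality deltas already knock a fair chunk off the low-quality figure, which is why I'm not sure how far below the estimate they'd push things.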