We haven't exactly seen any Xbox 360 games run better on AMD GPUs than on Nvidia's, or the other way around with the PS3.
If they are APUs (and not necessarily all AMD), it may lead to more game developers making use of APIs like OpenCL for non-graphics tasks. I presume that there will be a unified address space and a single shared memory buffer, similar to the 360's memory layout; experience has shown that game developers find it much easier to extract performance from that sort of layout than from the PS3 style of lots of separate, explicitly managed memory buffers. GCN working in the x86 address space, with good computational power and no copying overhead between address spaces, would make GPGPU calculations much, much easier. And if that becomes common in next-gen console games, we can expect similar ramifications for the desktop. It should be good for both AMD's and Intel's APUs, but I wonder how well it will map to discrete graphics cards. If algorithms are written with the expectation of no copying overhead, having to deal with the PCIe bottleneck might slow things down a lot.
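To put a rough number on that PCIe bottleneck, here's a back-of-envelope sketch. The bandwidth and buffer-size figures are assumptions for illustration, not measurements from any real console or card:

```python
# Back-of-envelope: cost of shipping a working set over PCIe every frame,
# versus a shared-address-space APU where the GPU reads the same physical
# pages the CPU wrote and no copy happens at all.

PCIE_BANDWIDTH_GBPS = 8.0        # assumed effective PCIe 2.0 x16 bandwidth (GB/s)
BUFFER_MB = 256                  # assumed per-frame data set handed to the GPU
FRAME_BUDGET_MS = 1000.0 / 60.0  # 60 fps frame budget, ~16.7 ms

# Time to push the buffer across the bus once per frame.
copy_time_ms = (BUFFER_MB / 1024.0) / PCIE_BANDWIDTH_GBPS * 1000.0

print(f"PCIe copy: {copy_time_ms:.2f} ms of a {FRAME_BUDGET_MS:.1f} ms frame")
# -> PCIe copy: 31.25 ms of a 16.7 ms frame
```

Under these (made-up but plausible) numbers, the copy alone blows through nearly two whole frames of budget, which is exactly why an algorithm written assuming zero-copy shared memory could fall over badly on a discrete card.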
Plus I doubt that statement has anything to do with the PS4 or Xbox 720. Those consoles are also still too far out in the future.
PS4 and 720 should be out next year. These chips should be ready far, far in advance of launch.
As an aside, I'm reading Race For A New Gaming Machine at the moment; it's a very interesting book. (It's written by the chief architect of the PowerPC core at the heart of the Cell and the Xenon.) His writing style is a bit tortured in places, but it's a fascinating story for someone like us: the way that Sony, Toshiba and IBM had this grand vision for the Cell, a chip which would revolutionise microprocessors and power the next generation of consumer electronics, PCs and game consoles. Development started in 2001, five years before the PS3 launched; that's how long a lead time there is on these things.