So, put me down as skeptical: I doubt that either MS or Sony will use *purely* off-the-shelf hardware.
AMD is likely going to create custom silicon that incorporates custom DRM for both companies (you know, to prevent piracy directly to PCs - how cool would a "hackinPS4" be?), and AMD knows that tens of millions of units will ship.
Furthermore, who is paying to shrink VLIW - unless it is in an APU? We've already mentioned a) that companies want cheap hardware and b) that getting 28nm spun up is not a trivial task, so if MS/Sony want small, power-efficient chips it seems they'll either be using a modified Trinity (more cores? more memory channels? XDR support? more GPU cores?), a PD/GCN discrete combination, or a custom APU that incorporates GCN.
Furthermore, it makes way, way, way too much sense to implement an APU in a console and share the memory - especially if some compute is being performed on the GPU. I would think 1GB is a realistic amount of memory for them to use, especially if they go with something custom or exotic; 2GB of combined RAM is my upper-bound expectation.
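To put rough numbers on why a shared pool matters for GPU compute (the working-set size and bus speed below are my own assumptions, nothing leaked):

```c
/* Back-of-envelope sketch: the cost of NOT sharing memory.
 * Assumed numbers (mine, purely illustrative): a 256 MB per-frame
 * working set and a PCIe 2.0 x16 link at roughly 8 GB/s one way. */
#include <stdio.h>

int main(void) {
    const double working_set_mb = 256.0;          /* data the GPU compute job needs */
    const double bus_gb_per_s   = 8.0;            /* rough PCIe 2.0 x16 peak, one direction */
    const double frame_ms_30fps = 1000.0 / 30.0;  /* ~33.3 ms frame budget at 30 FPS */

    double copy_ms = (working_set_mb / 1024.0) / bus_gb_per_s * 1000.0;
    printf("Copying %.0f MB across the bus: ~%.1f ms\n", working_set_mb, copy_ms);
    printf("Whole frame budget at 30 FPS:   ~%.1f ms\n", frame_ms_30fps);
    /* With a unified pool (APU), that copy just doesn't happen:
     * the GPU reads the same DRAM the CPU wrote. */
    return 0;
}
```

The copy alone eats nearly the whole frame, which is exactly why you'd want the CPU and GPU working out of one pool.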
It would seem that AMD really wants GCN adoption, as it would allow them to build out their compute infrastructure in a cohesive way across many different platforms.
As for tessellation, it would likely become a pretty standard game feature, even if it isn't cranked to insane levels and applied to every concrete surface in the game. That is still good for visual fidelity in PC games.
I hope that we will see some amount of SLC flash (4GB?) for I/O caching. With full control of the hardware, this could do wonders for loading times, etc., especially with modern CPUs that can handle decryption much more quickly - one of the overlooked "features" of moving to less geriatric CPUs. What did that BRAID dev say? That levels which loaded nearly instantly on his dev kit took something like 30 seconds once MS layered on all their encryption? Something like that...
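Just to sketch what I mean by flash-as-I/O-cache - this is purely illustrative C, and the paths, the load_asset() helper, and the toy decrypt_block() are all made up for the example, not any real console SDK:

```c
/* Hypothetical read-through cache: slow, encrypted assets live on the
 * disc path, and a small SLC flash partition keeps decrypted copies so
 * the expensive read + decrypt only happens the first time. */
#include <stdio.h>
#include <stdlib.h>

/* Placeholder "decryption" - a real console would use its DRM/crypto hardware. */
static void decrypt_block(unsigned char *buf, size_t len) {
    for (size_t i = 0; i < len; i++) buf[i] ^= 0xA5;
}

/* Load an asset: hit the flash cache if it's there, otherwise take the
 * slow disc + decrypt path and populate the cache for next time. */
static unsigned char *load_asset(const char *name, size_t *out_len) {
    char cache_path[512], disc_path[512];
    snprintf(cache_path, sizeof cache_path, "/flashcache/%s", name);
    snprintf(disc_path,  sizeof disc_path,  "/disc/%s", name);

    FILE *f = fopen(cache_path, "rb");       /* fast path: already cached on flash */
    int from_cache = (f != NULL);
    if (!f) f = fopen(disc_path, "rb");      /* slow path: optical disc */
    if (!f) return NULL;

    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    fseek(f, 0, SEEK_SET);
    unsigned char *buf = malloc((size_t)len);
    if (!buf) { fclose(f); return NULL; }
    size_t got = fread(buf, 1, (size_t)len, f);
    fclose(f);

    if (!from_cache) {
        decrypt_block(buf, got);             /* pay the decrypt cost once...   */
        FILE *c = fopen(cache_path, "wb");   /* ...then keep the result on SLC */
        if (c) { fwrite(buf, 1, got, c); fclose(c); }
    }
    *out_len = got;
    return buf;
}

int main(void) {
    size_t n = 0;
    unsigned char *data = load_asset("textures.pak", &n);  /* made-up asset name */
    if (data) printf("loaded %zu bytes\n", n);
    free(data);
    return 0;
}
```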
Anyway, there is a lot of room for the next-gen consoles to be a lot better - and on the cheap. Console gamers, on the whole, are going to be pretty excited about 1080p 30 FPS and 720p 60 FPS (with the odd 1080p 60 FPS title thrown in there) and holy hell, the textures a 512/768/1024MB frame buffer can provide! w00t!
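For perspective, the render targets themselves are a tiny slice of that pool - quick math, assuming 32-bit color plus a 32-bit depth buffer at 1080p and ignoring fancier G-buffer/MSAA setups:

```c
/* Where a 512 MB graphics pool actually goes at 1080p.
 * Assumes double-buffered 32-bit color and a 32-bit depth buffer;
 * deferred renderers and MSAA would grow the target footprint. */
#include <stdio.h>

int main(void) {
    const double mb = 1024.0 * 1024.0;
    double color   = 1920.0 * 1080.0 * 4 / mb;   /* ~7.9 MB per color buffer */
    double depth   = 1920.0 * 1080.0 * 4 / mb;   /* ~7.9 MB depth/stencil    */
    double targets = 2 * color + depth;          /* double-buffered + depth  */

    printf("One 1080p color buffer:  %.1f MB\n", color);
    printf("Render targets total:   ~%.0f MB\n", targets);
    printf("Left for textures/assets in a 512 MB pool: ~%.0f MB\n", 512.0 - targets);
    return 0;
}
```

So almost all of that pool goes to textures and assets - which is the point.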

Honestly, when I think about gaming on my 360/1080p projector/90" screen setup, those improvements even make me a little excited about a new console. Which is a little sad.
Another "cost cutting" feature not really mentioned so far is that by using more generic hardware with more traditional memory coherency and such, building and optimizing game engines and assets is going to require less specialized knowledge and time (an easy way to blow lots of $$$). This is a "win" for devs as well.
@ Pelov - if they are putting together consoles starting a year from now, is 40nm still going to be the go-to process? Hmm... I suppose those are really going to be some cheap parts, but they aren't going to get APUs on that process. I still think that APUs are going to be the real engines of the next consoles, mainly so that one pool of memory can be used for all console purposes (much of which is no longer gaming in the traditional sense...)
Or are they going to have an APU (where the GPU is available for compute/CF) during "gaming" and then a discrete GPU that gets powered off during other periods? That would make some sense to me, I guess... although it would seem that the Trinity APU is going to be about as powerful as a 6670, especially if they did something like add an EDRAM chip or moved to a much wider (256-bit? wider?) memory interface for the APU.
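Back-of-envelope on why the memory interface width matters so much for an APU pushing console graphics - the speeds below are just illustrative DDR3/GDDR5 figures, not rumored specs:

```c
/* Peak bandwidth = (bus width in bytes) x (effective transfer rate).
 * The memory speeds here are generic illustrations, not console specs. */
#include <stdio.h>

static double gb_per_s(int bus_bits, double mt_per_s) {
    return (bus_bits / 8.0) * mt_per_s / 1000.0;   /* decimal GB/s */
}

int main(void) {
    printf("128-bit DDR3-1600 : %5.1f GB/s\n", gb_per_s(128, 1600));  /* typical desktop Trinity setup */
    printf("256-bit DDR3-1600 : %5.1f GB/s\n", gb_per_s(256, 1600));  /* the wider bus floated above   */
    printf("128-bit GDDR5-4000: %5.1f GB/s\n", gb_per_s(128, 4000));  /* ballpark of a 6670-class card */
    return 0;
}
```

Doubling the bus (or bolting on EDRAM) is really about closing that bandwidth gap more than anything else.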