Then why not simply ask the developers to go and get whatever recent x86 CPU and GFX card they like, if it's not going to match the actual PS4 hardware in the end anyway. Why bother sending them a special "PS4 development PC"?
The software shipped in the box is an early version of what the final system will have, and is likely only made to work on that single system. Also, even if the final performance is not anywhere near the current devkit, several details of it may be very similar to the final platform. Things like how the memory bus is shared between GPU and CPU (Can you just pass pointers? Can they even be virtual pointers or do you have to translate them?) actually affect early engine development more than the raw performance numbers.
If you go by the same rumourmill crap, "insider sources", and "already shipped" claims, then it seems the Xbox 720 will be Intel and nVidia.
I know from a source I consider very reliable that the earliest xbox next devkit was indeed Intel + nvidia. The later ones aren't. As with last gen, the earliest dev kits are really just the platforms the OS/API devs happened to work on to build their stuff. As the release date grows nearer, the devkits become more similar to the final hardware.
I believe they ship with extra ram because they have to run the development software on it.
This would be correct. Dev kits almost always have more RAM than the final machines. The final dev kits (i.e., not this one yet) might be an exception to that rule this time around, as all the sensible memory tech available is point-to-point, so once they move to the final silicon there is no simple, cheap way to ship extra memory in the system.
BTW can an x86 CPU be still be considered x86 if you rip out the x86 decoder?
If a tree falls in the forest...
Well, it still needs some kind of decoder. What would you put in its place?
It would obviously not be directly compatible with any x86 software, but some x86 features, such as strict memory ordering, would still be visible.
8GB would be nice, but I think even 4GB would do the trick, provided it's hellishly fast RAM. The A10 is incredibly sensitive to memory speed in desktop benches, with DDR3-2133 making a monster of an improvement over 1066 and 1333.
If they managed to get something north of 2133 on a double-wide bus it'd be pretty solid.
Right now, the speculation is divided between DDR4 and GDDR5. DDR3 will be on its way out when these are released, and would be expensive to supply for the latter half of the lifetime of the consoles.
Well, Sony did use XDR in the PS3; if they used XDR2 in the PS4 it would give insane performance. If they went with XDR2, I could live with 4GB of RAM.
XDR2 does not even exist outside of paper specs. Also, as I understand it, this gen Sony got hurt more than a little for using boutique RAM that had almost no other users. In chip manufacturing, getting volumes up is the number one rule for keeping costs down.
They were both very active participants in the DDR4 spec, which I expect to mean that that's what they'll use. GDDR5 would give more bandwidth, but would be considerably more expensive. (Especially in the long term, as PCs can be expected to use DDR4 for a very long time, whereas GDDR5 will probably be replaced with something newer in GPUs within a few years.)
The Xenos GPU in the Xbox 360 is based on R600, the first of the unified VLIW5 series. Nothing like the X800/X1800.
No, Xenos is vec4 + scalar, which is not the same as VLIW5. It would be more correct to say that R600 was a further development based on Xenos.
Considering AMD counts GCN cores a bit differently than VLIW cores, a 7770-style GPU at 1GHz or slightly less would outperform the Xbox 360 GPU by at least 6x.
Actually, by much more than that. The vec4 unit is only useful if there are four independent, identical operations you can do on the same fragment/vertex. There are for vertex processing and other geometry work, but generally not for pixel processing. vec4+s is only as good as 2 or 3 scalar operations for most pixel processing, and pixel processing is (way) more than half of all shader processing.