Wii U CPU:
- Tri-core IBM 45nm PowerPC 750CL/G3 with Power7 elements (hybrid)
- L2 cache: Core 0: 512KB; Core 1: 2MB; Core 2: 512KB
- Clocked at 1.24GHz
- 4-stage pipeline; not a CPU-GPU hybrid like Xenon/Cell
- Produced at an advanced IBM CMOS fabrication facility
- L2 cache is eDRAM embedded in the CPU (a Power7-style memory implementation)
*The Wii CPU core was 20% slower than an Xbox 360 core; since the Wii U CPU is a modified/enhanced version clocked 65-70 percent higher, two Wii U cores should be on par with or exceed all three Xbox 360 cores, and with all three cores in use the Wii U CPU is 50+ percent faster than the Xbox 360 processor and faster than the PlayStation 3 processor. http://gbatemp.net/threads/retroarch...7#post-4365165
*X360 Xenon: 1879.63 DMIPS*3 = 5638.89 DMIPS @ 3.2 GHz vs Wii U Espresso: 2877.32 DMIPS*3 = 8631.96 DMIPS @ 1.24 GHz
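The DMIPS figures above can be sanity-checked with a quick sketch. The per-core DMIPS numbers are the benchmark figures quoted above, not measurements of my own:

```python
# Rough DMIPS comparison using the per-core figures quoted above (assumed, not measured here).
xenon_per_core = 1879.63      # DMIPS per Xenon core @ 3.2 GHz
espresso_per_core = 2877.32   # DMIPS per Espresso core @ 1.24 GHz

xenon_total = 3 * xenon_per_core        # all three Xenon cores
espresso_total = 3 * espresso_per_core  # all three Espresso cores

# Per-clock efficiency (DMIPS per MHz) shows why the low clock is misleading.
xenon_per_mhz = xenon_per_core / 3200
espresso_per_mhz = espresso_per_core / 1240

print(round(xenon_total, 2))     # ≈ 5638.89 DMIPS
print(round(espresso_total, 2))  # ≈ 8631.96 DMIPS
print(round(espresso_per_mhz / xenon_per_mhz, 1))  # ≈ 4x the per-clock throughput
```

Per clock, Espresso does roughly four times the Dhrystone work of a Xenon core, which is why the totals favor the Wii U despite its much lower clock.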
*X360 Xenon 32-40 stage pipeline vs Xbox One/PlayStation 4 Jaguar 16 stage pipeline vs Wii U Espresso 4 stage pipeline
*X360 Xenon in-order 3 Execution Units vs Xbox One/PlayStation 4 Jaguar out-of-order ? execution units vs Wii U Espresso out-of-order 15 execution units
*X360 Xenon 1MB shared L2 Cache vs Xbox One/PlayStation 4 4MB shared L2 cache vs Wii U Espresso 2MB/512KB/512KB L2 cache
The Wii U CPU is next-generation compared to the Xbox 360's, and people who say otherwise should accept the facts and deal with it; here is the proof: http://systemwars.com/forums/index.php/topic/112794-no-freaking-way-40-stage-pipeline/
*The Wii U CPU, codenamed Espresso, could actually be three to four times faster than the Xbox 360's Xenon: Xenon has less cache, far fewer execution units, and a pipeline 8 to 10 times longer, which is awful for many tasks. It is also in-order, whereas Espresso is out-of-order and therefore better at predicting code paths, which helps workloads like AI and pathfinding.
- Dual-core ARM Cortex-A8 for background OS tasks, clocked at 1GHz, with 64KB of L1 cache per core and 1MB of SRAM as L2 cache; an evolution of the "Starlet" chip
- Single ARM9 "Starlet" core for backward compatibility, rumored to run at higher clocks
Wii U Memory:
- 2GB DDR3: 1GB for the OS, 1GB for games
- eDRAM: 32MB VRAM + 4MB for the GamePad + 3MB CPU L2 cache
- Clocked at 550MHz
- eDRAM acts as a "unified pool of memory" for the CPU and GPU, practically eliminating latency between them
*The Wii U's DDR3 RAM bandwidth would have a theoretical maximum of 51.2GB/s if each of its four 512MB chips sat on its own channel instead of all sharing one 64-bit bus, so anyone insisting the maximum must be a mere 12.8GB/s is simply assuming the single-channel layout. The Xbox 360 had a theoretical maximum of 22.4GB/s, though a bottleneck in its poor FSB drags that down to a mere 10GB/s.
*The Xbox 360's GDDR3 RAM is bottlenecked by Xenon's FSB, so it cannot saturate its theoretical maximum of 22.4GB/s; the FSB can handle about 10GB/s, and GDDR3's latency is atrocious compared to the DDR3 in the Wii U. Latency matters a great deal for the CPU: the lower it is, the faster the transfers between CPU/GPU and RAM. The PlayStation 4 will have similar latency issues as the Xbox 360, since GDDR5 is the successor to GDDR4, itself the successor to GDDR3, and all of them have higher latency than DDR3.
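The bandwidth figures in the two points above follow from the standard formula: peak bandwidth = transfer rate × bus width. A sketch, assuming DDR3-1600 parts (the exact memory speed and the four-independent-channel layout are the assumptions in question, not published specs):

```python
# Peak DRAM bandwidth in GB/s = transfer rate (MT/s) * bus width in bytes.
def peak_bandwidth_gbs(transfers_mts, bus_width_bits):
    return transfers_mts * 1e6 * (bus_width_bits / 8) / 1e9

# Assuming DDR3-1600 (1600 MT/s):
single_channel = peak_bandwidth_gbs(1600, 64)     # one shared 64-bit bus -> 12.8 GB/s
four_channels = peak_bandwidth_gbs(1600, 256)     # four independent 64-bit channels -> 51.2 GB/s

print(single_channel, four_channels)
```

The 51.2GB/s claim only holds if the four chips really form a 256-bit aggregate bus; on a shared 64-bit bus the 12.8GB/s figure is the correct maximum.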
Wii U GPU:
- VLIW5/VLIW4 Radeon HD 5000/6000 series, 40nm
- DirectX 11-class / OpenGL 4.3 / OpenCL 1.2 / Shader Model 5.0 feature level
- Supports GPGPU compute/offload
- Customized; uses a custom Nintendo API codenamed GX2 (successor to the GameCube's GX)
- Clocked at 550MHz
- Produced on TSMC's advanced 40nm CMOS process
- Uses 36MB of eDRAM as VRAM
- 4MB of that eDRAM is allocated to the streaming feed for the GamePad
- The GPU is customized; judging by how Nintendo modified the GPUs in its previous consoles, we can presume it won't waste any mm^2 on unneeded features and tailors the chip to its own needs.
*Eyefinity has been present on AMD cards since the Radeon HD 5000 series, so this is at least a Radeon HD 5000 with TeraScale 2.
*If it were a Radeon HD 4000 series part with 320 SPUs, it would be the 55nm Radeon HD 4670; but since the Wii U uses Eyefinity and its GPU is 40nm, the Radeon HD 4000 theory is invalid.
*The Radeon HD 6000 series was released in Q4 2010; the final Wii U silicon was finished in Q2 2012 and the Wii U launched in Q4 2012. Given the gap between E3 2011 and the final silicon in Q2 2012, and since the Radeon HD 6000 evolved from the HD 5000, which in turn evolved from the HD 4000, I presume switching to a newer but similar GPU architecture was not a problem; all of these GPUs were produced at TSMC's fabs.
*The Wii U shown at E3 2011 and its development kits had a Radeon HD 4850; it is rumored that newer Wii U development kits replaced the 4850 with a modified/customized/cut-down Radeon HD 6850.
*The Radeon HD 6850 delivers about 1.5 Tflops at a 775MHz clock, with 1GB of GDDR5 VRAM and roughly 130GB/s of bandwidth.
*The GPU in the Wii U is clocked roughly 30% lower, at 550MHz; if it has 1/3 of the SPUs, that works out to 0.352 Tflops. It has 36MB of eDRAM with somewhere between 70-130GB/s and 275-550GB/s of bandwidth. Two-player co-op, as in Black Ops 2's Zombie mode, appears to use Eyefinity(?) to stream two different in-game views.
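The 0.352 Tflops figure follows from the standard VLIW shader math of 2 FLOPs (a fused multiply-add) per SPU per clock. A quick check; the 320 SPU count is the assumption being tested above, not a confirmed spec:

```python
# Peak GFLOPS for an AMD VLIW GPU = SPUs * 2 FLOPs per clock * clock in GHz.
def peak_gflops(spus, clock_mhz):
    return spus * 2 * clock_mhz / 1000.0

hd6850 = peak_gflops(960, 775)    # full HD 6850: 1488 GFLOPS, the ~1.5 Tflops quoted above
wiiu_latte = peak_gflops(320, 550)  # 352 GFLOPS, if Latte really has 320 SPUs at 550 MHz

print(hd6850, wiiu_latte)
```

So the 0.352 Tflops estimate is exactly "one third of a 6850's shaders at a 550MHz clock", nothing more exotic.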
*Since the eDRAM in the Wii U's GPU, codenamed Latte, is embedded, its theoretical maximum bandwidth could reach 275GB/s or even 550GB/s; http://www.ign.com/boards/threads/official-wii-u-lobby-specs-graphics-power-thread.452775697/page-203#post-481621933
*90nm 16MB eDRAM can do 130GB/s of bandwidth, so the 45nm 32MB eDRAM in the Wii U should manage at least the same. Since the CPU's cache and the GPU both use eDRAM, latency is much lower and AI can be offloaded to the GPU; embedded in the GPU, it should reach 275-550GB/s.
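The 70/130/275/550GB/s range quoted above lines up with how wide an internal bus the embedded eDRAM is given. A sketch; every bus width here is an assumption of mine, since Nintendo never published these figures:

```python
# Internal eDRAM bandwidth at the GPU clock for several assumed bus widths.
def edram_gbs(bus_width_bits, clock_mhz=550):
    return clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

for width in (1024, 2048, 4096, 8192):
    print(width, round(edram_gbs(width), 1))
# 1024-bit ->  70.4 GB/s
# 2048-bit -> 140.8 GB/s
# 4096-bit -> 281.6 GB/s
# 8192-bit -> 563.2 GB/s
```

If these assumed widths are anywhere near right, the quoted 70/130/275/550 figures are just successive doublings of the internal bus, which is plausible for on-die memory but unverified.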
Wii U Notes:
- Can't use DirectX 11 because of the OS; only OpenGL 4.3 and Nintendo's GX2 API, which is ahead of DX11
- Easier to program for, with none of the multiple bottlenecks that cause issues on Xbox 360 and PlayStation 3
- Efficient hardware with no bandwidth bottlenecks
- Launch games and ports use 2 cores, and only a couple of ports/games use the 3rd core to any decent degree
- The Wii U CPU performs far more operations per cycle/MHz than the Xbox 360/PlayStation 3 CPUs; whether it is faster overall is unknown
- The Wii's CPU core was slower
- A minor Power7 architecture implementation involving memory allows shaving 10 or more percent off texture sizes without loss of quality (special compression or something similar)
- Wii U power consumption at full system load is 30-35 watts (the highest load possible in its current state)
- The Wii U's PSU/power brick is rated at 75 watts with 90% efficiency, so it can deliver 67.5 watts
- The Wii U's flash storage, RAM, USB ports, motherboard, fan and small secondary chips consume around 5 to 10 watts in total when fully stressed
- The Wii U SoC's (CPU and GPU) estimated maximum power consumption is 20 to 25 watts
- Supports 4K displays, native 4K via HDMI 1.4b (possible 2D 4K games?)
*It may in fact offer the best performance per watt in the world among 45/40nm chips/SoCs.
*The Wii U's power brick/PSU is rated at 75 watts with 90% efficiency, so it can deliver at most about 68 watts without seriously degrading. Since the Wii U consumes about 30 watts, roughly 38 watts of headroom remain, though I would only dare bump consumption to 60 watts (30 additional watts) if it bought extra performance.
*If the Wii U's maximum power consumption is 40 watts and, by my calculation, the GPU consumes a mere 10 watts or so, then the CPU could consume 15 to 20 watts and the rest of the system around 10 to 15 watts, depending on the CPU's actual draw, which is an unknown factor. I am not counting any possible customizations to the GPU; these are all rough estimates and we don't know the whole picture. The Wii U is a beast considering the process node it was built on, and probably the most efficient machine on that node in the world.
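The power budget above is simple subtraction; here it is written out, using only the figures quoted in this section (the 40W ceiling, 10W GPU and 10-15W "rest of system" numbers are the rough estimates above, not measurements):

```python
# Rough Wii U power budget from the figures quoted above (all estimates).
psu_rating_w = 75
psu_efficiency = 0.90
deliverable_w = psu_rating_w * psu_efficiency   # 67.5 W the brick can actually supply

measured_load_w = 33                            # midpoint of the observed 30-35 W full load
headroom_w = deliverable_w - measured_load_w    # ~34.5 W of unused budget

# Splitting an assumed 40 W ceiling: GPU ~10 W, rest of system ~10-15 W (12.5 W midpoint),
# leaving roughly 15-20 W for the CPU.
cpu_estimate_w = 40 - 10 - 12.5

print(deliverable_w, headroom_w, cpu_estimate_w)
```

The arithmetic checks out internally: a ~17.5W CPU midpoint lands inside the 15-20W range claimed above.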
In case you are wondering why some games run worse on the Wii U than on 7th-generation consoles, I have a simple explanation: the Wii U's hardware is noticeably different from the Xbox 360's or PlayStation 3's, because their processors are not pure CPUs and handle GPU-related tasks well, whereas the Wii U's is primarily a CPU. Another reason is that developers don't spend the resources and time to port games properly to the Wii U, so the ports lack proper optimization and adaptation of their engines to the Wii U's hardware. Even so, while some ports perform worse than on 7th-generation consoles, most run at higher resolution and/or native 720p/HD.
Most if not all 3rd-party launch games were made on older Wii U development kits with 20 to 40% lower clocks than the final kit Nintendo released near launch, so developers did not have much time to adapt their games to the final devkit, and the games ran poorly. Games like Darksiders 2, Warriors Orochi 3 Hyper and Batman: Arkham City used only one or two cores, while the third was barely used, if at all, to aid CPU-related performance. Since most ports come from the Xbox 360 and/or PlayStation 3 versions, incompatibilities are inevitable: the Xbox 360 and PlayStation 3 processors handle both CPU and GPU tasks, and their GPUs, RAM/memory, latency and other factors all differ from the Wii U's, so optimization and adaptation are needed; cheap ports, as always, tend to be a train wreck. Don't you agree?
The Xbox 360 and PlayStation 3 will be supported for the next three years, which is in a way a negative, since it can really hold back the performance of Wii U games if those games are mostly ports from the Xbox 360 and PlayStation 3; the architectures of those two consoles are vastly different from the Wii U's. We may only see the Wii U shine after those three years, once the Xbox 360 and PlayStation 3 stop being supported; until then, those two consoles will hold back the development time spent on the Wii U.
The Wii U may not be the same kind of leap as the Xbox One or PlayStation 4, but it is a leap over the Xbox 360 and PlayStation 3 even at a rough glance, and once you account for all the implementations, features and "tricks", the gap grows even bigger. The Wii U has more embedded RAM than the Xbox One, which has 32MB of eSRAM while the Wii U has 36MB of superior eDRAM for the GPU; eDRAM also blows away the PlayStation 4's GDDR5 in terms of bandwidth, if I am correct: 130/275/550GB/s on the Wii U versus 80GB/s on the PlayStation 4?
We need to take into consideration that the Wii U's CPU, Espresso, has a certain implementation from the Power7 architecture that allows it to use eDRAM. We know the CPU has a total of 3MB of eDRAM as L2 cache, and it could also use the main eDRAM pool as an L3 cache while maintaining a connection with the GPU, creating HSA/hUMA-like capabilities and greatly reducing the latency of CPU-GPU communication and data transfer.
The Wii U's GPU was a Radeon HD 4000 series part in the very first alpha development kit, specifically a Radeon HD 4850 (55nm, not the 40nm RV740). By the time the Wii U was first unveiled, the Radeon HD 6000 series had been on the scene for nearly a year (now almost three), so Nintendo could easily have switched to the Radeon HD 6000 series, which is basically an evolution of the Radeon HD 5000, itself a refinement of the HD 4000. This is further supported by the use of Eyefinity features on the Wii U's GamePad, as demonstrated by the Unity demo on the Wii U and by Call of Duty: Black Ops 2 in co-op Zombie mode. The Wii U can stream up to two images to GamePads, though this hasn't been used yet; maybe it will be in the near future.
People also seem to forget the power consumption of dedicated GPUs versus ones embedded on the motherboard, which naturally consume less. The Wii U's GPU uses eDRAM that consumes 1 to 2 watts at most, compared to GDDR5, which consumes around 9 or more watts per 2GB. The GPU in the Wii U is embedded, so it needs no PCI-E, no additional PCB and no extra chips, and with the eDRAM embedded in the die its consumption could be reduced to nearly zero.
Let's take the Wii U's GPU die size and the Radeon HD 6970's die size as an example, and assume the Wii U GPU is a VLIW4-based Radeon HD 6000 series part rather than VLIW5. The Radeon HD 6970 consumes 250 watts maximum, has a die size of 389mm^2 and 2GB of GDDR5, and is clocked at 880MHz. Cut it down to 320 SPUs, which would use about 80mm^2, and consumption drops to roughly 83 watts; remove the 2GB of GDDR5 and it goes down to 70-73 watts. Now lower the clock from 880MHz to 550MHz, roughly 35% lower: since even a 25% clock reduction can roughly halve power (the voltage drops too), a 35% reduction takes the GPU from 70-73 down to roughly 14-15 watts without even being embedded, and we could easily shave off a couple more watts, most likely reaching 10 watts.
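The downscaling estimate above can be written out step by step. Every scaling factor here is the rough assumption from the paragraph, not a measured law (real power scales with voltage squared as well as clock, so treat this as back-of-envelope only):

```python
# Back-of-envelope GPU power downscale, following the assumptions in the text above.
full_power_w = 250.0          # Radeon HD 6970 maximum board power
after_spu_cut_w = 83.0        # the text's estimate after cutting down to 320 SPUs
after_no_gddr5_w = after_spu_cut_w - 11.0   # dropping 2 GB of GDDR5 (~9-12 W) -> ~72 W

# Assumption from the text: lowering the clock ~35% (880 -> 550 MHz) lets the voltage
# drop too, so power falls far more than linearly, to roughly a fifth.
clock_scale = 550 / 880                      # ≈ 0.625 of the original clock
assumed_power_scale = 0.20                   # the text's ~14-15 W out of ~72 W
embedded_estimate_w = after_no_gddr5_w * assumed_power_scale

print(round(clock_scale, 3), round(embedded_estimate_w, 1))  # ≈ 14.4 W
```

Under these assumed scaling factors the chain of estimates is at least internally consistent, landing in the 14-15 watt range claimed above before any savings from embedding.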
We can't really compare the Wii U's GPU to any off-the-shelf dedicated GPU, since some features found in regular dedicated GPUs are unnecessary when embedded; with some minor modifications I can see the Wii U having 384 SPUs (around 420 Gflops at 550MHz) rather than the 320 SPUs a standard dedicated GPU of this die size would carry. If Nintendo made drastic modifications they could even reach 500 SPUs and roughly 550-600 Gflops. One homebrew hacker counted 600 shaders, so I wonder whether Nintendo is using a technology that AMD has never shipped, inherited from ATI (who also never used it), called "high density", which is slated for AMD's Steamroller cores. From the information AMD has released, "high density" can increase chip density by 30%, reduce size by 30% and reduce power consumption.
*I won't link most of this information, though I can assure you I did a lot of research and digging on the internet, collecting bits and pieces and putting them together into one picture until I had that "a-ha!" or "EUREKA!" feeling.