S|A : "Microsoft XBox Next will use an x86 AMD APU instead of PowerPC"


NostaSeronx

Diamond Member
Sep 18, 2011
3,811
1,290
136
Microsoft(Particular Needs & Wants) -> AMD(Separate CPU & GPU development) -> IBM(CPU & GPU glue) -> GlobalFoundries(Main foundry)

^-- What I heard.
 

StrangerGuy

Diamond Member
May 9, 2004
8,443
124
106
I would rather see an AMD APU in a next gen console than yet another overhyped, overengineered, overpriced, underperforming PowerPC + PC GPU frankenstein.
 

Ancalagon44

Diamond Member
Feb 17, 2010
3,274
202
106
See, reasons that can be addressed other than "because the slide says so, it is good!"

A) What makes that compelling now when in the past it has not?
B) Same as A
C) See A
D) This doesn't really fly. AMD and Intel haven't been drop-in replacements for one another since the 486 (or maybe Pentium) days.
E) Same as A again
F) I will agree with the spirit, but MS, Sony, Nintendo, etc. have never seen this as a tipping point
G) This did not hold true for the Xbox, why would it now?
H) Depends on money spent, as with most things.

All those reasons are good reasons, I'll admit, but they have never been reasons that mattered enough to drive the CPU choice for a given console. What do you feel is suddenly different where they matter?

Okay here is my rebuttal for your rebuttal to A:

Microsoft is going x86 for the XBox Next, and by all accounts, so is Sony.

There goes half your argument unless you can prove that all of the rumours are false.
 

Gideon

Platinum Member
Nov 27, 2007
2,030
5,035
136
I still don't get who started that stupid "Sony will go x86 and have an AMD GPU" rumor. AFAIK there haven't been any even slightly credible sources for it, and it just seems stupid:

"Hey, we're late to the party? Let's team up with our competitor's supplier and longtime partner (in other areas). And while we are at it, let's build our consoles on an ISA of which we have no knowledge."
 

Ancalagon44

Diamond Member
Feb 17, 2010
3,274
202
106
I still don't get who started that stupid "Sony will go x86 and have an AMD GPU" rumor. AFAIK there haven't been any even slightly credible sources for it, and it just seems stupid:

"Hey, we're late to the party? Let's team up with our competitor's supplier and longtime partner (in other areas). And while we are at it, let's build our consoles on an ISA of which we have no knowledge."

Here we go
 

Ancalagon44

Diamond Member
Feb 17, 2010
3,274
202
106
That's so fake it hurts. An iGPU with 1843 GFlops plus an HD 7970. Sure thing.

Not to mention their "PS4" will use around 350-400W. That's a new record for consoles too.

1843 GFlops is only slightly higher than a 7850 - source

There was a better article, I think at SemiAccurate or one of the other rumour sites - Charlie or something, I think. Can't remember.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,811
1,290
136
1152 GCN 2.0 ALUs
1152 single-precision (32-bit) ALUs * 2 FLOPs per FMA * 0.8 GHz = 1,843.2 GFLOPS.
Coincidence? The only answer I can come up with is "Aliens."
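
For what it's worth, here's a minimal sketch of that arithmetic; the ALU count and clock are just the rumoured figures above, not confirmed specs:

Code:
#include <stdio.h>

/* Back-of-the-envelope check of the GFLOPS figure above. The ALU count
 * and clock are the rumoured numbers from this thread, nothing official. */
int main(void) {
    double alus = 1152.0;       /* rumoured GCN shader ALU count     */
    double ops_per_fma = 2.0;   /* one FMA counts as 2 FP operations */
    double clock_ghz = 0.8;     /* rumoured 800 MHz shader clock     */

    printf("Peak: %.1f GFLOPS\n", alus * ops_per_fma * clock_ghz);
    /* Prints: Peak: 1843.2 GFLOPS */
    return 0;
}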

150 Watts for CPU.
150 Watts for GPU.
50 Watts for misc.

~350 Watts

The PlayStation 3 CPU & GPU had a TDP of 330W total, but it largely never used that much because there was no FurMark or power-virus benchmark for the Cell.

I'm going to guess that, since the hardest thing a PS4 can run is a game, it will be around 210 Watts of power usage.
 
Last edited:

NTMBK

Lifer
Nov 14, 2011
10,450
5,833
136
1152 single-precision (32-bit) ALUs * 2 FLOPs per FMA * 0.8 GHz = 1,843.2 GFLOPS.
Coincidence? The only answer I can come up with is "Aliens."

(image: Ancient Aliens guy meme, Giorgio Tsoukalos)
 

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
You sound like piracy doesn't exist on consoles... nice fallacy.

Ubisoft wants to talk to you :p

Don't get me wrong, I don't like consoles either... for the exact same reason... but they aren't the only cause of bad console ports.

But I do enjoy talking about them...
Consoles tend to be very well studied architectures...

A good example is AMD's infamous flickering: it was not a driver problem, it was a hardware bug in its VLIW architecture... the bug seems to be corrected in GCN :)
 

Arzachel

Senior member
Apr 7, 2011
903
76
91
Microsoft(Particular Needs & Wants) -> AMD(Separate CPU & GPU development) -> IBM(CPU & GPU glue) -> GlobalFoundries(Main foundry)

^-- What I heard.

You mean separate chips on an interposer? Would be awesome if true and would certainly explain the yields, but I didn't think the tech for that was production-ready?

AMD would probably nearly give away the designs if that meant a large boost to HSA adoption and a chance to play with interposers with someone else footing the bill.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
But I do enjoy talking about them...
Consoles tend to be very well studied architectures...

A good example is AMD's infamous flickering: it was not a driver problem, it was a hardware bug in its VLIW architecture... the bug seems to be corrected in GCN :)

Good point, otherwise, I too h8t consoles, but I hate console ports even more :'(
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
Considering MS wants to push Windows8 interface as a universal interface, they really only have two options:

ARM
x86

You think an ARM xbox is gonna happen? If so, it's gonna be worse for game quality than any x86 option.

Sony, hopefully, has learned not to screw too much with developers and will come out with hardware that is at least somewhat similar to something else so developers don't need a PhD in PS4 to get decent performance out of it.

I hope both have learned the pitfalls of being skimpy on memory. I think that's at least as limiting, if not more so, for the current consoles than the CPU / GPU performance. You hear developers all the time bitching about fitting things into the 256 MB system / 256 MB video split of the PS3, and even some who bitch about the 512 MB shared memory of the Xbox 360.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Considering MS wants to push Windows8 interface as a universal interface, they really only have two options:

ARM
x86

You think an ARM xbox is gonna happen? If so, it's gonna be worse for game quality than any x86 option.

Sony, hopefully, has learned not to screw too much with developers and will come out with hardware that is at least somewhat similar to something else so developers don't need a PhD in PS4 to get decent performance out of it.

I hope both have learned the pitfalls of being skimpy on memory. I think that's at least as limiting, if not more so, for the current consoles than the CPU / GPU performance. You hear developers all the time bitching about fitting things into the 256 MB system / 256 MB video split of the PS3, and even some who bitch about the 512 MB shared memory of the Xbox 360.

And if Sony didn't learn their lesson, Gran Turismo 6 might get released for the PS4 when the PS5 starts shipping.
 

IlllI

Diamond Member
Feb 12, 2002
4,927
11
81
I'm all for supporting an x86 console if it means console ports will suck less than they do now.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
I don't know what GDDR5 is going for per Gbit, but I imagine we'll be seeing at least 2GB, as someone mentioned in an earlier post. Of course, I have no idea if either company is looking at an MCM with on-board DRAM; they could get away with DDR3 and a fat interface that way.
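
To give a rough sense of the DDR3-plus-fat-interface trade-off, here's a toy bandwidth comparison; the bus widths and data rates below are made-up round numbers, not rumoured specs for either console:

Code:
#include <stdio.h>

/* Toy memory-bandwidth comparison. Bus widths and data rates below are
 * illustrative round numbers, not rumoured console specifications. */
static double gbytes_per_sec(double bus_bits, double mtransfers_per_sec) {
    return bus_bits / 8.0 * mtransfers_per_sec / 1000.0;   /* GB/s */
}

int main(void) {
    /* Hypothetical 256-bit GDDR5 at 5.0 GT/s vs 256-bit DDR3 at 2.133 GT/s */
    printf("GDDR5: %.0f GB/s\n", gbytes_per_sec(256.0, 5000.0));  /* 160 GB/s */
    printf("DDR3 : %.0f GB/s\n", gbytes_per_sec(256.0, 2133.0));  /* ~68 GB/s */
    return 0;
}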
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,811
1,290
136
You mean separate chips on an interposer? Would be awesome if true and would certainly explain the yields, but I didn't think the tech for that was production-ready?

AMD would probably nearly give away the designs if that meant a large boost to HSA adoption and a chance to play with interposers with someone else footing the bill.
No, the CPU and GPU are going to be on the same die. IBM is the one creating the interconnect between the CPU & GPU.

--
The instruction sets for the Sony and Microsoft consoles are incompatible with each other and incompatible with the chips on the Windows & Linux side of things.

To separate things:
AMD Sony x86-64 ISA, RISC
AMD Microsoft x86-64 ISA, RISC
AMD64, CISC
Intel 64, CISC
Intel MIC, RISC
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,811
1,290
136
I don't get this part. Isn't x86-64 CISC? Also, isn't MIC CISC also?
If it is AMD, and if it uses Steamroller or Jaguar, then SSE4.1/SSE4.2/AVX/AVX2 aren't needed.

MIC only has ZMM registers so you can't do operations that use MMX, XMM, or YMM registers.

x86-64 = x86, x64, x87, MMX, SSE, SSE2 <-- you need to support at least this to be considered x86-64
x86 MIC = some x86/x64, x87, AVX3
Sony = x86, x64, SSE2, SSE3, XOP.
Microsoft = x86, x64, SSE2, SSE4.1, AVX.

If you are making a custom chip for a custom operating system, then you don't need overlapping instruction set extensions; it's a waste of space and performance.
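
As a rough way to picture that claim, here's a minimal sketch that intersects those (entirely speculative) extension lists as bit sets; the Sony and Microsoft line-ups are the rumoured ones from this post, not confirmed specs:

Code:
#include <stdio.h>

/* Each rumoured extension list becomes a bit set; intersecting them shows
 * that only the shared baseline survives. The lists are thread speculation. */
enum {
    X86   = 1 << 0, X64  = 1 << 1, X87  = 1 << 2, MMX = 1 << 3,
    SSE   = 1 << 4, SSE2 = 1 << 5, SSE3 = 1 << 6,
    SSE41 = 1 << 7, AVX  = 1 << 8, XOP  = 1 << 9,
};

int main(void) {
    unsigned sony = X86 | X64 | SSE2 | SSE3  | XOP;   /* rumoured Sony chip      */
    unsigned msft = X86 | X64 | SSE2 | SSE41 | AVX;   /* rumoured Microsoft chip */

    unsigned shared = sony & msft;                    /* common baseline only    */
    printf("shared bits: 0x%x\n", shared);            /* X86 | X64 | SSE2 = 0x23 */
    return 0;
}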

RISC is 30%(aggregate) better than CISC.
 
Last edited:

sefsefsefsef

Senior member
Jun 21, 2007
218
1
71
If you are making a custom chip for a custom operating system, then you don't need overlapping instruction set extensions; it's a waste of space and performance.

RISC is 30%(aggregate) better than CISC.

I understand that consoles don't need to be feature complete with respect to the x86 ISA in order to have fast performance in videogames, but that doesn't change whether an x86-based chip is CISC/RISC. You'd have to strip out an awful lot from x86, and then replace it with an awful lot (including the most common operations), in order to transform it from CISC->RISC, and by that point you couldn't really call it x86 any more, because it would be an all-new ISA. Intel/AMD chips are already internally-RISC (after translation from CISC in the decode stage), so I don't see what you're really trying to do with this.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
I understand that consoles don't need to be feature complete with respect to the x86 ISA in order to have fast performance in videogames, but that doesn't change whether an x86-based chip is CISC/RISC. You'd have to strip out an awful lot from x86, and then replace it with an awful lot (including the most common operations), in order to transform it from CISC->RISC, and by that point you couldn't really call it x86 any more, because it would be an all-new ISA. Intel/AMD chips are already internally-RISC (after translation from CISC in the decode stage), so I don't see what you're really trying to do with this.

I don't know about that. My background at an ISA level is more with RISC, but do modern compilers really spit out a lot of x86 macro-ops?

Andy Grove, IIRC, mentioned that the space needed for x86 decoders would continue to shrink relative to the total number of transistors used as time went on, and that this would be the reason x86 would be able to beat RISC (along with the fact that the RISC market was smaller than the x86 market, at least in units of CPUs sold). This seems largely to have come true, even though a lot of emphasis is still placed on decoders.
 

sefsefsefsef

Senior member
Jun 21, 2007
218
1
71
I don't know about that. My background at an ISA level is more with RISC, but do modern compilers really spit out a lot of x86 macro-ops?

In x86, the most common instructions (in % of compiler-emitted code, for example) are also the shortest instructions (in number of bytes). These common instructions are even shorter than their RISC counterparts (discounting THUMB, I guess). Even "simple" x86 instructions, like "add a[5], b[c]" (pseudo x86, obviously), would be broken up into several RISC instructions (in that case, something like: ld r1,a[5]; ld r2,b[c]; add r3,r1,r2; st a[5],r3; so one extremely common x86 instruction can easily become 4+ RISC instructions). This kind of code is emitted all the time by compilers.

A modern x86 CPU will read the short "add a[5], b[c]" instruction and then internally change it into a series of RISC-like instructions, including register renaming (because 8 architected registers isn't enough to get high ILP), and then run it through the typical OoO engine and functional units, which are common in RISC machines also.
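
To make that concrete with a form that actually encodes (the two-memory-operand "add a[5], b[c]" is pseudo-x86, as noted above), here's a small sketch; the assembly and the micro-op split in the comments are illustrative, not dumped from a real compiler or pipeline:

Code:
/* A read-modify-write add with one memory operand shows the same
 * CISC-to-RISC split described above. Names are illustrative only. */
int a[8];

void bump(int x) {
    a[5] += x;
    /* Typical x86-64 output is a single instruction touching memory,
     * roughly:
     *     add dword ptr [a+20], edi
     *
     * Inside the core it is cracked into RISC-style micro-ops, roughly:
     *     load  t1, [a+20]
     *     add   t2, t1, x
     *     store [a+20], t2
     */
}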
 
Last edited:

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
In x86, the most common instructions (in % of compiler-emitted code, for example) are also the shortest instructions (in number of bytes). These common instructions are even shorter than their RISC counterparts (discounting THUMB, I guess). Even "simple" x86 instructions, like "add a[5], b[c]" (pseudo x86, obviously), would be broken up into several RISC instructions (in that case, something like ld r1,a[5]; ld r2,b[c]; add r3,r1,r2; st a[5],r3; so one extremely common x86 instruction can easily become 4+ RISC instructions). This kind of code is emitted all the time by compilers.

A modern x86 CPU will read the short "add a[5], b[c]" instruction and then internally change it into a series of RISC-like instructions, including register renaming (because 8 architected registers isn't enough to get high ILP), and then run it through the typical OoO engine and functional units, which are common in RISC machines also.


What you're alluding to, I think, is that the distinction of RISC vs CISC is no longer a meaningful distinction. Pretty much everything out there now is a hybrid of some sort.
 

sefsefsefsef

Senior member
Jun 21, 2007
218
1
71
What you're alluding to, I think, is that the distinction of RISC vs CISC is no longer a meaningful distinction. Pretty much everything out there now is a hybrid of some sort.

The microarchitecture is definitely a hybrid, but the ISA is still strictly CISC. All the short instructions were used up years ago, so whenever they add new instructions (like AVX2) they have to use longer and longer instruction formats.