NostaSeronx
Diamond Member
Microsoft (Particular Needs & Wants) -> AMD (Separate CPU & GPU development) -> IBM (CPU & GPU glue) -> GlobalFoundries (Main foundry)
^-- What I heard.
See, these are reasons that can actually be addressed, rather than just "because the slide says so, it is good!"
A) What makes that compelling now when in the past it has not?
B) Same as A
C) See A
D) This doesn't really fly. AMD and Intel haven't been drop-in replacements for one another since the 486 (or maybe Pentium) days.
E) Same as A again
F) I will agree with the spirit, but MS, Sony, Nintendo, etc. have never seen this as a tipping point.
G) This did not hold true for the Xbox, why would it now?
H) Depends on money spent, as with most things.
All those reasons are good ones, I'll admit, but they have never mattered enough to drive the CPU choice for a given console. What do you feel is suddenly different that makes them matter now?
I still don't get who started that stupid "Sony will go x86 and have an AMD GPU" rumor. AFAIK there haven't been any even slightly credible sources for it, and it just seems stupid:
"Hey, we're late to the party? Let's team up with our competitor's supplier and longtime partner (in other areas). And while we're at it, let's build our console on an ISA of which we have no knowledge."
That's so fake it hurts. An iGPU with 1843 GFLOPS plus an HD 7970. Sure thing.
Not to mention their "PS4" would use around 350-400 W. That's a new record for consoles, too.
1152 GCN 2.0 ALUs: 1152 single-precision (32-bit) ops × 2 (FMA) × 0.8 GHz = 1,843.2 GFLOPS.
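A quick sanity check of that arithmetic (a minimal sketch in C; the 1152-ALU count, the 2-ops-per-FMA assumption, and the 0.8 GHz clock are just the rumored figures above):

```c
#include <stdio.h>

/* Back-of-the-envelope peak throughput for the rumored GPU.
 * Assumes each ALU retires one FMA (= 2 floating-point ops) per clock. */
int main(void) {
    const double alus = 1152;        /* rumored GCN 2.0 ALU count */
    const double ops_per_clock = 2;  /* multiply + add per FMA    */
    const double clock_ghz = 0.8;    /* rumored core clock (GHz)  */

    printf("Peak: %.1f GFLOPS\n", alus * ops_per_clock * clock_ghz);
    return 0;  /* prints: Peak: 1843.2 GFLOPS */
}
```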
Coincidence? The only answer I can come up with is "Aliens."
You sound like piracy doesn't exist on consoles... nice fallacy.
Microsoft (Particular Needs & Wants) -> AMD (Separate CPU & GPU development) -> IBM (CPU & GPU glue) -> GlobalFoundries (Main foundry)
^-- What I heard.
You mean separate chips on an interposer? Would be awesome if true, and it would certainly explain the yields, but I didn't think the tech for that was production-ready?
AMD would probably nearly give away the designs if that meant a large boost to HSA adoption and a chance to play with interposers with someone else footing the bill.
But I do enjoy talking about them...
Consoles tend to be very well-studied architectures...
A good example is AMD's infamous flickering: it was not a driver problem, it was a hardware bug in its VLIW architecture. The bug seems to be corrected in GCN.
Considering MS wants to push the Windows 8 interface as a universal interface, they really only have two options:
ARM
x86
You think an ARM Xbox is gonna happen? If so, it's gonna be worse for game quality than any x86 option.
Sony, hopefully, has learned not to screw too much with developers and will come out with hardware that is at least somewhat similar to something else so developers don't need a PhD in PS4 to get decent performance out of it.
I hope both have learned the pitfalls of being skimpy on the memory. I think that's at least as limiting to the current consoles, if not more so, than the CPU/GPU performance. You hear developers all the time bitching about fitting things into the 256 MB CPU / 256 MB GPU of the PS3, and even some who bitch about the 512 MB shared of the Xbox 360.
No, the chips are going to be on the same die. IBM is the one creating the interconnect between the CPU & GPU.
To separate things:
AMD Sony x86-64 ISA, RISC
AMD Microsoft x86-64 ISA, RISC
AMD64, CISC
Intel 64, CISC
Intel MIC, RISC
If it is AMD, and if it uses Steamroller or Jaguar, SSE4.1/SSE4.2/AVX/AVX2 aren't needed.
I don't get this part. Isn't x86-64 CISC? Also, isn't MIC CISC as well?
If you are making a custom chip for a custom operating system, then you don't need overlapping instruction set extensions; they're a waste of space and performance.
RISC is 30% (aggregate) better than CISC.
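To illustrate the overlapping-extensions point (a minimal sketch, assuming an x86 target built with GCC or Clang, which provide __builtin_cpu_supports): on a PC, code has to probe for each extension at runtime and carry a fallback; on a console with one fixed chip, both the probe and the fallback path are dead weight.

```c
#include <stdio.h>

/* On a PC, software cannot assume SSE4.1/AVX exist, so it checks at
 * runtime and keeps a scalar fallback around. A console with a single
 * fixed chip could drop both the check and the fallback path. */
int main(void) {
    if (__builtin_cpu_supports("avx"))
        printf("AVX present: dispatch to the AVX code path\n");
    else
        printf("No AVX: dispatch to the scalar fallback\n");
    return 0;
}
```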
I understand that consoles don't need to be feature complete with respect to the x86 ISA in order to have fast performance in videogames, but that doesn't change whether an x86-based chip is CISC/RISC. You'd have to strip out an awful lot from x86, and then replace it with an awful lot (including the most common operations), in order to transform it from CISC->RISC, and by that point you couldn't really call it x86 any more, because it would be an all-new ISA. Intel/AMD chips are already internally-RISC (after translation from CISC in the decode stage), so I don't see what you're really trying to do with this.
I don't know about that. My background at an ISA level is more with RISC, but do modern compilers really spit out a lot of x86 macro-ops?
In x86, the most common instructions (in % of compiler-emitted code, for example) are also the shortest instructions (in number of bytes). These common instructions are even shorter than their RISC counterparts (discounting THUMB, I guess). Even "simple" x86 instructions, like "add a[5], b[c]" (pseudo x86, obviously), would be broken up into several RISC instructions (in that case, something like ld r1,a[5]; ld r2,b[c]; add r3,r1,r2; st a[5],r3; so one extremely common x86 instruction can easily become 4+ RISC instructions). This kind of code is emitted all the time by compilers.
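Here's that example spelled out (a small sketch; the instruction sequences in the comments are pseudo-asm, as in the post above, not exact x86 or ARM syntax):

```c
#include <stdio.h>

int main(void) {
    int a[8] = {0}, b[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int c = 3;

    /* CISC (pseudo-x86):  add a[5], b[c]    -- one short instruction
     * RISC (pseudo):      ld  r1, a[5]
     *                     ld  r2, b[c]
     *                     add r3, r1, r2
     *                     st  a[5], r3      -- four instructions     */
    a[5] += b[c];

    printf("a[5] = %d\n", a[5]);  /* prints 4 */
    return 0;
}
```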
A modern x86 CPU will read the short "add a[5], b[c]" instruction and then internally change it into a series of RISC-like instructions, including register renaming (because 8 architected registers aren't enough to get high ILP), and then run it through the typical OoO engine and functional units, which are common to RISC machines as well.
What you're alluding to, I think, is that RISC vs. CISC is no longer a meaningful distinction. Pretty much everything out there now is a hybrid of some sort.