R Read - "Our semi-custom APUs" = Xbox 720 + PS4?


Tuna-Fish

Golden Member
Mar 4, 2011
1,646
2,464
136
Sorry, missed your post originally.

http://www.top500.org/
It isn't ideal, but when comparing cross platform architectures it is the easiest tool we have. And yes, not only is it used, it is the industry standard.

No. The industry standard for comparing CPU performance is SPEC. FLOPS is used for supercomputers and other stream-like computing, which is not the same thing as CPU speed at all. (It is a somewhat useful metric for GPU speed, but even there it is not the sole defining criterion.)

At the clockspeeds you quoted, using your guidelines, that makes Jaguar barely faster than Cell, seven years later. Is that a good generational leap?

If CPU speed is what you are interested in, the correct thing in Cell to compare against is the single PPE. The SPEs are not CPUs, or really even parts of CPUs. They are closer to the vector elements from the PS2 than they are to an actual CPU. For what they do, the new system would use GCN SPs.

That doesn't take away from the lack of progress in everything besides shader hardware.

Other than the humongous leap in geometry performance?

The rest scales with screen area, not scene complexity. And since they are not aiming above 1080p, there is no reason to go for more.
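For illustration, a minimal sketch of the arithmetic behind "scales with screen area" (the 720p and 1080p targets are assumed here, not taken from the posts): fill rate, ROP throughput and framebuffer bandwidth needs grow with the pixel count, roughly 2.25x going from 720p to 1080p, while geometry load tracks scene complexity instead.

```c
/* Pixel-count comparison: roughly how much more per-pixel work 1080p
 * demands over 720p. Geometry work does not scale with this ratio. */
#include <stdio.h>

int main(void)
{
    const long px_720p  = 1280L * 720;   /*   921,600 pixels */
    const long px_1080p = 1920L * 1080;  /* 2,073,600 pixels */

    printf("720p:  %ld pixels\n", px_720p);
    printf("1080p: %ld pixels\n", px_1080p);
    printf("scaling factor: %.2fx\n", (double)px_1080p / px_720p);
    return 0;
}
```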
 

NTMBK

Lifer
Nov 14, 2011
10,411
5,677
136
:rolleyes: I've probably been doing that since b/4 1995, big deal. Are you the Microsoft police?

Well I can only remember about as far back as '95, so hey :)

Don't you know? Microsoft are cool now because they brought out a decent product recently (W7) and are the underdog when it comes to MP3 players, phones and tablets. You should be typing "MS" instead of "M$" and Apple as "$$$Apple$$$moneygrabbing*****$$$$".

Oh and Android as "thou-can't-do-no-wrongeth".

:biggrin:

Yeah, I know they're all as bad as each other :p It's just that this ridiculous "M$", "Crapple", "iPoop", "Samesung" playground namecalling that goes on is infantile.
 

anongineer

Member
Oct 16, 2012
25
0
0
As I mentioned above, just because AMD's release of Kaveri is delayed doesn't mean custom silicon for M$, built off Kaveri's design, is delayed. If M$ is footing the bill for production, then there is now cash for AMD in terms of getting M$'s console chip shipped.

I meant to say that, if both designs target the same process, then they will inevitably compete with each other for fab capacity, and delaying one so that both need to ramp production in the same time frame will only exacerbate the problem.
 

psoomah

Senior member
May 13, 2010
416
0
0
I meant to say that, if both designs target the same process, then they will inevitably compete with each other for fab capacity, and delaying one so that both need to ramp production in the same time frame will only exacerbate the problem.

Hence a Trinity refresh in 2Q 2013. A successful next-gen Xbox launch is likely more vital to AMD's future plans than a few months' delay of Kaveri.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
No. The industry standard for comparing CPU performance is SPEC.

Holy ignore the majority of the computing world. About 2 million computing platforms are being sold *per day* that SPEC won't run on, in one market segment alone. This year, all told, we are going to be at around 750 million consumer computing devices that SPEC does not support/can't be compiled for. That is quite a bit larger than the entire PC market. SPEC is a cute little niche bench for marginal markets; it isn't in the league of being a viable cross-platform benchmark for CPUs overall. SPEC is a nice little benchmark for a small part of the computing world: quaint, outdated and of little relevance in comparing modern computing platforms.

Other than the humongous leap in geometry performance?

A ~90% improvement in geometry performance for a console generation qualifies as *catastrophic*.

You're missing rather basic problems that the Cell exacerbates

We aren't discussing transistor optimization in this thread; we already had that discussion. You concluded the entire world should pay billions to make your life easier, in fact I think you said it was the only reasonable approach? That is a discussion that won't get us anywhere: you think I should pay a Cerb tax so you can take it easy, I don't, and we won't ever agree :)
 

Bobisuruncle54

Senior member
Oct 19, 2011
333
0
0
Holy ignore the majority of the computing world. [...] SPEC is a nice little benchmark for a small part of the computing world: quaint, outdated and of little relevance in comparing modern computing platforms.

A ~90% improvement in geometry performance for a console generation qualifies as *catastrophic*.

We aren't discussing transistor optimization in this thread; we already had that discussion. [...] you think I should pay a Cerb tax so you can take it easy, I don't, and we won't ever agree :)

Nobody really agrees with you because you're flat out wrong in many cases. But, promote Cell all you want, nobody takes you seriously on it.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
But, promote Cell all you want, nobody takes you seriously on it.

It has nothing to do with Cell per se; it has to do with a compute architecture direction: more resources devoted to computational throughput. I have also been a big supporter of GPGPU, on the same principles.

Nobody really agrees with you because you're flat out wrong in many cases.

History will decide that. Thirteen years of archives on this forum; I could write an extremely lengthy post about all the popular idiocy I openly mocked while people were saying I was wrong. Actually, you people are doing it now. I have been saying for years that from a perf/mm perspective Cell's approach is *by far* the best one to take. nVidia has obviously agreed for years, Intel is on board, and AMD has come around too. Of course, I clearly must be flat out wrong because you guys don't like Cell. That is why all of these companies are dropping billions of dollars building devices that emulate the compute-centric nature of Cell at the expense of code flexibility. I've been saying it for years: Xeon Phi, the Radeon direction, Tesla, they all must be wrong too.
 
Last edited:

Ajay

Lifer
Jan 8, 2001
16,094
8,112
136
Well I can only remember about as far back as '95, so hey :)


Yeah, I know they're all as bad as each other :p It's just that this ridiculous "M$", "Crapple", "iPoop", "Samesung" playground namecalling that goes on is infantile.

Well, for me it's force of habit. I have been using 'MS' more frequently, but 'M$' still just flies off the keyboard; what is that, muscle memory or something like it?
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
We aren't discussing transistor optimization in this thread; we already had that discussion. You concluded the entire world should pay billions to make your life easier, in fact I think you said it was the only reasonable approach? That is a discussion that won't get us anywhere: you think I should pay a Cerb tax so you can take it easy, I don't, and we won't ever agree :)
Why do you think it's being put in GPUs now? The tax is not having it. It's entirely the opposite of the way you want it to be. Computational throughput that gets wasted, because it's only halfway useful for 0.001% of problems, may as well not have existed. nVidia and AMD realize this, for instance, and have been working precisely to help fix it, including by doing just what you keep saying takes too much space: gradually implementing real virtual memory.

I certainly have no clout at nVidia, nor AMD. So, why are they doing what I think is good? :hmm: Maybe because >95% of the rest of the computing world realizes that increasing the ability to effectively utilize memory is better than throwing more paper-performance-only arithmetic units at problems. Nobody else wants to go backwards like that. Not merely that, but they aren't. I'm stating how things are, and how they have been.

That is why all of these companies are dropping billions of dollars building devices that emulate the compute-centric nature of Cell at the expense of code flexibility. I've been saying it for years: Xeon Phi, the Radeon direction, Tesla, they all must be wrong too.
No, they're exactly right. They are all increasing flexibility, when they could have just kept adding FUs.

* Xeon Phi: Each CPU core not only has a normal x86 MMU, capable of running most regular x86 code, but the whole device is cache-coherent!
* Radeon: GCN has begun to implement virtual memory, has a more versatile memory ordering model, and decoupled scalar/vector EUs.
* Geforce/Quadro/Tesla: Fermi started with address translation, and Kepler has better virtual memory support than GCN. Also, Fermi and Kepler are cache-coherent within each SM.

GCN, Fermi, and Kepler also support indirect memory accesses (only for virtual functions, I think, right now). Also, nVidia is working towards integrating CPU cores into their high-end GPUs, by all accounts.

That is not the direction of the Cell. The Cell was very Itanium-like, trying to push off the complexity where it doesn't belong: code and data stuffed into slow memory. Programmers and compilers can only do so much with the memory wall in place. Hardware must take up the slack, because if not, the hardware will just be twiddling its thumbs most of the time. The whole "the best programmers can just do this, this and this" line is and has been a myth. The reality is that on occasion, the best programmers can find a few areas that can be optimized by manual memory control. However, the other 99.999% of code will work out better with hardware translation and automatic heuristic-based speculation schemes (whether HW or SW).
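For illustration, a minimal sketch (plain C, not actual SPE or GPU code) of the two models being argued about here: a loop that leans on hardware caches and address translation, versus one that manually stages data through a small software-managed buffer, with memcpy standing in for explicit DMA. The buffer size and data set are arbitrary assumptions; the point is only how much transfer bookkeeping the second model pushes onto the programmer for the same result.

```c
#include <stdio.h>
#include <string.h>

#define N    (1 << 16)   /* elements in the data set (arbitrary)          */
#define TILE 256         /* "local store" capacity, in elements (assumed) */

static float data[N];

/* Hardware-managed model: caches and the MMU stage data invisibly. */
static float plain_sum(const float *src, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; i++)
        sum += src[i];
    return sum;
}

/* Software-managed model: the programmer schedules every transfer. */
static float staged_sum(const float *src, int n)
{
    float local[TILE];   /* stand-in for an SPE-style local store */
    float sum = 0.0f;

    for (int base = 0; base < n; base += TILE) {
        int chunk = (n - base < TILE) ? (n - base) : TILE;
        memcpy(local, src + base, chunk * sizeof(float)); /* "DMA in" */
        for (int i = 0; i < chunk; i++)
            sum += local[i];
    }
    return sum;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = 1.0f;

    printf("cached model:  %.0f\n", plain_sum(data, N));
    printf("managed model: %.0f\n", staged_sum(data, N));
    return 0;
}
```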
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Hence a Trinity refresh in 2Q 2013. A successful next-gen Xbox launch is likely more vital to AMD's future plans than a few months' delay of Kaveri.

AMD and NV don't make a lot of $ from console design wins. Even if AMD won all 3 next generation consoles, it's not as if it can mask their CPU non-competitiveness and lack of tablet/smartphone CPU design wins. Design wins in next gen consoles have almost no impact on AMD's survival. Chances are AMD had to undercut NV as well to win these designs which means they are probably making peanuts from each next gen console sold.

I still don't understand why you think they'd only use a Kaveri APU and that's it. That's only 512 Shaders. Also, a Kaveri APU will be $100-130 in retail, which means Sony/MS can get them for way less from AMD in volume. If they only include an APU, what else are they going to spend their $ on? There is no way Sony's budget for CPU+GPU is just $130, unless you think they are launching the PS4 at $249. MS spent $141 on the AMD GPU for the Xbox 360 on its own! The CPU+GPU budget for the Xbox 360 was nearly 50% of the BOM for the console:

[Image: Xbox 360 bill of materials breakdown]
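As a rough sanity check on those figures: the $141 GPU cost and the "nearly 50%" share come from the post above, while the ~$525 total BOM is an assumed value based on the widely cited launch-era teardown estimate, so treat the output as ballpark only.

```c
#include <stdio.h>

int main(void)
{
    const double bom_total     = 525.0; /* assumed Xbox 360 launch BOM     */
    const double gpu_cost      = 141.0; /* AMD/ATI GPU cost cited above    */
    const double cpu_gpu_share = 0.50;  /* "nearly 50% of the BOM"         */
    const double apu_low = 100.0, apu_high = 130.0; /* Kaveri retail guess */

    double cpu_gpu_budget = bom_total * cpu_gpu_share; /* roughly $262 */

    printf("Implied CPU+GPU budget last gen: ~$%.0f (GPU alone $%.0f)\n",
           cpu_gpu_budget, gpu_cost);
    printf("APU-only budget being argued against: $%.0f-$%.0f\n",
           apu_low, apu_high);
    printf("That is roughly %.0f%%-%.0f%% of the old silicon budget.\n",
           100.0 * apu_low / cpu_gpu_budget,
           100.0 * apu_high / cpu_gpu_budget);
    return 0;
}
```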


Now you are saying both companies will only include an APU for $100-130 to cover both the CPU and GPU side? If MS and Sony do this, they might as well quit making consoles. Without a large generational leap, people are not going to be able to see much difference in graphics between a PS4/next Xbox and the much cheaper PS3/360. For this reason MS and Sony have to offer a generational leap, and a Kaveri APU is not enough imo.

Furthermore, if one of them uses Kaveri and the other goes for a dedicated GPU, it's literally game over for the console that went with the APU-only design. MS and Sony are not like Nintendo - they both cater to the same group of hardcore gamers. If one of those consoles has a significant edge on the graphical side, it almost instantly makes the other console redundant, especially for MS as their 1st party support and exclusive line-up is very poor. A next gen console with just Kaveri is going to be 2-3x slower than one with a dedicated mid-range HD 7000 GPU. If anything, Sony has a huge advantage here. BluRay prices are not what they were back in 2006, and MS has to spend part of their console budget on integrating Kinect.

Sony = AMD APU + discrete GPU
MS = PowerPC CPU + discrete GPU + Kinect (which likely means the discrete GPU in MS's console will be worse for cost-cutting reasons, or MS will take a margin cut), unless MS is going with an AMD APU and ditching IBM's PowerPC entirely.

If MS and Sony go exclusively for an off-the-shelf 512 shader Kaveri APU design, it'll be a disaster for PC gaming and mean that those consoles are instantly obsolete. If that's true, might as well call all 3 next generation consoles a disappointment.
 
Last edited:

Maragark

Member
Oct 2, 2012
124
0
0
I still don't understand why you think they'd only use a Kaveri APU and that's it. That's only 512 Shaders. Also, a Kaveri APU will be $100-130 in retail, which means Sony/MS can get them for way less from AMD in volume. If they only include an APU, what else are they going to spend their $ on? There is no way Sony's budget for CPU+GPU is just $130, unless you think they are launching the PS4 at $249. MS spent $141 on the AMD GPU for the Xbox 360 on its own!

It's extremely unlikely that Kaveri will be using 512 stream processors (8 CUs), as that's what the HD 7770 uses. Taking the advantages of the GCN architecture and the latest driver update into account, Trinity's performance is somewhere between 4 and 5 GCN CUs. Kaveri will most likely have 6 CUs (384 stream processors, just like Trinity) for the top end and 4 CUs for the lower end models. The 6 CU parts will have performance somewhere between the HD 6670 and the HD 6770, ignoring the lack of GDDR5.
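For reference, a minimal sketch of the CU-to-shader arithmetic behind these configurations, assuming the standard GCN layout of 64 stream processors per CU (4 SIMD-16 units):

```c
#include <stdio.h>

int main(void)
{
    const int sp_per_cu = 64;          /* GCN: 4 SIMD-16 units per CU */
    const int configs[] = { 4, 6, 8 }; /* CU counts discussed above   */

    for (int i = 0; i < 3; i++)
        printf("%d CUs -> %d stream processors\n",
               configs[i], configs[i] * sp_per_cu);
    return 0;
}
```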

No matter what the next gen consoles are like, they'll be going up against mini-ITX budget gaming PCs costing around $300 - $400 that can game at 1080P using medium settings.
 

jpiniero

Lifer
Oct 1, 2010
16,493
6,987
136
Given all the talk about the 6670, it's probably closer to it (480 shaders). That's still a large improvement over the current gen (Xenos has 48, btw). This may still end up being the last console generation, but it won't be because of a perceived small improvement.

There is no way Sony's budget for CPU+GPU is just $130, unless you think they are launching the PS4 at $249

It might even be $199. And they can't take much of a loss on it either.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
It's extremely unlikely that Kaveri will be using 512 stream processors (8 CUs), as that's what the HD 7770 uses. Taking the advantages of the GCN architecture and the latest driver update into account, Trinity's performance is somewhere between 4 and 5 GCN CUs. Kaveri will most likely have 6 CUs (384 stream processors, just like Trinity) for the top end and 4 CUs for the lower end models.

I was just using Kaveri's roadmap, where the top part shows 512 shaders. It might be less, but I am going with the latest info on that design since it's an AMD leak.

"Documents from AMD have revealed specifications for Kaveri, which at first sight reminds us of a recently launched circuit."
http://www.nordichardware.com/news/...s-performance-on-par-with-radeon-hd-7750.html

The problem with the Kaveri APU theory in 2013 is that Steamroller CPU cores may not even launch in volume in 2013, which means Kaveri might be delayed to 2014. How will they produce millions of consoles for a Holiday 2013 launch with Steamroller cores?

It might even be $199. And they can't take much of a loss on it either.

So you think MS and Sony will undercut Wii U by $100-150, while offering superior graphics and performance? How is that going to happen? Nintendo already confirmed that they are selling Wii U at a loss. I realize the Wii U controller is expensive but so is integrating Kinect into every Xbox.

Currently MS sells the Xbox 360 4GB Kinect bundle for $249 and 250GB Kinect bundle for $349. I would expect the next generation Xbox to cost at least $299, but more likely $349-449.

Not sure I believe the HD6670 rumors anymore. HD7750 is already just $90 in retail (that means retailers and AIBs get a cut). I bet MS and Sony would be able to buy a GPU of this level for $50 cost by Q4 2013.

PowerColor even has a smaller desktop HD7750 1GB variant. Now imagine this GPU in mobile form - it would be so easy to fit into a next generation console.

640 Shader HD7850M (Heathrow Pro) has a mere 32W of power consumption. If MS and Sony use HD6670 instead of at least HD7850M, the people in charge of these consoles should quit their jobs.
 
Last edited:

jpiniero

Lifer
Oct 1, 2010
16,493
6,987
136
So you think MS and Sony will undercut Wii U by $100-150, while offering superior graphics and performance? How is that going to happen? Nintendo already confirmed that they are selling Wii U at a loss. I realize the Wii U controller is expensive but so is integrating Kinect into every Xbox.

Most of the reports I've seen suggest that the controller costs $100 or more. I think you will see Nintendo drop the controller bundle by the time the 720/PS4 arrive and get the price down. Plus, Nintendo is using a two-chip design; an APU would presumably be cheaper.

If MS does bundle Kinect at the lowest level, yeah, it won't be $199. But they are going to put the processing on the CPU, so that will lower the cost of Kinect itself.

How will they produce millions of consoles for a Holiday 2013 launch with Steamroller cores?

Certainly there should be some skepticism that AMD can actually deliver.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
jpiniero,

If we look at the PS Vita, Sony used a 4-core SGX543MP4+ when they launched it in Feb 2012. Apple just launched the iPad 4 with the PowerVR SGX543MP4. Given that alone, it shows us Sony used a top-of-the-line mobile GPU, considering the size of the PS Vita, in their last console launch. I realize it's a portable console, but it shows that Sony went for the cream of the crop at that time.

Given that Sony's gaming division is one of the 3 pillars where Sony is committed to making $ and growing, it would make more sense strategically for Sony to actually spend more $ on the PS4, since that's a part of the company which is doing well relative to other areas. I can't see the PS4 having a gimped GPU. Sure, it probably won't be an HD7950 1792-shader GPU for cost and power consumption reasons, but I think assuming a bottom-of-the-barrel HD6670 level of GPU is also too conservative the other way. Perhaps somewhere in between is more realistic, as Sony will want to balance having a technology lead over MS and Nintendo, and having learned from the PS3, they surely know that a weak GPU and too fast a CPU was not a good, balanced approach.

Since BluRay prices have dropped significantly and an AMD CPU would cost a lot less than Cell 2.0 to Sony, it gives them a lot of budget room to focus on the GPU side to make sure they deliver this time. If all 3 next generation consoles use AMD's GPUs, it'll be extremely easy for us to compare the GPU processing power :). This would be a unique situation in console history, as we'll be able to do a straight-up GPU performance comparison without much ambiguity, since we know how the HD 4000-7000 series stack up relative to each other with a very high level of accuracy, based on hundreds of professional reviews.
 
Last edited:

NTMBK

Lifer
Nov 14, 2011
10,411
5,677
136
I still don't understand why you think they'd only use a Kaveri APU and that's it. That's only 512 Shaders.

Given that the concall referred to "semi-custom APUs", I also strongly doubt that they would use a straight Kaveri! To me, semi-custom would indicate using the same building blocks as their normal APUs (Jaguar or Steamroller cores, and GCN CUs) in a different configuration: hopefully more CUs and higher memory bandwidth.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Since BluRay prices have dropped significantly and an AMD CPU would cost a lot less than Cell 2.0 to Sony

Is AMD really going to be willing to give up all profits for Sony? x86 is still huge compared to POWER, and an APU compounds that quite a bit. It would also present a rather large problem for Sony if they were the only x86 machine: they could end up with a CPU that is too weak for developers to bother wasting their time on if MS does go POWER. On top of the computational limitations, you also have to deal with big-endian vs little-endian.
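As a small illustration of the endianness point (a generic example, not from the thread): the same 32-bit value is stored with its bytes in opposite order on x86 and POWER, so any code that serializes structures or pokes at raw memory needs byte swapping when it crosses that boundary.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t value = 0x11223344;
    const unsigned char *bytes = (const unsigned char *)&value;

    /* Print the byte layout as it sits in memory on this host. */
    printf("0x%08X in memory on this host: %02X %02X %02X %02X\n",
           value, bytes[0], bytes[1], bytes[2], bytes[3]);
    printf("little-endian (x86) stores 44 33 22 11; big-endian (POWER) 11 22 33 44\n");
    return 0;
}
```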

http://www.techspot.com/review/577-borderlands-2-performance/page6.html

That's a console port to the x86 architecture, spotting x86 seven years of additional development. If *both* MS and Sony go with x86 they can get away with it, as both companies will have very weak CPUs and so will be on a level playing field. If MS does go POWER, then I don't see any way Sony wouldn't do the same. This isn't the same as Cell versus the tri-core POWER design in the 360; the disparity in performance would be *huge*.

No matter how good your GPU is, if your CPU can only push out 4 frames per second it just isn't going to matter.

BTW- In relative terms, we are still months away from when Sony made massive design changes to their last console. There is *zero* reason to believe any of the specs have been finalized for either company.
 

anongineer

Member
Oct 16, 2012
25
0
0
What is the last possible moment for MS to finalize their SOC specs, and not pay an arm and a leg to rush it through verification, physical design, tapeout, manufacturing, bring-up, driver development, and getting kits to developers so that games are polished enough for launch?
 

psoomah

Senior member
May 13, 2010
416
0
0
What is the last possible moment for MS to finalize their SOC specs, and not pay an arm and a leg to rush it through verification, physical design, tapeout, manufacturing, bring-up, driver development, and getting kits to developers so that games are polished enough for launch?

According to Charlie D, Microsoft has had working silicon in hand from test wafers for a while, from GF, TSMC and Samsung. The problem has been very poor yields, which Microsoft is gambling on being worked out by at least one of the foundries in time for a 2013 launch. If all three get yields well up, one might expect a shortage-free and quite spectacular launch.

If Charlie D's info is at least semi-accurate Microsoft is extremely serious about a 2013 console launch. Since there have been no known leaks regarding Sony having working silicon in hand or even having completed tapeout, a PS4 2013 launch is looking decidedly sketchier.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
If Charlie D's info is at least semi-accurate Microsoft is extremely serious about a 2013 console launch. Since there have been no known leaks regarding Sony having working silicon in hand or even having completed tapeout, a PS4 2013 launch is looking decidedly sketchier.

Sony changed their entire console setup around ~9 months prior to launch last generation. Also, Sony owns their own fabs; they don't need to have parts moving through a bunch of different hands (increasing the possibility of leaks).
 

psoomah

Senior member
May 13, 2010
416
0
0
Sony changed their entire console setup around ~9 months prior to launch last generation. Also, Sony owns their own fabs; they don't need to have parts moving through a bunch of different hands (increasing the possibility of leaks).

Some reason in particular you're choosing to spew easily provable nonsense in lieu of a reasoned fact based rebuttal?
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Some reason in particular you're choosing to spew easily provable nonsense in lieu of a reasoned fact based rebuttal?

The PS3 was supposed to be a dual-Cell machine; the addition of a GPU came late in the game (double-checked: it was 9 months from the initial launch plans).

As far as Sony owning their own fabs, it isn't exactly a secret?

http://www.xbitlabs.com/news/other/...fer_Nagasaki_Semiconductor_Plant_to_Sony.html

Sony can make all of their chips entirely in house if they so choose.
 

psoomah

Senior member
May 13, 2010
416
0
0