The real reasons Microsoft and Sony chose AMD for consoles

Page 16 - AnandTech Forums

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
I swear, will you guys ever stop quoting this disproven figure for the GT3 GPU? Even counting the 84mm^2 eDRAM die (which even Intel admits is overkill), the total GPU logic should be more like 190mm^2.

And even if you doubled it you wouldn't have the gaming performance of a 7870.
 

insertcarehere

Senior member
Jan 17, 2013
712
701
136
And even if you doubled it you wouldn't have the gaming performance of a 7870.

Whether such a hypothetical GPU would match the gaming performance of a 7870 is another question (the PS4/XOne certainly don't, and doubling the eDRAM would make no sense), but I just don't like people using obviously wrong figures to further their own agenda.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
Companies use engineering sales teams all the time. Think "salesmen with some computer/engineering background"; they usually don't work directly on the product itself. They also don't generally talk to the lower-level developers, but rather to upper management.

source: We had a couple of these guys at HP.

Still, the thing that is impressive here is that both Microsoft and Sony settled on the same cpu. It isn't like AMD is the only CPU manufacturer in the business, which is what makes these decisions pretty unprecedented.

My point about the sales guys is that they play a minor role in a complex development project like, e.g., the PS4. It's not a standard OEM deal. You need chief architects and specialists up front because it's too nonstandard for a salesperson with an engineering background. Of course they have a role, but it's a minor one.
In that context, an organization selling custom SoC development like the PS4 should be organised and managed differently than, say, the old AMD.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
And even if you doubled it you wouldn't have the gaming performance of a 7870.

When did the baseline for comparison become 7870? I thought it was 7790, which is typically pretty substantially weaker than 7870.

All things considered I don't think Intel could hit 7790 levels in a 350mm^2 CPU that used let's say 125W or less but I don't think they'd necessarily be on a drastically lower level. Maybe 75+% as fast? I don't think XBox One will hit 7790 levels either, seeing how it has 12 CUs instead of 14 and runs at 800MHz instead of 1GHz - that's not even 70% the raw power. So I could see Intel providing something that gives XBox One levels of GPU performance but not PS4. Not especially competitive before even considering price and lack of flexibility, but not a total disaster. I just don't think their top technical capability is as far off as people are saying it is.
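As a rough sanity check on that "not even 70% the raw power" figure, the shader throughput comparison can be sketched like this (assuming GCN's 64 shaders per CU and 2 FLOPs per shader per clock; these are the usual public specs, not anything from this thread):

```python
# Rough GCN raw-throughput comparison.
# Assumptions: 64 shaders per CU, 2 FLOPs (mul+add) per shader per clock.
def gflops(cus, mhz, shaders_per_cu=64, flops_per_clock=2):
    return cus * shaders_per_cu * flops_per_clock * mhz / 1000.0

xbox_one = gflops(12, 800)    # 12 CUs at 800 MHz
hd_7790  = gflops(14, 1000)   # 14 CUs at 1 GHz

print(f"Xbox One: {xbox_one:.0f} GFLOPS")    # ~1229
print(f"HD 7790:  {hd_7790:.0f} GFLOPS")     # ~1792
print(f"ratio:    {xbox_one / hd_7790:.2f}") # ~0.69, i.e. under 70%
```

So the 12/14 CU count and 800/1000 MHz clocks alone put the Xbox One at roughly 69% of a 7790's raw throughput, matching the estimate above.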

I also think a 3.5+GHz 2C4T Haswell solution could hold its own vs 1.6GHz 8C Jaguar, especially in heavily SIMD optimized code.
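A quick peak-SIMD-throughput estimate supports that. This is a sketch assuming Haswell's two 256-bit FMA units (32 SP FLOPs/cycle/core) and Jaguar's 128-bit FPU (one 4-wide mul plus one 4-wide add, 8 SP FLOPs/cycle/core):

```python
# Peak single-precision throughput: cores * FLOPs/cycle * GHz.
# Assumptions: Haswell = 32 SP FLOPs/cycle/core (2x 256-bit FMA),
# Jaguar = 8 SP FLOPs/cycle/core (128-bit mul + 128-bit add).
def peak_gflops(cores, flops_per_cycle, ghz):
    return cores * flops_per_cycle * ghz

haswell_2c = peak_gflops(2, 32, 3.5)  # 2C Haswell at 3.5 GHz
jaguar_8c  = peak_gflops(8, 8, 1.6)   # 8C Jaguar at 1.6 GHz

print(haswell_2c, jaguar_8c)  # 224.0 vs ~102.4 GFLOPS
```

On paper the two Haswell cores have over twice the peak SIMD throughput, though only in FMA-friendly code and ignoring the extra threads the eight Jaguar cores offer.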
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
I swear, will you guys ever stop quoting this disproven figure for the gt3 gpu, even counting for the 84mm^2 edram die (which even Intel admits is overkill), the total gpu logic should be more like 190mm^2

How does Iris Pro play Battlefield at 1080p?
What are the minimum framerates?
Who reviewed Iris Pro besides Anand?
Why?
The picture Anand paints of Iris Pro and, e.g., the new Mac Pro: is it something that you can sell to professionals at, say, Sony or MS?
Intel graphics does not stand up in the real world and the B2B market; it relies on marketing, brand, and sites like Anand's that depend on delivering new information first.
 

SiliconWars

Platinum Member
Dec 29, 2012
2,346
0
0
I also think a 3.5+GHz 2C4T Haswell solution could hold its own vs 1.6GHz 8C Jaguar, especially in heavily SIMD optimized code.

You obviously wouldn't be able to have the OS reserve a core for itself, however; stuff like that might matter a bit.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
When did the baseline for comparison become 7870? I thought it was 7790, which is typically pretty substantially weaker than 7870.

All things considered I don't think Intel could hit 7790 levels in a 350mm^2 CPU that used let's say 125W or less but I don't think they'd necessarily be on a drastically lower level. Maybe 75+% as fast? I don't think XBox One will hit 7790 levels either, seeing how it has 12 CUs instead of 14 and runs at 800MHz instead of 1GHz - that's not even 70% the raw power. So I could see Intel providing something that gives XBox One levels of GPU performance but not PS4. Not especially competitive before even considering price and lack of flexibility, but not a total disaster. I just don't think their top technical capability is as far off as people are saying it is.

I also think a 3.5+GHz 2C4T Haswell solution could hold its own vs 1.6GHz 8C Jaguar, especially in heavily SIMD optimized code.

Doubling AnandTech's Iris 5200 3DMark11 Performance score doesn't quite reach the 7790, and that's a synthetic Intel does very well in compared to actual games. Intel's graphics aren't in the same category as Nvidia's and AMD's.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
You guys keep repeating that. Could someone give some actual comparisons? It's hard to get anything good from reviews because no one is using the same resolutions or settings that Anand did, but I'm sure something can be approximated. I'm curious about working out theoretical capabilities here, more than making statements about who has good graphics and who doesn't.

Or if you want to look at 3DMark 11 vs games you could look at it this way: in AT's review the Iris 5200 Pro part does about the same as the 650M in 3DMark 11, and compares with the 650M as follows in the game benchmarks (ratios are GT 650M vs 5200 at the 55W point, one per quality setting tested):

Metro Last Light: 1.05x, 1.21x
BioShock Infinite: 1.32x, 1.38x
Sleeping Dogs: 1.07x, 1.36x
Tomb Raider: 1.15x, 1.32x
Battlefield 3: 1.11x, 1.54x, 1.47x
Crysis 3: 1.30x, 1.34x, 1.33x
Crysis Warhead: 0.89x, 0.99x, 1.09x
GRID 2: 1.18x, 1.53x, 0.92x

Things are really all over the place depending on quality, Intel looks held back by AA and AF performance. That could be due to having worse effective bandwidth (even with Crystalwell its peak bandwidth is behind the GT650M tested, and this is assuming it can simultaneously saturate the L4 + main memory which is not normal for something that's supposed to operate as a cache). That's something that's not necessarily an intrinsic problem of the uarch and would have different constraints in a console design.

The average for the lower quality results in games is 1.13x, while for higher quality (or whatever shows 650M best) it's 1.347x.
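Those two averages can be reproduced directly from the ratio list above (taking the first, lowest-quality ratio per game for the first figure and the 650M-friendliest ratio per game for the second):

```python
# GT 650M vs Iris 5200 ratios from the list above, grouped per game.
ratios = {
    "Metro Last Light":  [1.05, 1.21],
    "BioShock Infinite": [1.32, 1.38],
    "Sleeping Dogs":     [1.07, 1.36],
    "Tomb Raider":       [1.15, 1.32],
    "Battlefield 3":     [1.11, 1.54, 1.47],
    "Crysis 3":          [1.30, 1.34, 1.33],
    "Crysis Warhead":    [0.89, 0.99, 1.09],
    "GRID 2":            [1.18, 1.53, 0.92],
}

low  = sum(r[0] for r in ratios.values()) / len(ratios)    # lowest-quality setting
high = sum(max(r) for r in ratios.values()) / len(ratios)  # best case for the 650M

print(f"lower-quality average: {low:.3f}x")   # ~1.134x
print(f"650M-best average:     {high:.3f}x")  # ~1.346x
```

Both come out where stated: about 1.13x at the lower quality settings and about 1.35x where the 650M looks best.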

If the TDP is > 100W you'd probably have some extra GPU clock headroom even after doubling the GPU resources on GT3e, particularly if we're talking strictly a dual-core CPU here. So they could probably reach higher than 2x the performance of GT3e with a 2x GPU.

So yeah, I don't think the 75%-of-7790 for >2x GT3e estimate I gave is really that crazy. Whether or not the bigger performance deltas would push it much below that and are an intractable problem, I really don't know.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
You guys keep repeating that. Could someone give some actual comparisons? It's hard to get anything good from reviews because no one is using the same resolutions or settings that Anand did, but I'm sure something can be approximated.

If the TDP is > 100W you'd probably have some extra GPU clock headroom even after doubling the GPU resources on GT3e, particularly if we're talking strictly a dual-core CPU here. So they could probably reach higher than 2x the performance of GT3e with a 2x GPU.

What review with real gaming benchmarks are you referring to? I can't find any.

Why does this theoretical Intel solution have to be this weak 80 EU, 2-core solution, with 3DMark as the assessing instrument? And why the assumption that it scales linearly? That only holds if the graphics can run at, e.g., 1500MHz.

I don't find it interesting. What would be interesting is how NV approached this bid, if at all.

Did Charlie give some insight here?
 

Imouto

Golden Member
Jul 6, 2011
1,241
2
81
The GT3e lags behind a GT 640 in Anand's article.
Assuming a GT 640 performance level for comparison's sake and picking info from TPU charts:

An HD 7850 is about 3 times as powerful as a GT 640 at 1080p.
An HD 7790 is more than 2 times as powerful as a GT 640 at 1080p.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
What review with real gaming benchmarks are you referring to? I can't find any.

Why does this theoretical Intel solution have to be this weak 80 EU, 2-core solution, with 3DMark as the assessing instrument? And why the assumption that it scales linearly? That only holds if the graphics can run at, e.g., 1500MHz.

I don't find it interesting. What would be interesting is how NV approached this bid, if at all.

Did Charlie give some insight here?

Anandtech's 8 gaming tests don't count why? Because they were done by Anand?

The GT3e lags behind a GT 640 in Anand's article.
Assuming a GT 640 performance level for comparison's sake and picking info from TPU charts:

An HD 7850 is about 3 times as powerful as a GT 640 at 1080p.
An HD 7790 is more than 2 times as powerful as a GT 640 at 1080p.

If that's true then if Intel could double GT3e performance it should be able to hit my estimate of 75% HD 7790 performance (which would put it similar to XBox One). We don't know if it's feasible or not within an "acceptable" die size and power budget, but at first glance it looks like it could be.
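Chaining those TPU ratios together makes the arithmetic explicit. This is a sketch where the 7790-to-GT 640 multiplier k is the uncertain input, and it assumes GT3e ~ GT 640 with perfect 2x scaling, which is optimistic, so treat the outputs as upper bounds:

```python
# If GT3e ~ GT 640, and an HD 7790 is k times a GT 640,
# then a doubled GT3e lands at 2/k of the 7790
# (assuming perfect scaling, which is optimistic).
def doubled_gt3e_vs_7790(k):
    return 2.0 / k

for k in (2.0, 2.33, 2.67, 3.0):
    share = doubled_gt3e_vs_7790(k)
    print(f"7790 = {k:.2f}x GT 640 -> 2x GT3e ~ {share:.0%} of a 7790")
```

Anywhere in the "more than 2x but under about 2.67x" range for k keeps the 75%+ estimate alive; at 3x it drops to about two-thirds.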

Are we actually talking about this as if Intel would have offered GT3e as a solution? You may as well say AMD tried bidding Kabini...
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
AMD knew they were the only one with the right tech for this generation. It's only in this thread that I have read anything about Intel as a viable alternative. That is the perfect platform for a solid profit on every deal.
 
Feb 19, 2009
10,457
10
76
Exophase, it's like this; it's very simple:

Intel simply lacks expertise, and it's not something they can acquire instantly; it is nurtured and grown over many years, in a highly competitive field that has seen the likes of Matrox, S3, 3dfx, Rendition and many others die because they could not compete. Only the strong survived; now we have ATI/AMD and NV. These two are supreme in their field. Intel is the baby playing in a grown-ups' playground, if you will.

Intel needs a lot more investment and time to fully grow in graphics. I am certain they will; they have massive $$ to keep trying (seducing AMD/NV engineers!). When their time comes, it WILL be glorious. In-house production on cutting-edge nodes coupled with real expertise = excellent GPUs. You just need to understand their time is not here yet.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Anandtech's 8 gaming tests don't count why? Because they were done by Anand?



If that's true then if Intel could double GT3e performance it should be able to hit my estimate of 75% HD 7790 performance (which would put it similar to XBox One). We don't know if it's feasible or not within an "acceptable" die size and power budget, but at first glance it looks like it could be.

Are we actually talking about this as if Intel would have offered GT3e as a solution? You may as well say AMD tried bidding Kabini...

300+ mm2 for 7790 level performance is "could be feasible" when competing against AMD's proposal?
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
I try to put together something resembling a technical analysis - lots of really rough numbers, I know, but still - and I get a lot of "Intel isn't experienced enough" or "everyone knows they're behind in GPU."

Sorry but that's not what I'm looking for. Please discuss the actual flaws in my reasoning.

Actually, you know what, forget it, I'm done. This is exhausting. There's only so many times I can say that I don't think Intel actually had a competitive deal vs AMD's proposal. I said stuff about hypothetical performance for some given die size because I was curious about seeing how playing with the numbers would turn out, and I said that my estimates could have been feasible. Nothing about whether or not that's GOOD ENOUGH. I don't see how this is so hard to understand.
 

Imouto

Golden Member
Jul 6, 2011
1,241
2
81
If that's true then if Intel could double GT3e performance it should be able to hit my estimate of 75% HD 7790 performance (which would put it similar to XBox One). We don't know if it's feasible or not within an "acceptable" die size and power budget, but at first glance it looks like it could be.

Bearing in mind that Intel is charging $468 and $657 for the SKUs packing a GT3e, I'd like to see the insane price tag for a GT3e doubling its transistors.

Consoles were impossible for Intel at this time in this universe.
 
Feb 19, 2009
10,457
10
76
You can't have a technical analysis when you randomly throw numbers out of nowhere. It's not a simple case of adding more EUs. Is the MC any good when it's scaled up, keeping all the EUs fed with enough bandwidth? Nobody knows. Once scaled up, is the scheduler going to keep up or bottleneck? Nobody knows. You have a plethora of issues where it takes years and years of expertise to actually design a highly competitive and efficient GPU architecture.

That is why it keeps going back to experience and know how. Intel lacks it. You can guess all you like, that is the reality.

PS: If all you want to know is how much it would take for Intel's iGPU to match a 7790, it needs about 3x the raw performance it has currently, due to inefficiencies of scaling etc. That's why it's not feasible for them to offer a custom SoC to MS and Sony; they know they lack the GPU grunt to offer a compelling design.
 

sniffin

Member
Jun 29, 2013
141
22
81
300+ mm2 for 7790 level performance is "could be feasible" when competing against AMD's proposal?

As mentioned ^, even that's assuming doubling the EUs would actually come close to doubling the performance of the chip, which it obviously wouldn't. A 2x GT3e would be big and it would still suck.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
I try to put together something resembling a technical analysis - lots of really rough numbers, I know, but still - and I get a lot of "Intel isn't experienced enough" or "everyone knows they're behind in GPU."

Sorry but that's not what I'm looking for. Please discuss the actual flaws in my reasoning.

Actually, you know what, forget it, I'm done. This is exhausting. There's only so many times I can say that I don't think Intel actually had a competitive deal vs AMD's proposal. I said stuff about hypothetical performance for some given die size because I was curious about seeing how playing with the numbers would turn out, and I said that my estimates could have been feasible. Nothing about whether or not that's GOOD ENOUGH. I don't see how this is so hard to understand.

Ah, well, I think they could perhaps hit 7790 levels of gaming with mid-300s mm^2 if they put a lot of money and engineers on it. Intel is better off refining their EUs for another few years before attempting a project like that, though.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Have you guys even seen the benchmarks for GT3? Intel is obviously focused more on the mobile market rather than pure gaming performance, but they're obviously making some strong strides. As I said -- I don't necessarily think intel was the best choice by default, because AMD has always been more gaming focused and optimized while intel has not.

I have given some Iris Pro benchmarks in #214. Anand benchmarked games at low resolutions such as 900p and 768p. Lower resolutions favoured the cache-based solution, giving it an advantage over Trinity APUs and their slow DDR3.

Increase the resolution and the cache advantage drops quickly. Anand tested Metro at 768p and Iris Pro was competitive with the 650M. He then increased the resolution to 900p and performance dropped by 15%, down to 640 levels. They did not test at 1080p.

Iris Pro at 1080p would perform like Trinity/Richland.
 

beginner99

Diamond Member
Jun 2, 2009
5,315
1,760
136
It performs well for a massive price markup? Are you kidding?

It's a massive die with a further die for eDRAM.

Exophase, it's like this; it's very simple:

Intel simply lacks expertise, and it's not something they can acquire instantly; it is nurtured and grown over many years, in a highly competitive field that has seen the likes of Matrox, S3, 3dfx, Rendition and many others die because they could not compete. Only the strong survived; now we have ATI/AMD and NV. These two are supreme in their field. Intel is the baby playing in a grown-ups' playground, if you will.

Intel needs a lot more investment and time to fully grow in graphics. I am certain they will; they have massive $$ to keep trying (seducing AMD/NV engineers!). When their time comes, it WILL be glorious. In-house production on cutting-edge nodes coupled with real expertise = excellent GPUs. You just need to understand their time is not here yet.

I don't fully disagree with that. GT3e is IMHO pretty good if you ignore the pricing of the whole chip. You have to keep in mind it runs at a 55W TDP which includes a quad-core CPU. So it is IMHO pretty power-efficient (low clocks) at the cost of a bigger die area.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
Anandtech's 8 gaming tests don't count why? Because they were done by Anand?



If that's true then if Intel could double GT3e performance it should be able to hit my estimate of 75% HD 7790 performance (which would put it similar to XBox One). We don't know if it's feasible or not within an "acceptable" die size and power budget, but at first glance it looks like it could be.

Are we actually talking about this as if Intel would have offered GT3e as a solution? You'd may as well say AMD tried bidding Kabini...

Yes, the gaming test done by Anand was a joke. And we both know why we didn't get any 1080p BF3 numbers. Instead we got one long PR cache talk. It could have used a slightly more critical approach.
 

formulav8

Diamond Member
Sep 18, 2000
7,004
522
126
It's all about the price. AMD was simply the company willing to go the lowest. And the result is anything but impressive: a mainstream GPU with an ultra-low-end CPU.

Link where that was mentioned by Sony or AMD or someone in the know?


If they are using AMD's APUs I could see that as a probable reason AMD got the contracts. A simpler design, and two pieces of hardware in one tiny chip.
 

jpiniero

Lifer
Oct 1, 2010
16,637
7,121
136
Yes, the gaming test done by Anand was a joke. And we both know why we didn't get any 1080p BF3 numbers. Instead we got one long PR cache talk. It could have used a slightly more critical approach.

You know, I still haven't seen a good explanation of the Iris Pro numbers. The 5200 is something like 20-30% faster on compute but has half the fill rate. It may be a moot point since only Apple is using it, but it would be useful as a guide as to whether they could bridge the gap in Broadwell, or if they even care to do so.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
62
91
Yes, the gaming test done by Anand was a joke. And we both know why we didn't get any 1080p BF3 numbers. Instead we got one long PR cache talk. It could have used a slightly more critical approach.

I don't know why you feel the need to bad-mouth Anand like that.

If you really feel he has committed an error in some way then why not just email him or ask him on Twitter? He is very responsive and accommodating when it comes to interacting with respectful individuals who contact him through social media.

Making the kinds of character-assassination comments that you have in your post above is just going to look bad for you. Go to the horse's mouth and get your answers, sans the bashing, and you just might learn a thing or two above and beyond your preset bias and prejudice. (Unless you hold yourself so high that you feel you already know all there is to know...)