The real reasons Microsoft and Sony chose AMD for consoles [F]

SiliconWars

Platinum Member
Dec 29, 2012
2,346
0
0
Well AMD reckons Jaguar is about 1W per core at 1GHz going to 2W per core at 1.4 GHz -

[Screenshot: AMD slide showing Jaguar per-core power figures]
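
For scale, here's a rough back-of-envelope from those two data points. The straight-line interpolation (and the stretch out to 1.6GHz) is an assumption on my part; real per-core power climbs faster than linearly once voltage has to rise:

Code:
#include <stdio.h>

/* Back-of-envelope 8-core Jaguar power from AMD's quoted per-core
   figures: 1 W @ 1.0 GHz, 2 W @ 1.4 GHz. Linear interpolation between
   the two points is assumed; the 1.6 GHz row is an extrapolation. */
int main(void) {
    const double f0 = 1.0, p0 = 1.0;            /* GHz, W per core */
    const double f1 = 1.4, p1 = 2.0;
    const double slope = (p1 - p0) / (f1 - f0); /* 2.5 W per GHz   */
    const int cores = 8;

    for (double f = 1.0; f <= 1.6 + 1e-9; f += 0.2) {
        double per_core = p0 + slope * (f - f0);
        printf("%.1f GHz: ~%.1f W/core, ~%.0f W for %d cores\n",
               f, per_core, per_core * cores, cores);
    }
    return 0;
}

That puts the CPU cluster at about 16W at the 1.4GHz point, a modest slice of a console-class power budget.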
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
On 7-CPU, the A4-5000 wins by approx. 50-60% with 4 threads. That is at a lower clockspeed for the A4 as well. Extrapolate to 8 threads, easy conclusion: Jaguar is immensely faster.

Yeah, because the Exynos 5250 is only two cores and A4-5000 is four. No kidding. What, did you think that it had four cores but awful scaling past two threads vs everything else there that doesn't? Maybe try looking at the two thread tests between the two.

On Geekbench: Bobcat wins in every single-threaded test at 1.6GHz. Jaguar adds between 10-20% IPC boost from Bobcat. Extrapolate to 8 threads, ARM again loses by a large margin.

Put your glasses on and look at it again. The Cortex-A15 wins in single threaded image compress, image decompress, Lua, primality, sharpen image, blur image and write sequential.

Now, I said Bobcat had slightly better perf/MHz (I really didn't do an average here). Jaguar adds 10-20% to that. Which is why I said that a 2.5GHz Cortex-A15 should be competitive against a 2GHz Jaguar.
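
To put rough numbers on that equivalence (the 0.95 A15-vs-Bobcat perf/MHz ratio below is an illustrative assumption, since "slightly better" was never a hard number; the 10-20% is the Jaguar-over-Bobcat IPC gain quoted above):

Code:
#include <stdio.h>

/* What clock does a Cortex-A15 need to match a 2.0 GHz Jaguar?
   Normalize everything to Bobcat perf/MHz = 1.0. */
int main(void) {
    const double a15_per_mhz   = 0.95; /* assumed: A15 slightly behind Bobcat */
    const double jaguar_ipc_lo = 1.10; /* Jaguar over Bobcat, low end  */
    const double jaguar_ipc_hi = 1.20; /* Jaguar over Bobcat, high end */
    const double jaguar_ghz    = 2.0;

    printf("A15 needs %.2f to %.2f GHz to match a %.1f GHz Jaguar\n",
           jaguar_ghz * jaguar_ipc_lo / a15_per_mhz,
           jaguar_ghz * jaguar_ipc_hi / a15_per_mhz,
           jaguar_ghz);
    return 0;
}

That lands at roughly 2.3 to 2.5GHz, which is where the 2.5GHz figure comes from.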

Cut the crap about Jaguar having 8 cores. Cortex-A15 would have 8 cores in this scenario. Stop arguing against that.

Desktop, server, game console, fax machine, call it whatever you want, ARM has never made a processor at the performance level that an 8C Jaguar is expected to be at.

They make IP that can be configured for a bunch of different purposes, and yes, it can be configured to a similar performance level vs 8C Jaguars, especially the 1.6GHz one in XBox One. All this stuff about what does or doesn't exist in the market is not needed in a technical comparison.

So, what you're saying is that ARM could make a processor with worse power consumption, to compete against a processor that already has better power consumption. Lol, that's excellent logic. They're definitely going to make their situation BETTER with that.

I said that one could make a Cortex-A15 console SoC that'd have competitive performance.

How people confuse this with an argument that this should have been chosen is beyond me. And the reason I said it is because people like you say crap about how you'd be stuck with 4 cores max or how the single threaded performance would be stuck way behind the consoles.

You keep talking power consumption but you don't have the TSMC 28nm HPM or HP (whatever Kabini is made on, almost certainly not LP) Cortex-A15 power-over-leakage optimized numbers to compare with, so there's really no point.

Well AMD reckons Jaguar is about 1W per core at 1GHz going to 2W per core at 1.4 GHz

Sounds about right to me.

In a perf/W sense I see no big challenge in Cortex-A15 keeping up with that when on a similar process.
 

sushiwarrior

Senior member
Mar 17, 2010
738
0
71
Yeah, because the Exynos 5250 is only two cores and A4-5000 is four. No kidding. What, did you think that it had four cores but awful scaling past two threads vs everything else there that doesn't? Maybe try looking at the two thread tests between the two.

My bad, why would they label 4 threads on a 2c processor :confused:

Put your glasses on and look at it again. The Cortex-A15 wins in single threaded image compress, image decompress, Lua, primality, sharpen image, blur image and write sequential.

Now, I said Bobcat had slightly better perf/MHz (I really didn't do an average here). Jaguar adds 10-20% to that. Which is why I said that a 2.5GHz Cortex-A15 should be competitive against a 2GHz Jaguar.

You're right, noticing a trend of poor performance on the last benchmarks of each run - possible throttling issues?

Well sure, by using Geekbench as the be-all benchmark between the two :confused: other benchmarks say that Jaguar is significantly faster than A15, so which is to be believed?

Cut the crap about Jaguar having 8 cores. Cortex-A15 would have 8 cores in this scenario. Stop arguing against that.

Yes, that's what I meant - multiplying the cores multiplies the performance difference. With both @ 8 cores, the lead for Jaguar just gets bigger.


They make IP that can be configured for a bunch of different purposes, and yes, it can be configured to a similar performance level vs 8C Jaguars, especially the 1.6GHz one in XBox One. All this stuff about what does or doesn't exist in the market is not needed in a technical comparison.

Making IP with similar performance level =/= making IP that is remotely better than the competition at any aspect. Sure, Intel COULD make Slice beat a Titan, but it'd just take tons of die space, power, and money...

I said that one could make a Cortex-A15 console SoC that'd have competitive performance.

How people confuse this with an argument that this should have been chosen is beyond me. And the reason I said it is because people like you say crap about how you'd be stuck with 4 cores max or how the single threaded performance would be stuck way behind the consoles.

Now it's just competitive performance? Well, competitive performance doesn't matter when you are no longer competitive on price, power consumption, size, or graphics.

You keep talking power consumption but you don't have the TSMC 28nm HPM or HP (whatever Kabini is made on, almost certainly not LP) Cortex-A15 power-over-leakage optimized numbers to compare with, so there's really no point.

You keep talking power consumption yet you don't have ARM on your high-speed process to compare with, so there's really no point.

See how easy it is to place all of the burden of proof on the other guy?

In a perf/W sense I see no big challenge in Cortex-A15 keeping up with that when on a similar process.

Blah blah, AMD would be better than Intel if they had their process, 3dfx would beat a Titan, yadda yadda yadda. Last I checked, we were discussing CPU designs here, not processes :rolleyes:
 

SiliconWars

Platinum Member
Dec 29, 2012
2,346
0
0
Yes, that's what I meant - multiplying the cores multiplies the performance difference. With both @ 8 cores, the lead for Jaguar just gets bigger.

No, it'll stay the same, everything else being equal.

Exophase is mostly right, there isn't a lot between the cores. Jaguar's perf/W is being held back by x86.
 

sushiwarrior

Senior member
Mar 17, 2010
738
0
71
No, it'll stay the same, everything else being equal.

More cores just multiply the absolute difference between the cores... but yes, from a percentage standpoint it stays the same.

Exophase is mostly right, there isn't a lot between the cores. Jaguar's perf/W is being held back by x86.

x86/ARM makes little difference, ISA isn't as important in power consumption/performance as actual core design/manufacturing process is. ARM saves space in the front end, but not necessarily power consumption.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
My bad, why would they label 4 threads on a 2c processor :confused:

Because they ran the benchmark with 4 threads. It has nothing to do with the hardware they ran it on. You can see they sometimes get a performance advantage running more threads than there are hardware threads; most likely something somewhere is blocking in the OS.
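
To be clear, the thread count is just a parameter of the run; nothing stops a 2-core chip from executing a 4-thread job. A minimal POSIX sketch (the busy loop is a stand-in workload; compile with -pthread):

Code:
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Launch a fixed number of software threads regardless of how many
   hardware threads the machine has. If the work ever blocks (I/O,
   locks, page faults), extra threads can fill the stalls, which is
   how a 4-thread run can beat a 2-thread run on a 2-core CPU. */
static void *work(void *arg) {
    (void)arg;
    volatile long x = 0;
    for (long i = 0; i < 100000000L; i++) x += i;  /* stand-in workload */
    return NULL;
}

int main(void) {
    enum { THREADS = 4 };  /* benchmark parameter, not the core count */
    pthread_t t[THREADS];
    long cores = sysconf(_SC_NPROCESSORS_ONLN);

    printf("%ld hardware threads, running %d software threads\n",
           cores, THREADS);
    for (int i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, work, NULL);
    for (int i = 0; i < THREADS; i++)
        pthread_join(t[i], NULL);
    return 0;
}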

You're right, noticing a trend of poor performance on the last benchmarks of each run - possible throttling issues?

Don't know what you're referring to. Exynos 5250 in the configuration tested has no throttling (it's running on a Linux distro).

Well sure, by using Geekbench as the be-all benchmark between the two :confused: other benchmarks say that Jaguar is significantly faster than A15, so which is to be believed?

Geekbench and 7-Cpu. Maybe we'll see something with Phoronix? Do you have a Kabini with Linux? Do you want to run the suite?

Making IP with similar performance level =/= making IP that is remotely better than the competition at any aspect. Sure, Intel COULD make Slice beat a Titan, but it'd just take tons of die space, power, and money...

Do I have to remind you again of exactly what I was refuting in this thread?

Here, let's just revisit what I said:

8-core (2x4 cluster) Cortex-A15 @ 2.5GHz is perfectly implementable on 28nm SoCs today, and it would run within a power budget that's suitable for consoles. That would have been performance competitive with Jaguar (with equal quality code of course). Performance was not the problem.

8-core Cortex-A15 @ 2.5GHz is feasible. It'd run within a power budget suitable for consoles. It's performance competitive with Jaguar.

Now I happen to think that Cortex-A15 on the same 28HP(M?) process that Kabini is probably on could compete well in perf/W at peak perf (and for consoles the rest of the perf/W curve doesn't really matter). And you won't see it on Samsung's 28nm Exynos Octa (much less 32nm Exynos 5250, which is all we have power numbers for) because it optimizes for the lower part of that perf/W curve. But quite frankly, the numbers seem to show it's in a similar perf/W ballpark with Jaguar even there.

But let's say that I give you the full benefit of the doubt: let's say that to get similar performance Cortex-A15 has to use much more power. How much more power? 5W? 10W? Would it completely break a reasonable power budget for the console? Probably not. Now tell me what was wrong with what I said.

Hopefully you won't start talking about perf/mm^2 now...

You keep talking power consumption yet you don't have ARM on your high-speed process to compare with, so there's really no point.

See how easy it is to place all of the burden of proof on the other guy?

Actually I think all the burden of proof IS on you because you said that ARM doesn't have the ability to provide a high enough performance CPU for a console, specifically you said:

Seriously, ARM is amazing for tablets and phones, but it really ain't anywhere CLOSE to desktop performance. The best Nvidia can do is power a handheld Shield. They don't have the ability to make a medium/high end SoC. Only Intel and AMD have that ability, and only AMD can deliver the graphics performance needed for games.
Now that I look at it, it's especially odd that you say only AMD can deliver the graphics performance. Are you really saying nVidia wouldn't have the GPU for this either? Or did you mean of AMD and Intel?

Blah blah, AMD would be better than Intel if they had their process, 3dfx would beat a Titan, yadda yadda yadda. Last I checked, we were discussing CPU designs here, not processes :rolleyes:

I'm asking you to do a comparison with the same openly available processes, the same that would be used for the consoles regardless of the CPU chosen. This is nothing like Intel's process advantage. But yes, process does have a bearing on performance and peak perf/W.
 

sushiwarrior

Senior member
Mar 17, 2010
738
0
71
Don't know what you're referring to. Exynos 5250 in the configuration tested has no throttling (it's running on a Linux distro).

Geekbench and 7-Cpu. Maybe we'll see something with Phoronix? Do you have a Kabini with Linux? Do you want to run the suite?

The Bobcat was dropping off, that's what I was referring to.

Honestly, synthetic benchmarks are usually very dependent on certain quirks of architectures. I'd rather see game benchmarks (this is a game console, after all), and 3DMark gives a hint at what that story is like.

Do I have to remind you again of exactly what I was refuting in this thread?

Here, let's just revisit what I said:

8-core Cortex-A15 @ 2.5GHz is feasible. It'd run within a power budget suitable for consoles. It's performance competitive with Jaguar.

Now I happen to think that Cortex-A15 on the same 28HP(M?) process that Kabini is probably on could compete well in perf/W at peak perf (and for consoles the rest of the perf/W curve doesn't really matter). And you won't see it on Samsung's 28nm Exynos Octa (much less 32nm Exynos 5250, which is all we have power numbers for) because it optimizes for the lower part of that perf/W curve. But quite frankly, the numbers seem to show it's in a similar perf/W ballpark with Jaguar even there.

8-core A15 at 2.5GHz is hypothetical; assuming they can make one offers no guarantees about efficiency or performance.

Idle power matters a lot, I leave mine sitting on the dashboard for long periods of time. I want it to sip power all the time, not just during games. But that's just IMO. Kabini appears to deliver solid idle and load performance. Is that the process, or the design of the core? I think it's the design. Otherwise, according to how you think the processes work, one would have to be sacrificed.

But let's say that I give you the full benefit of the doubt: let's say that to get similar performance Cortex-A15 has to use much more power. How much more power? 5W? 10W? Would it completely break a reasonable power budget for the console? Probably not. Now tell me what was wrong with what I said.

Hopefully you won't start talking about perf/mm^2 now...

Doesn't matter how much more power. It's worse, it takes more $$$ to design and make, it isn't proven, and in general a Jaguar APU is simply a perfect fit in every way.

Even if ARM can make it happen, they lose to AMD in essentially every aspect, and I don't think they can undercut on price.

Actually I think all the burden of proof IS on you because you said that ARM doesn't have the ability to provide a high enough performance CPU for a console, specifically you said:

I was just stating the obvious. If ARM is able to deliver an excellent mid-range CPU, why are there no ARM laptops? Why are there no ARM desktops? If they could deliver those, I'm sure they would. They haven't even managed a 64-bit design yet.

Now that I look at it, it's especially odd that you say only AMD can deliver the graphics performance. Are you really saying nVidia wouldn't have the GPU for this either? Or did you mean of AMD and Intel?

I meant between AMD and Intel, I'm just repeating what OP said.

I'm asking you to do a comparison with the same openly available processes, the same that would be used for the consoles regardless of the CPU chosen. This is nothing like Intel's process advantage. But yes, process does have a bearing on performance and peak perf/W.

You can't really just "compare" between processes. There's so many little details that make it much more difficult. For one thing, you can't really make comparable designs at 2 different gate lengths - the actual layout of the design has to physically change. It's not just "scale down 2/3, make the exact same thing". There's more to it than that. Hence comparing across processes is a bit of a crapshoot.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
The Bobcat was dropping off, that's what I was referring to.

Honestly, synthetic benchmarks are usually very dependent on certain quirks of architectures. I'd rather see game benchmarks (this is a game console, after all), and 3DMark gives a hint at what that story is like.

7-CPU isn't a synthetic benchmark; it's 7-Zip. Geekbench is synthetic, but as far as mobile benches go it's better than most, which is still saying very little.

8-core A15 at 2.5GHz is hypothetical; assuming they can make one offers no guarantees about efficiency or performance.

They can make one. There's nothing especially hypothetical about the performance, it'll scale about as you expect it would.

Idle power matters a lot, I leave mine sitting on the dashboard for long periods of time. I want it to sip power all the time, not just during games. But that's just IMO. Kabini appears to deliver solid idle and load performance. Is that the process, or the design of the core? I think it's the design. Otherwise, according to how you think the processes work, one would have to be sacrificed.

If you're not using the console at all it should be greatly power gated. Sony went so far as to turn the whole APU off and let another support chip handle some background stuff.

Idle power optimization to the level that's live-or-die on phones or even laptops is not important on consoles. That's the difference we're talking about. It looks to me like Kabini's idle power is much worse than a phone SoC's, and I'm sure process targets account for a lot of that. It's also perfectly fine for a console, but you won't see it in phones any time soon.

Even if ARM can make it happen, they lose to AMD in essentially every aspect, and I don't think they can undercut on price.

How many times do I have to say I'm not entering this argument you so badly want to have with me? I'm just going to ignore comments like this from now on.

I was just stating the obvious. If ARM is able to deliver an excellent mid-range CPU, why are there no ARM laptops? Why are there no ARM desktops? If they could deliver those, I'm sure they would. They haven't even managed a 64-bit design yet.

Because people want x86 compatibility on their Windows laptops and desktops, for one thing. How's that for stating the obvious?

You can't really just "compare" between processes. There's so many little details that make it much more difficult. For one thing, you can't really make comparable designs at 2 different gate lengths - the actual layout of the design has to physically change. It's not just "scale down 2/3, make the exact same thing". There's more to it than that. Hence comparing across processes is a bit of a crapshoot.

It's nice that you finally see what I've been saying all along. If only that stopped you from extrapolating Exynos 5250 power numbers to what they'd be like on a process more suitable for consoles. I was saying that the more suitable comparison would be using the SAME process, not between different processes. Which of course we can't do right now, since we don't have a Cortex-A15 implementation that uses the same process and has similar design targets as Kabini (or the consoles, which could be different still). But I stand by what I said: Cortex-A15 could deliver comparable performance and could do it within a power budget that's reasonable for a console design.
 

sushiwarrior

Senior member
Mar 17, 2010
738
0
71
Geekbench is definitely the best for mobile from what I've seen.

Again, "they can make one" =/= "they can put one in several hundred million consoles, which are shipping very soon". Scaling is not guaranteed, it's very logical that they could run into serious cache issues or something along those lines.

Yes, I see what you mean by the difference between mobile idle power and power gating idle power.

x86 compatibility and lack of 64-bit, yes. Also the inherent disadvantage of ARM in high-performance applications (a very limited ISA compared to x86, not nearly as many advanced instructions). But that's more or less beside the point. I've never seen a product with an ARM CPU above 2GHz, have you? Sure, they can make test samples that run at 2.5GHz, but that doesn't mean that kind of binning is feasible for mass production.

There are so many factors in power consumption that yes, it is pointless to extrapolate to consoles. So I'll instead tell you this: Jaguar has excellent power consumption. Why bother with anything else? And why assume that scaling a core past its designed purpose and target will somehow achieve excellent power consumption numbers...
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
There's no possibility of cache problems with an 8-core A15. I don't think you understand the design. They've already made 8-core dual-cluster Cortex-A15 + Cortex-A7 SoCs. You can do exactly the same with two A15 clusters. They have separate caches, kept coherent through the interconnect. It's very similar to what AMD did with Jaguar on the consoles, in fact.

ARM doesn't have an ISA performance disadvantage vs x86 except for the lack of 256-bit vectors... but this only applies against CPUs that actually have 256-bit vector units. Using AVX256 on Jaguar is a bad idea (AVX128 is a good idea, for the three-operand arithmetic). If you have some other idea of advanced instructions for high performance on x86 that ARM lacks, by all means throw them at me; I've done a lot of NEON programming and have a pretty good idea of what it can do that even AVX can't. Same thing for the scalar instruction set. There are pros and cons on both sides.
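
For anyone wondering what the three-operand point buys you in practice, here's a tiny sketch (the function is made up for illustration). Build it once as plain SSE and once with -mavx and compare the assembly:

Code:
#include <immintrin.h>

/* 'a' must survive the add because it's reused by the multiply.
   SSE:    addps overwrites one of its sources, so the compiler has
           to emit an extra movaps to keep a copy of 'a' alive.
   AVX128: vaddps dst, src1, src2 is non-destructive, so the copy
           disappears; same 128-bit vectors, no AVX256 involved. */
__m128 sum_times_a(__m128 a, __m128 b) {
    __m128 sum = _mm_add_ps(a, b);
    return _mm_mul_ps(sum, a);
}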

The whole "I've never seen an ARM CPU above 2GHz!" argument is long past old and seriously, it's a bad argument. I'm not going to refute it anymore. And I don't have to provide an argument for why they should bother with anything but Jaguar because I never said they should.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
It's not an argument; we probably won't see a 2.5GHz A15 until 20nm. AFAIK no one delivered a 2GHz A9 on 40nm even though ARM cited that as the high end for 40nm clocks. Instead we see A9 SoCs nearing that speed on 28nm.

Why pick the option with fewer features, no 64-bit and less extension support, one that you will also have to push to its limits on current process nodes?
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
There are some bizarre arguments going on from some of you. Sony and Microsoft can't afford to live in a "what if" and "maybe this" and "if an ARM processor had this core count and at this speed on this process with this non existent GPU" world.

AMD has the best overall package available by far, no one else is even in the same realm.
 

lamedude

Golden Member
Jan 14, 2011
1,224
56
91
Why doesn't MIPS get any love in this heated debate? It's like the original 64-bit ISA, and Android apps run fine on it despite what that analyst says.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
62
91
There are some bizarre arguments going on from some of you. Sony and Microsoft can't afford to live in a "what if" and "maybe this" and "if an ARM processor had this core count and at this speed on this process with this non existent GPU" world.

AMD has the best overall package available by far, no one else is even in the same realm.

Best available and best within a commercially relevant timeframe as far as either the Xbox One or the Playstation 4 are concerned.

It is pretty clear that AMD came out on top here, as did MS and Sony along with their customers.

Nvidia would not have felt it necessary to develop Shield if they felt the market was beneath them or irrelevant, but they got squeezed out and they are desperate...desperate enough to fund the development of an experimental new handheld console the likes of Shield.

Nvidia's reaction to AMD's success is just more proof and vindication that AMD is doing something right here, and that Nvidia failed to open a can of whoopass on the appropriate competitor. (so focused on Intel that they forgot to mind their own backyard...whoops indeed :D)

However, the way things are going at Team Nintendo, the WiiU successor may well end up using Bay Trail or Denver for all the sense that Nintendo's roadmap makes [/insert crazy screwy eyes]
 

NTMBK

Lifer
Nov 14, 2011
10,433
5,771
136
However, the way things are going at Team Nintendo, the WiiU successor may well end up using Bay Trail or Denver for all the sense that Nintendo's roadmap makes [/insert crazy screwy eyes]

Nah, they're just going to keep asking IBM to shrink the PowerPC 750 until they can fit a dozen of them on a single die :p
 

itsmydamnation

Diamond Member
Feb 6, 2011
3,058
3,870
136
Nah, they're just going to keep asking IBM to shrink the PowerPC 750 until they can fit a dozen of them on a single die :p

If Nintendo had brains (I hope they do, but don't expect it) they would give up on the Wii U and do a custom AMD Kaveri/next iteration @ 20nm with more CUs/TMUs/ROPs and the best memory type available in that time frame (Wide I/O?). Target the same wattage range (100-150W). That should give them a decent perf advantage at around the same cost, but more importantly they could just piggyback off all the other HSA-based work devs have been doing on PS4/X1 for the two years that the Wii U will be flopping around like a stunned mullet waiting for 20nm to be cost-effective.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
62
91
If Nintendo had brains (I hope they do, but don't expect it) they would give up on the Wii U and do a custom AMD Kaveri/next iteration @ 20nm with more CUs/TMUs/ROPs and the best memory type available in that time frame (Wide I/O?). Target the same wattage range (100-150W). That should give them a decent perf advantage at around the same cost, but more importantly they could just piggyback off all the other HSA-based work devs have been doing on PS4/X1 for the two years that the Wii U will be flopping around like a stunned mullet waiting for 20nm to be cost-effective.

Seriously, given the existence of the "semi-custom" silicon that is the APU which both MS and Sony are using in their consoles, just how much cost could it possibly represent for Nintendo to phone up an AMD sales rep and say "hey, you know that chip you already designed and are already producing for Sony and Microsoft, yeah how about you produce 10-20% more and sell them to us too?"

At this point, with AMD becoming the de facto standard for "me too" features on a console, what excuse does Nintendo have for not at least having the bare minimum of me-too performance capability?
 

itsmydamnation

Diamond Member
Feb 6, 2011
3,058
3,870
136
Seriously, given the existence of the "semi-custom" silicon that is the APU which both MS and Sony are using in their consoles, just how much cost could it possibly represent for Nintendo to phone up an AMD sales rep and say "hey, you know that chip you already designed and are already producing for Sony and Microsoft, yeah how about you produce 10-20% more and sell them to us too?"

At this point, with AMD becoming the de facto standard for "me too" features on a console, what excuse does Nintendo have for not at least having the bare minimum of me-too performance capability?


That's what I'm getting at, but they have missed the boat now. The best point to hop back on the boat, to me, is 20nm, and given that time frame you have to have something to justify people buying your console. The funny thing is, from what I have seen (die shots), the Wii U chip looks like a pretty good bit of engineering; they just got the entire console's target so very wrong.
 

NTMBK

Lifer
Nov 14, 2011
10,433
5,771
136
Ditching the Wii U seems pretty unlikely to me.

They make a lot of games themselves, which retain a high selling price for years afterwards (try buying Donkey Kong Country Returns used, and compare that to prices for the Halo game from the same year). They tend to turn a profit on the hardware (unsurprising given how "underpowered" it is). They may not reach the same heights as they did during the days of the Wii, and they may well sell less than the PS4, but I still expect it to be a profitable venture for them.

Ditching the Wii U and rapidly replacing it with a new home console would hurt them much more, frankly. Look at what happened to Sega in the 90s, with the debacle of the 32X, Saturn and Dreamcast. They brought out too much hardware too rapidly (3 consoles in 5 years) and damaged their brand, making gamers lose trust in them and dooming the fundamentally solid Dreamcast.

Nintendo should stick to their guns and wait this generation out- the Wii U and 3DS will both turn a decent profit for Nintendo, and Wii U sales will probably pick up as more classic Nintendo franchises launch on it.
 

insertcarehere

Senior member
Jan 17, 2013
712
701
136
There are some bizarre arguments going on from some of you. Sony and Microsoft can't afford to live in a "what if" and "maybe this" and "if an ARM processor had this core count and at this speed on this process with this non existent GPU" world.

AMD has the best overall package available by far, no one else is even in the same realm.

If these consoles had been released holiday 2014, this would have been a very different story.

I've never seen a product with an ARM CPU above 2GHz

Snapdragon 800 is certainly above 2GHz in a phone(-ish) form factor; there is no reason the ARM A15, which is very similar, can't reach those speeds in a hypothetical console form factor.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Best available and best within a commercially relevant timeframe as far as either the Xbox One or the Playstation 4 are concerned.

It is pretty clear that AMD came out on top here, as did MS and Sony along with their customers.

Nvidia would not have felt it necessary to develop Shield if they felt the market was beneath them or irrelevant, but they got squeezed out and they are desperate...desperate enough to fund the development of an experimental new handheld console the likes of Shield.

Nvidia's reaction to AMD's success is just more proof and vindication that AMD is doing something right here, and that Nvidia failed to open a can of whoopass on the appropriate competitor. (so focused on Intel that they forgot to mind their own backyard...whoops indeed :D)

This was exactly my point. Agree completely. Not only is it funny to see Nvidia desperately discounting the Shield to gain ground against the PS4/Xbox One, it is also funny to hear rumours that Nvidia is desperately trying to win the contract for the remaining console: the future Steam Box.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
So, nVidia is desperate?
Really guys, you're writing nonsense. Desperate describes a company which has no products on the market that people want, which lost more than 30% of its revenue year-over-year, and which has to pay its supplier for wafers it didn't order.

I would not call a company desperate when it just had a record year and record gross margins in its FQ1.
 

Aikouka

Lifer
Nov 27, 2001
30,383
912
126
Nvidia would not have felt it necessary to develop Shield if they felt the market was beneath them or irrelevant, but they got squeezed out and they are desperate...desperate enough to fund the development of an experimental new handheld console the likes of Shield.

If I had to guess, I would say that NVIDIA was willing to back the Shield because it wasn't that big of a risk. It also helps push Tegra 4, which doesn't have nearly the initial penetration that Tegra 3 did. I would also be interested to see whether they take any of the Shield research and shift it into other possibilities. For example, why not create a set-top device (i.e. like an AppleTV) that can also stream from the computer? If it comes in at a low price point ($100-150), people might be more willing to pick it up, and it arguably requires very little development from them, given it's a Shield without the integrated bits (screen and controller).
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
After Sony's PS3 evaluation, shouting "we need less complexity", the deal was effectively in AMD's hands.
And by all means, think of it like top management at Sony probably did: yes, we don't need any worries. The x86 solution on top of HSA and GCN reduces risk big time, letting the focus and cost go to games and market development. You spare tons of highly valuable top-management time.
I find it a no-brainer. And then a weak AMD, what is not to like :)