
The real reasons Microsoft and Sony chose AMD for consoles [F]

It's a mistake to give it low-bandwidth DDR3 when consoles have such a long lifespan; it will cripple developers a few years down the road when they have to cater to the lowest common denominator. Here, both consoles have a great opportunity to set the bar as high as possible. MS failed.

We already see that current A-series APUs are bandwidth starved, and the APU in the Xbox is heaps more powerful. Relying on a small amount of eSRAM to offload just adds a layer of complexity, and it's not a better solution than going with all GDDR5. Not to mention, what happens when developers want to make more complex games that the 32MB is unable to fully cache? Well, performance would tank hard, so no developer would do that. Thus, MS is limiting future growth in gaming quality.
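
For rough context, here is a back-of-the-envelope sketch of the peak-bandwidth gap being argued; the 256-bit bus widths and data rates are the commonly reported figures for the two consoles, not official confirmations:

```python
# Back-of-the-envelope peak bandwidth: (bus width in bits / 8) * data rate in GT/s = GB/s
def peak_bandwidth_gbs(bus_width_bits, data_rate_gtps):
    return bus_width_bits / 8 * data_rate_gtps

# Commonly reported configurations (assumed here, not official spec sheets)
xb1_ddr3  = peak_bandwidth_gbs(256, 2.133)  # DDR3-2133 on a 256-bit bus  -> ~68 GB/s
ps4_gddr5 = peak_bandwidth_gbs(256, 5.5)    # 5.5 GT/s GDDR5 on a 256-bit bus -> ~176 GB/s

print(f"XB1 DDR3:  {xb1_ddr3:.0f} GB/s")
print(f"PS4 GDDR5: {ps4_gddr5:.0f} GB/s")
# The 32MB eSRAM is a separate, small high-bandwidth pool layered on top of the DDR3,
# which is exactly the extra complexity being criticised above.
```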
 
Console users are loyal but it is a mistake to think they are not connected to what has been going on. The pre-order numbers of the respective consoles are proof that console gamers do pay attention.

Considering the average gamer is ~30 years old and a net geek, it's a mistake if MS thinks gamers are not informed.

All this will filter down to the masses because, let's face it, everyone knows or is a "friend" of at least one geek.
 
It's a mistake to give it low-bandwidth DDR3 when consoles have such a long lifespan; it will cripple developers a few years down the road when they have to cater to the lowest common denominator. Here, both consoles have a great opportunity to set the bar as high as possible. MS failed.

We already see that current A-series APUs are bandwidth starved, and the APU in the Xbox is heaps more powerful. Relying on a small amount of eSRAM to offload just adds a layer of complexity, and it's not a better solution than going with all GDDR5. Not to mention, what happens when developers want to make more complex games that the 32MB is unable to fully cache? Well, performance would tank hard, so no developer would do that. Thus, MS is limiting future growth in gaming quality.

XB1 eSRAM is simply a half-assed solution which left the SoC with only 2/3 of the shader power and 1/2 of the ROPs (GG to AA performance?) of the PS4 while using an extra 2 billion transistors (5 billion XB1 vs. 3 billion PS4). As if the PR disaster wasn't enough already, reliable insiders at NeoGAF have described the XB1 SoC yields as "troubling".
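
For reference, the fractions quoted above follow from the commonly cited GPU configurations (12 CUs / 16 ROPs for the XB1 vs. 18 CUs / 32 ROPs for the PS4, with 64 shaders per CU); a quick sanity check under those assumed figures:

```python
# Commonly cited GPU configurations (assumed figures, not official spec sheets)
xb1_cus, xb1_rops = 12, 16   # 12 CUs x 64 shaders = 768 shaders
ps4_cus, ps4_rops = 18, 32   # 18 CUs x 64 shaders = 1152 shaders

print(f"Shader ratio (XB1/PS4): {xb1_cus / ps4_cus:.2f}")   # ~0.67 -> 2/3 of the shader power
print(f"ROP ratio   (XB1/PS4): {xb1_rops / ps4_rops:.2f}")  # 0.50  -> 1/2 of the ROPs
```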
 
I don't understand what the big news in this thread is.

Intel makes more powerful pure CPUs, but they have never even tried to compete with AMD's integrated graphics because that is not a market Intel is interested in.

AMD's product is more in line with the demands of running games on a low-power device, and that is why they got the console job.
 
Even if it were a market they were interested in, they wouldn't have a solution that could do the job.
 
XB1 eSRAM is simply a half-assed solution which left the SoC with only 2/3 of the shader power and 1/2 of the ROPs (GG to AA performance?) of the PS4 while using an extra 2 billion transistors (5 billion XB1 vs. 3 billion PS4). As if the PR disaster wasn't enough already, reliable insiders at NeoGAF have described the XB1 SoC yields as "troubling".

You brought up a good point. I knew it was missing a few CUs, but 1/2 the ROPs? That's just plain stupid from MS.

Only MS would go for a more expensive and inferior solution.
 
I don't understand what the big news in this thread is.

Intel makes more powerful pure CPUs, but they have never even tried to compete with AMD's integrated graphics because that is not a market Intel is interested in.

AMD's product is more in line with the demands of running games on a low-power device, and that is why they got the console job.

Rumor was MS wanted a minimum of 8GB of RAM during the planning stage in 2010 because they were going for an all-in-one, do-everything media box instead of a dedicated gaming console, and only DDR3 was dense enough to reach 8GB. Which begs the question of what horribly unoptimized software they were going to run that requires 8GB on a freaking console. 4K Doritos and Mountain Dew ads? NSA spying algorithms? WHO KNOWS
 
Rumor was MS wanted a minimum of 8GB of RAM during the planning stage in 2010 because they were going for an all-in-one, do-everything media box instead of a dedicated gaming console, and only DDR3 was dense enough to reach 8GB. Which begs the question of what horribly unoptimized software they were going to run that requires 8GB on a freaking console. 4K Doritos and Mountain Dew ads? NSA spying algorithms? WHO KNOWS

Uhm, 8GB is the only thing that makes sense for something that's going to last as long as this generation. If you look at the combined memory usage of newer PC games (system RAM + VRAM), 8GB is an absolute minimum for a console generation that starts now and is supposed to play the games of 2020.

I do agree that MS probably wants to run an unnecessary amount of background services, but let them. Just demand that they add more RAM on top of the 8GB. With today's memory prices and DDR3 being even cheaper than GDDR5, that's a commitment to their own software and longevity they would be stupid to ignore.
 
Cloud rendering is a total joke. People complain about 8-16ms of frame latency, and now they want to throw in the internet as the "middle man" in the rendering of games? Another useless MS stunt.
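
To put that complaint into numbers: at 60 fps a frame has to be delivered every ~16.7 ms, while an internet round trip is easily several times that. A rough sketch with assumed, purely illustrative RTT values:

```python
# Frame budget vs. network round trip (illustrative numbers only)
fps = 60
frame_budget_ms = 1000 / fps        # ~16.7 ms per frame at 60 fps
assumed_rtt_ms = [30, 60, 100]      # hypothetical internet round-trip times

for rtt in assumed_rtt_ms:
    print(f"RTT {rtt} ms adds ~{rtt / frame_budget_ms:.1f} frame budgets of latency")
```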
 
The thing is, forcing a weird peripheral like this down consumers' throats is the only way to make it actually useful. If developers know that 100% of XBone owners have a Kinect, then they can use Kinect features without the fear that they will limit themselves to a fraction of the XBone market.

I see your logic. However, for me that is one more reason not to get it. I don't wanna speak and make weird movements in front of my TV when gaming. Gaming for me is chilling on my couch with a gamepad in my hand, pressing buttons.

Wii Sports was fun for like an hour, and then it was a nice idea, but...
Other games like FPSes were unplayable due to the controller. Maybe OK for noob gamers, but if you play FPS on PC it's just plain frustrating because you can't aim for shit and everything is random (aim assist...). Just gross... even FPS with an analog stick is, well, mediocre at best.
 
GDDR5 has its base frequency multiplied x4, so unless they show a ~1375MHz base clock then your assertion is not accurate; even AMD uses a base clock a quarter of the quoted GDDR5 speed: http://www.amd.com/us/products/desktop/graphics/7000/7870/Pages/radeon-7870.aspx#3

5.5GHz GDDR5 has a CK clock of 1.375GHz and a WCK clock of 2.75GHz.

Example: [attached image: gddr5.png]
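
In other words, the x4 comes from GDDR5's two clock domains: a command clock (CK) and a write clock (WCK) running at twice CK, with data transferred on both edges of WCK. A quick sketch of the arithmetic for the 5.5GHz figure:

```python
# GDDR5 clocks: effective data rate = CK x 4 = WCK x 2 (data on both edges of WCK)
data_rate_gtps = 5.5               # "5.5GHz" GDDR5, i.e. 5.5 GT/s per pin
ck_ghz  = data_rate_gtps / 4       # command/base clock  -> 1.375 GHz
wck_ghz = data_rate_gtps / 2       # write clock (2x CK) -> 2.75 GHz

print(f"CK = {ck_ghz} GHz, WCK = {wck_ghz} GHz, data rate = {data_rate_gtps} GT/s per pin")
```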
 
5.5GHz GDDR5 has a CK clock of 1.375GHz and a WCK clock of 2.75GHz.

Learnt something new, which is great, but everybody either advertises effective clock [MOAR CLOCKS] or base clock... Either way, both approaches to CPU performance are interesting: tonnes of weak cores versus a few strong ones. Which is better for optimized gaming [not legacy gaming]?
 
"In the system".

Might refer to the GDDR5 or WiFi.

The CPU will not run at 2.75GHz unless they radically change the TDP, and that's if it can even run at that speed.

I believe Kabini/Temash are made on TSMC's low-power SoC process. The APU will most certainly not be built on that same process, so the clock speeds may very well be a good bit higher.
 
Learnt something new, which is great, but everybody either advertises effective clock [MOAR CLOCKS] or base clock... Either way, both approaches to CPU performance are interesting: tonnes of weak cores versus a few strong ones. Which is better for optimized gaming [not legacy gaming]?

Few strong 😉

Easier to code for, easier to extract performance from due to lower scaling penalties. Some code simply can't scale.
 
Few strong 😉

Easier to code for, easier to extract performance from due to lower scaling penalties. Some code simply can't scale.

If all else is held equal, certainly: a theoretical 16GHz Jaguar core would be much easier to work with than eight 2GHz ones. But it all depends on what trade-offs they make in return; it could be that 8 Jaguar cores took up less die space and used less of the limited TDP budget than (say) 4 Steamroller cores. *shrug* Since both MS and Sony went with the 8-core solution, it must have given a good trade-off somewhere. (Probably in power consumption; we all know that AMD's big cores are hardly competitive on that front right now...)
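
The "some code simply can't scale" point is essentially Amdahl's law. A quick sketch comparing a hypothetical 8-weak-core setup against 4 cores assumed to be 1.5x faster each (the per-core speed ratio is purely illustrative):

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the parallel fraction of the work
def speedup(p, n):
    return 1 / ((1 - p) + p / n)

# Hypothetical comparison: 8 weak cores vs. 4 cores each assumed 1.5x faster
for p in (0.5, 0.8, 0.95):
    weak   = speedup(p, 8)           # 8 Jaguar-class cores
    strong = 1.5 * speedup(p, 4)     # 4 bigger cores, assumed 1.5x per-core speed
    winner = "strong" if strong > weak else "weak"
    print(f"parallel fraction {p:.2f}: 8 weak = {weak:.2f}x, 4 strong = {strong:.2f}x -> {winner} cores win")
```

With mostly serial code the few fast cores come out ahead; only heavily parallel code makes the eight slower cores pay off, which is exactly the trade-off being discussed.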
 
Absolutely.

Cost, yield, binning, power, cooling etc.

Also, the CPU, so far at least, only runs at 1.6GHz.

One big mistake for both consoles seems to be the lack of an MCM design. The Xbox One especially is feeling that with its design decision.
 
One big mistake for both consoles seems to be the lack of an MCM design. The Xbox One especially is feeling that with its design decision.

Wouldn't MCM have complicated things even further? We're talking about an APU here, not a couple of Xeons bolted together.

http://vr-zone.com/articles/sony-ps...ing-2-75-ghz-max-core-clock-listed/45606.html

Granted, it is a dev kit, but it's also likely the Jaguar cores are limited by process (TSMC low power) and not architecturally.
 
Learnt something new, which is great, but everybody either advertises effective clock [MOAR CLOCKS] or base clock... Either way, both approaches to CPU performance are interesting: tonnes of weak cores versus a few strong ones. Which is better for optimized gaming [not legacy gaming]?

For many game developers, a few strong cores are better because they are easier to program for. However, there are plenty of efficient multithreaded game engines on the market now, so game developers can choose one of those engines and work with it.

The question is different for users. A multithreaded game with each thread locked to one core can provide better gameplay, because it eliminates the bottlenecks that can arise on a system with fewer, stronger cores in some scenes/situations.
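
As an aside, "each thread locked to one core" is just CPU affinity. A minimal, Linux-only sketch using Python's os.sched_setaffinity; the worker function and the one-thread-per-core mapping are hypothetical, for illustration only:

```python
import os
import threading

def pin_current_thread(core_id):
    # Linux-only: pid 0 means "the calling thread"; restrict it to a single core
    os.sched_setaffinity(0, {core_id})

def worker(core_id):
    pin_current_thread(core_id)
    # ... a hypothetical per-core game task (physics, audio, AI, ...) would run here ...

# One thread per available core, each pinned to its own core
threads = [threading.Thread(target=worker, args=(core,)) for core in range(os.cpu_count())]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

A real engine would do the same thing in C/C++ through the platform's scheduler API, but the idea is identical.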
 