Digital Foundry: all AAA developers questioned recommend AMD CPUs for gaming PCs


Sleepingforest

Platinum Member
Nov 18, 2012
2,375
0
76
I feel like you are being deliberately obtuse. I have said that this is purely speculation based on the evidence at hand, which is that slide. Admittedly, we see that there are octocore Jaguar chips, and thus the clockspeed might be wrong. But evidence points towards the Jaguar architecture not scaling well past 2GHz.

Additionally, the number of cores does not increase TDP linearly. For example, the A10-5700 (non-K) and the A6-5400K have the same TDP (65W) despite the A10 having twice the cores.

I have no reference because it is speculation. As I have said, you are free to disbelieve.
 

SPBHM

Diamond Member
Sep 12, 2012
5,066
418
126
Aren't Cinebench and Sandra two benchmarks optimized for Intel chips?

The link I provided has many other tests. And keep in mind, when I'm using some software I don't care what compiler is used or what you think it's optimized for; the resulting performance is what matters most.

Anyway, my post was just showing that a single SB/IB core is not just a little bit faster; the difference can be huge. So having half the cores is not as bad as one might think, even without HT and under heavy MT loads.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Now you need to answer two questions instead of one

He doesn't "need" to answer anything. As a matter of fact, considering your attitude towards just about everyone else on this forum I'm surprised anyone bothers to take you seriously anymore.

When are you going to answer Gus about what kind of system you run? I think he has asked you four times.
 

Dresdenboy

Golden Member
Jul 28, 2003
1,730
554
136
citavia.blog.de
Aren't Cinebench and Sandra two benchmarks optimized for Intel chips?
I have no problem with that, since it shows that code optimization for a specific platform still pays off.

But there is another problem with those tests: except for Dhrystone ALU they are all hitting the FPU hard, which is a shared one in BD. According to the notes it seems they used just one module. So the BD Sandra results should be compared to SB with 1 core + HT.
http://www.xbitlabs.com/articles/cpu/display/amd-fx-8150_8.html#sect1
It will be interesting to see how the removed decoder bottleneck and the better L1 I$ of SR will fare in this test.

As we know, Jaguar cores each have their own dedicated 128-bit FADD+FMUL FPU. Interestingly, with a nice 2-cycle latency for SSE MUL ops (1-cycle throughput). This means FP code in PS4 games will see twice the single-precision FP throughput per clock of a BD/PD module FPU, as long as FMA is not used:
8 x 2 x 128 bit vs. 4 x 2 x 128 bit.
The short 2- or 3-cycle latencies might even help in more complex, less pipelineable, or register-starved FP code.
Thanks to 2x the clock frequency, PD will only reach a similar throughput without FMA, at higher power consumption.
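
To put rough numbers on that (a back-of-the-envelope sketch; the 1.6/2.0 GHz Jaguar clocks and the 4.0 GHz FX-8350 clock are assumptions, and real code won't reach these peaks):

    # Peak single-precision FLOPs per clock with plain SSE (no FMA, no AVX).
    # Each Jaguar core: 128-bit FADD + 128-bit FMUL = 4 + 4 = 8 SP FLOPs/clock.
    # Each BD/PD module: two shared 128-bit pipes  = 4 + 4 = 8 SP FLOPs/clock.
    jaguar_per_clock = 8 * 8   # 8 cores             -> 64 SP FLOPs/clock
    pd_per_clock     = 4 * 8   # 4 modules (FX-8350) -> 32 SP FLOPs/clock

    print(jaguar_per_clock * 1.6)  # ~102 peak GFLOPS at 1.6 GHz
    print(jaguar_per_clock * 2.0)  # ~128 peak GFLOPS at 2.0 GHz
    print(pd_per_clock * 4.0)      # ~128 peak GFLOPS for an FX-8350 at 4.0 GHz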
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Additionally, the number of cores does not increase TDP linearly. For example, the A10-5700 (non-K) and the A6-5400K have the same TDP (65W) despite the A10 having twice the cores.

Who told you that the number of cores increases TDP linearly? Nobody in this thread said so.

And the TDPs of the A10 and A6 include the GPU part. Moreover, there are other differences such as the A6 having a higher clock.
 

Sleepingforest

Platinum Member
Nov 18, 2012
2,375
0
76
Who told you that the number of cores increases TDP linearly? Nobody in this thread said so.

And the TDPs of the A10 and A6 include the GPU part. Moreover, there are other differences such as the A6 having a higher clock.

Seriously. How can you not get the point? I was giving an example where adding cores DOES NOT increase TDP. You claim this:
The TDP of the four-core Richland APUs @ 3.5 GHz is 35 W, whereas the TDP of a four-core Jaguar @ 2.0 GHz is 25 W. An eight-core Jaguar will have a higher TDP...
The word "linear" was unnecessary on my part; I am merely trying to say that more cores != more TDP. If you tried to actually understand what I am saying rather than hunting for flaws, even simple semantic ones, you would have known what I meant.

I feel like you are just disagreeing with me for the sake of disagreement. The original point of this discussion was this comment:
PS4 is going to use eight Jaguar cores, right? Jaguar isn't meant to be a workhorse, it is meant to sip power and be efficient. So for developers to make next gen games, I think they'll have no choice but to create better multithread game engines... they won't be able to use one or two cores and call it a day. So I can see how i7 and FX8xxx may age better than other CPU's, but I think that's still a while off.
To which you replied:
The design of the PS4 was developer driven. Sony asked game developers whether they would prefer a 4-core or an 8-core chip. The 8-core design of the PS4 was the developers' choice.
Enigmoid replied:
Yes, when you ask the devs, "Hey, do you want more CPU power?" they are really going to say, "Nope."

A 4-core Jaguar at 1.6 GHz would be a really bad bottleneck.
To get back on topic: Jaguar is a low-power chip. Therefore, developers ask for more cores rather than more GHz, as most low-power chips scale badly with clock speed. Look at ARM Cortex cores, or even Intel's NetBurst, to see how much power usage increases with GHz. This, in the long term, should benefit PC users whose CPUs have many threads.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
The link I provided has many other tests. And keep in mind, when I'm using some software I don't care what compiler is used or what you think it's optimized for; the resulting performance is what matters most.

Anyway, my post was just showing that a single SB/IB core is not just a little bit faster; the difference can be huge. So having half the cores is not as bad as one might think, even without HT and under heavy MT loads.

Well, you mentioned Sandra explicitly in the text of your post, and Cinebench is the first of the two benchmarks in the other link you gave. I commented on both of those.

The point is that neither Sandra nor Cinebench represents real-world performance. You can find many sites explaining how Sandra fails to represent real-world performance.

You not caring about compilers and similar stuff doesn't mean others reading the thread don't care.

And finally, I don't think the difference in single-core performance is that huge.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
I have no problem with that, since it shows that code optimization for a specific platform still pays off.

But there is another problem with those tests: except for Dhrystone ALU they are all hitting the FPU hard, which is a shared one in BD. According to the notes it seems they used just one module. So the BD Sandra results should be compared to SB with 1 core + HT.
http://www.xbitlabs.com/articles/cpu/display/amd-fx-8150_8.html#sect1
It will be interesting to see how the removed decoder bottleneck and the better L1 I$ of SR will fare in this test.

As we know, Jaguar cores each have their own dedicated 128-bit FADD+FMUL FPU. Interestingly, with a nice 2-cycle latency for SSE MUL ops (1-cycle throughput). This means FP code in PS4 games will see twice the single-precision FP throughput per clock of a BD/PD module FPU, as long as FMA is not used:
8 x 2 x 128 bit vs. 4 x 2 x 128 bit.
The short 2- or 3-cycle latencies might even help in more complex, less pipelineable, or register-starved FP code.
Thanks to 2x the clock frequency, PD will only reach a similar throughput without FMA, at higher power consumption.

Code optimization for a specific platform is one thing; disabling performance based on the CPU brand name is another.

I don't care about Sandra, but your thoughts were very interesting. Thanks!
 

Pilum

Member
Aug 27, 2012
182
3
81
As we know, Jaguar cores each have their own dedicated 128-bit FADD+FMUL FPU. Interestingly, with a nice 2-cycle latency for SSE MUL ops (1-cycle throughput). This means FP code in PS4 games will see twice the single-precision FP throughput per clock of a BD/PD module FPU, as long as FMA is not used:
8 x 2 x 128 bit vs. 4 x 2 x 128 bit.
The short 2- or 3-cycle latencies might even help in more complex, less pipelineable, or register-starved FP code.
Thanks to 2x the clock frequency, PD will only reach a similar throughput without FMA, at higher power consumption.
Well, not quite. BD's FPU has two additional pipelines for logic operations and blends, and the fourth pipeline can also do stores (FP stores occupy the MUL pipeline on Jaguar). BD's FPU has more scheduler and PRF entries than two Jaguar cores, and it can also execute two loads per cycle. The instruction window has the same overall size (128 = 2x64). But BD also has move elimination and branch fusion (only worth a few percent, but still). The high latencies may hurt performance in some cases, that's true. This should be partially compensated by BD's other architectural advantages.

Of course BD will not execute code perfectly, and even for FP-heavy code there will be many pipeline bubbles. There will also be frontend decode/dispatch problems. But clock-for-clock, one module should reach at least 1.3x the performance of a Jaguar core. Coupled with the higher clock speed, an 8350 will easily outperform a PS4 CPU under pretty much any circumstances, assuming the Jaguars are clocked at 2 GHz.
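
Putting rough numbers on that estimate (illustrative only; the 1.3x figure is my own guess, and the 4.0 GHz / 2.0 GHz clocks are assumptions):

    # Relative throughput, normalized to one Jaguar core at 1 GHz = 1.0
    ps4_jaguar = 8 * 1.0 * 2.0   # 8 cores, 2.0 GHz               -> 16.0
    fx_8350    = 4 * 1.3 * 4.0   # 4 modules, ~1.3x each, 4.0 GHz -> 20.8
    print(fx_8350 / ps4_jaguar)  # ~1.3x before any scaling losses on either side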

Of course you're right about efficiency; BD will suck up much more power to achieve the same results. But apart from that, the PS4 APU is quite disappointing.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
He doesn't "need" to answer anything. As a matter of fact, considering your attitude towards just about everyone else on this forum I'm surprised anyone bothers to take you seriously anymore.

Really? Do you mean the 325,000+ members?

When are you going to answer Gus about what kind of system you run? I think he has asked you four times.

I think both of you should think about that.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
That's correct for nearly all cases (not talking about Atom here of course) as long as there is no second thread or process or OS task running on that Intel core.

Windows?

4 cores vs. 4 cores. I don't care what 'background processes' you put on the Intel chip; the i5-3570K will destroy the FX-4300.

Single core vs. single core, Intel wins hands down.

The same slide says that the maximum number of cores is four. Now you need to answer two questions instead of one:

Why does the PS4 include an eight-core Jaguar chip if the slide says that the maximum is four?

Who told you that it was an 8-core Jaguar at 1.6 GHz vs. a 4-core Jaguar at 1.6 GHz? Reference, please.

The TDP of the four-core Richland APUs @ 3.5 GHz is 35 W, whereas the TDP of a four-core Jaguar @ 2.0 GHz is 25 W. An eight-core Jaguar will have a higher TDP...

If they could scale Jaguar above 2 GHz, then considering its perf/watt I think they would just cancel Steamroller. The slides end at 2.0 GHz (and from a marketing perspective it's stupid not to highlight your assets). Core count is not constrained by architectural limitations the way clock speed is. Doubling the cores is essentially an easy engineering problem compared with doubling the frequency at a given power envelope.
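
A rough sketch of why, using the standard CMOS dynamic-power rule of thumb P ~ C*V^2*f (the voltages below are made-up numbers, only there to show the shape of the trade-off):

    # Doubling cores at fixed voltage and clock roughly doubles dynamic power.
    # Doubling the clock usually also needs more voltage, so power grows much faster.
    def dyn_power(cores, volts, ghz, c=1.0):
        return cores * c * volts**2 * ghz

    base       = dyn_power(4, 1.00, 2.0)  # 4 cores @ 2.0 GHz, 1.00 V
    more_cores = dyn_power(8, 1.00, 2.0)  # 8 cores @ 2.0 GHz         -> 2.0x base
    more_clock = dyn_power(4, 1.25, 4.0)  # 4 cores @ 4.0 GHz, 1.25 V -> ~3.1x base
    print(more_cores / base, more_clock / base)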

There is a lot of conjecture that the PS4 will essentially be two chips on one die, like a Core 2 Quad.


Yeah, have you tried running one of those APUs under load? The A10-4600M throttles to something like 2 GHz under Prime95. The top clock is rarely reached unless it's a single-core load (and 3.5 GHz is the single-core speed, not the quad-core speed).
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Seriously. How can you not get the point? I was giving an example where adding cores DOES NOT increase TDP. You claim this:

The word "linear" was unnecessary on my part; I am merely trying to say that more cores != more TDP. If you tried to actually understand what I am saying rather than hunting for flaws, even simple semantic ones, you would have known what I meant.

You compared two different APUs with different graphics, different base frequencies, different turbo, one locked, the other a K series...

What about the FX-4320 vs the FX-8350? Same frequency, same turbo, no GPU, same cache per core... one is quad core (95W) and the other eight-core (125W).

Yes, as said before, I am convinced that an eight-core Jaguar @ 2 GHz will have a higher TDP than a quad-core Jaguar @ 2 GHz (25 W). If Jaguar scales like Piledriver, the TDP would be about 33 W.
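
Spelling out that rough estimate (a crude extrapolation that simply reuses the FX-4320 -> FX-8350 TDP ratio; Jaguar may well scale differently):

    # FX-4320 (4 cores, 95 W) -> FX-8350 (8 cores, 125 W): doubling the cores
    # raised the TDP by a factor of 125/95 ~ 1.32 on Piledriver.
    scale = 125 / 95
    print(round(25 * scale))  # quad-core Jaguar at 25 W -> roughly 33 W for eight cores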

AMD is already selling four-core Richland APUs @ 3.5 GHz with a TDP of 35 W. Your speculation that the PS4 has an eight-core @ 1.6 GHz because a quad-core @ 3.2 GHz "would use up too much power for a console" looks unconvincing to me.

Jaguar is a low power chip. Therefore, developers ask for more cores rather than more GHz

That is not true. The consensus among game developers was that

any more than eight, and special techniques would be needed to use them, to get efficiency.

That is why the PS4 has just eight cores. It is a compromise between raw power and ease of coding.
 

Sleepingforest

Platinum Member
Nov 18, 2012
2,375
0
76
I will offer more evidence to support my idea that "more cores does not mean more power." If we look at the predecessor of Temash, Zacate, you will see that the E-240 and the E-450 have the same TDP: 18 W. They have identical amounts of cache per core and have GPUs with identical characteristics, save for the E-450's ability to turbo to 600 MHz and an 8 MHz boost to the base clock (500 MHz versus 508 MHz).

In fact, the E-450 looks to basically be two E-240 CPUs put onto one chip with a slight boost to the graphics core. Yet the TDP is the same. I believe that Bobcat is a better indicator of Jaguar's characteristics than Piledriver because their purpose and architecture are closer.

As for this statement:
That is not true. The consensus among game developers was that:
any more than eight, and special techniques would be needed to use them, to get efficiency.
I don't see where you get this from. From all the sources I have looked at (admittedly just an hour of research), it seems that the difficulty is in managing the latency after multithreading, and that all multithreading is roughly equally difficult. This article, in particular, argues for the parallelization of individual components, which are then rendered in a single-threaded engine due to the latency issue. I looked here for background, but it brought up more questions than answers.

Could you clarify why 8 threads is less difficult than, say 9?
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Doubling the cores is essentially an easy engineering problem compared with doubling the frequency at a given power envelope.

Therefore, apart from not providing a single reference for your previous "1.6 vs 1.6" claim despite being asked to, your main point is that the same company that is selling quads @ 3.0-3.5 GHz at 35 W could not design a similar future chip and instead decided to go for a custom eight-core @ 1.6-2.0 GHz at about the same TDP?

My point is as follows. The PS4 had what Mark calls a "developer-centric approach to design". It was designed around developer input (except for the finer technical details, of course).

Game developers know the benefits of multi-threading, and we have seen continuous scaling from single-threaded games to dual, quad... up to the recently launched Crysis 3.

We also know that Sony offered game developers the choice of the number of cores, and they chose eight during the developer-questioning phase {*}, as Mark recalled at GDC this year. They chose eight cores for the same reason that Crytek chose eight threads for its recent PC game.

Finally, AMD provided Sony a custom chip with eight cores.

Therefore it is not as some of you believe: that AMD went with an eight-core design because of some imagined power-envelope engineering problem, and that developers then adapted to the design because "they'll have no choice", as someone said here.

{*} When discussing the number of cores, Mark recalled:

The consensus was that any more than eight, and special techniques would be needed to use them, to get efficiency.

It definitely was very helpful to have gone out and have done the outreach before sitting down to design the hardware.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
You compared two different APUs with different graphics, different base frequencies, different turbo, one locked, the other a K series...

What about the FX-4320 vs the FX-8350? Same frequency, same turbo, no GPU, same cache per core... one is quad core (95W) and the other eight-core (125W).

Yes, as said before, I am convinced that an eight-core Jaguar @ 2 GHz will have a higher TDP than a quad-core Jaguar @ 2 GHz (25 W). If Jaguar scales like Piledriver, the TDP would be about 33 W.

AMD is already selling four-core Richland APUs @ 3.5 GHz with a TDP of 35 W. Your speculation that the PS4 has an eight-core @ 1.6 GHz because a quad-core @ 3.2 GHz "would use up too much power for a console" looks unconvincing to me.

That is not true. The consensus among game developers was that

That is why the PS4 has just eight cores. It is a compromise between raw power and ease of coding.

Bolded is undoubtedly true.

TDP!=power consumption

[chart: power consumption comparison]


60 watts separates the two, while the TDP would indicate 30.

Trinity does not run anywhere near its max turbo speeds in multithreaded benchmarks.

From Notebookcheck (an easy place to find the reviews). Note: I don't care what you say about Cinebench; it's a good way to load up the cores (and we are not looking at performance relative to anything else here).

GX60 review
According to the tools CPU-Z and HWiNFO, the processor's clock rate is noticeably boosted during load. 2.7 - 3.2 GHz in the Unigine Heaven benchmark. The programs also recorded between 2.7 and 3.2 GHz in Cinebench R10's single-core rendering. The multi-core rendering result was slightly more disappointing. The clock rate sometimes dropped below the default rate to about 2.0. This is strange since AMD's System Monitor displayed 2.3 GHz both times.

Acer V3 review

During our stress test (Prime95 and Furmark running simultaneously) the speed of the CPU fluctuated wildly (with the power adapter connected). The speeds ranged from 1800 MHz to 2700 MHz, and the four cores did not run at the same speed as each other. In fact, each core individually fluctuated in the above-mentioned range. The dual-graphics ran at 335 MHz. Now and then, the speed briefly rose to 500 MHz or 685 MHz. On battery mode, all cores ran at a constant 2.3 GHz. The dual-graphics ran permanently at 277 MHz.

We confirmed two trends during the Cinebench test. In the single thread test, the CPU ran at a constant 2.7 GHz (all four cores). In the multi-thread test the CPU cores operated at various speeds and continuously fluctuated between 2.3 and 2.7 GHz. Despite all this, the performance is at the expected level. To introduce the Trinity platform, AMD provided us with a notebook with an A10-4600M APU specifically made for performance testing. This notebook delivered similar results as our test model.

Hp g6

Despite its four cores and relatively high clock frequency, the A10-4600M cannot compete with the processing power of Intel's fastest CPUs. At best, the APU is on the level of the older Core i3-2330M, but is often considerably slower. Interestingly, the Turbo frequency of 3.2 GHz, promised by AMD, is only kept for a split second even in single thread applications such as SuperPi. Most of the time we observed a stable 2.7 GHz, although with all cores under load we registered only the base frequency of 2.3 GHz. Apparently AMD has miscalculated the TDP of the chip - with just Prime95 running without any additional graphics applications, we notice a slight throttling to 2.2 GHz.

Toshiba

The Turbo Core 3.0 functionality seems to work most of the time as intended, but some of the observations were a bit puzzling. When the CPU only was stressed, only one of the four pseudo-cores actually remained near the base clock rate of 2.3 GHz; the others hovered closer to 2 GHz. Rarely did the clock rates even brush up against 2.7 GHz. When the GPU only is under stress, we observe all four cores running at 2.7 GHz (though in subsequent testing the clock rates jumped around more often). Meanwhile, the GPU clock rates fluctuate about as wildly as the CPU clock rates did in our CPU stress test. Thermals for both tests hovered between 65°C and 70°C.

Finally, under full system stress (GPU and CPU), we once again witnessed wild fluctuations in clock rates, with the CPU cores ultimately resting around a value of 1.6 GHz each. The GPU clock rate occasionally jumped to its maximum Turbo Core frequency of 686 MHz, but mostly remained at 497 MHz, which is the base clock rate. As you can see from the GPU-Z graph in the screenshot we posted, there were also periods where the GPU clock rate dropped below its base clock rate to around 335 MHz, which qualifies as throttling. However, again, this only seems to occur under very heavy system stress. Thermals once again remained near 70°C throughout all of this, occasionally rising slightly higher (at one point to nearly 75°C).

Lenovo

In idle mode, the IdeaPad Z585 remains properly cool, thanks to the constantly running fan. The maximum temperature in the central region of the keyboard was at 33 °C (91.4 °F). We measured 27 °C (80.6 °F) on the palm rest – a really good value, which only increases to 36 °C (96.8 °F) under load. This is very good. The stress test, however, shows the weaknesses: on the left is the only outlet for the hot air. On this spot we measured 51 °C (123.8 °F) - not dangerous, but still very high. If we use the notebook on the lap while browsing the web, the temperatures remain in the green. But as soon as we start a demanding application, the notebook becomes uncomfortably warm.

Even under 100% load on the processor and graphics card via Prime95 and FurMark, we did not observe any throttling - lowering of the frequency of components. The CPU reached a maximum of 90 °C (194 °F) after two hours, running constantly at 2700 MHz.

Note: the attached image shows the CPU running at 2.3 GHz during the Prime95 test.

[screenshot: CPU clock during the Prime95 test]


If you also look through the reviews, you will notice that this is a very good chip: 1.013 V under load vs. the 1.25 V of some of the other chips.

Anandtech Review

(Just look at relative numbers please).

[two benchmark charts from the AnandTech review]


0.77 x 4 = 3.08, which we do not see because Trinity can't keep the clocks up in multithreaded loads.
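
A back-of-the-envelope on what the throttling alone costs (the clocks are the approximate values from the reviews quoted above, so treat the result as a rough illustration):

    st_score = 0.77  # Cinebench R11.5 single-threaded score
    st_clock = 2.7   # GHz typically held during the single-threaded run
    mt_clock = 2.3   # GHz (roughly) with all four cores loaded
    ideal     = st_score * 4                          # 3.08 if the clocks held
    throttled = st_score * 4 * (mt_clock / st_clock)  # ~2.6 with the clock drop
    print(round(ideal, 2), round(throttled, 2))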

Richland is supposed to fix this somewhat so hopefully we will see some improvements.

Also, 3.5 GHz is the max SINGLE-CORE frequency for Richland. I highly doubt that it will run 4 cores even at 3.0 GHz. Even Intel can't get a 35-watt quad-core Ivy Bridge, which is much more efficient, to run at 3 GHz under load (the 35 W i7-3632QM only clocks in at 2.9 GHz under 4-core load).
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Therefore, apart from not providing a single reference for your previous "1.6 vs 1.6" claim despite being asked to, your main point is that the same company that is selling quads @ 3.0-3.5 GHz at 35 W could not design a similar future chip and instead decided to go for a custom eight-core @ 1.6-2.0 GHz at about the same TDP?

My point is as follows. The PS4 had what Mark calls a "developer-centric approach to design". It was designed around developer input (except for the finer technical details, of course).

Game developers know the benefits of multi-threading, and we have seen continuous scaling from single-threaded games to dual, quad... up to the recently launched Crysis 3.

We also know that Sony offered game developers the choice of the number of cores, and they chose eight during the developer-questioning phase {*}, as Mark recalled at GDC this year. They chose eight cores for the same reason that Crytek chose eight threads for its recent PC game.

Finally, AMD provided Sony a custom chip with eight cores.

Therefore it is not as some of you believe: that AMD went with an eight-core design because of some imagined power-envelope engineering problem, and that developers then adapted to the design because "they'll have no choice", as someone said here.

{*} When discussing the number of cores, Mark recalled:

No offense, but Crysis 3, though it uses up to 8 threads, sucks scaling-wise. It's Amdahl's law.
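
A quick Amdahl's law illustration (the 80% parallel fraction is an arbitrary example, not a measurement of Crysis 3):

    # Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the parallel fraction
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    p = 0.8
    print(speedup(p, 4))  # ~2.5x on 4 cores
    print(speedup(p, 8))  # ~3.3x on 8 cores: twice the cores, nowhere near twice the speed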

If you ask any dev what they want (assuming equal IPC), a quad at 3.2 GHz or an octo at 1.6 GHz, they will say the quad EVERY SINGLE TIME. No scaling inefficiencies and less trouble to code for.

They chose 8 because they needed 8. Crytek chose to support 8-core CPUs because the game wouldn't run well with only 4 cores (it's a very CPU-heavy game). Unless required to, no one is going to do more work (and spend more money) than they have to.

And if you are offered 4 Jaguar cores or 8 Jaguar cores, why on earth would you say 4? If the devs were offered a 7970-class GPU (at the same or similar price), what do you think they are going to say? They are obviously going to take the most power available to them.

But basically Sony said, "Our console can only use so much power, for thermal reasons (Xbox 360 RROD). How can we get the most performance per watt at the best price?"

Jaguar was the answer. 4 cores wouldn't cover the minimum needs (and they would have thermal budget to spare), so they chose 8 cores. Piledriver is much larger (the die area of a Piledriver module is much larger than two Jaguar cores) and therefore more expensive. Jaguar is more efficient than Piledriver (28 nm vs. 32 nm and a low-power design). Jaguar is also designed for 28 nm, the same as the GCN architecture (so it's much easier to put on one chip, with less R&D). This last point is probably the biggest reason: an APU would be very hard if they had to move Piledriver to 28 nm or GCN to 32 nm.

Have no doubt that had Intel offered to sell i7 QM mobile chips at the same price, Sony would have dropped Jaguar in a heartbeat.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
I will offer more evidence to support my idea that "more cores does not mean more power." If we look at the predecessor of Temash, Zacate, you will see that the E-240 and the E-450 have the same TDP: 18 W. They have identical amounts of cache per core and have GPUs with identical characteristics, save for the E-450's ability to turbo to 600 MHz and an 8 MHz boost to the base clock (500 MHz versus 508 MHz).

In fact, the E-450 looks to basically be two E-240 CPUs put onto one chip with a slight boost to the graphics core. Yet the TDP is the same.

You are essentially comparing a 1+80-core APU (one CPU core plus 80 GPU shader cores) against a 2+80-core APU. The difference is negligible, and both are rated at the same TDP.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
And if you are offered 4 Jaguar cores or 8 Jaguar cores, why on earth would you say 4?

You continue without offering any reference. I think you are just inventing this.

I have provided evidence showing that game developers chose 8-core chips (not a specific Jaguar octo @ 1.6 GHz): 8-core chips over 4-core chips and over 12- or 16-core chips, during the developer-questioning phase, before sitting down to design the hardware.

Then, during the hardware design phase, Sony chose a specific Jaguar octo @ 1.6 GHz.

Have no doubt that had Intel offered to sell i7 QM mobile chips at the same price, Sony would have dropped Jaguar in a heartbeat.

Doubt soooo much...
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
You continue without offering any reference. I think you are just inventing this.

I have provided evidence showing that game developers chose 8-core chips (not a specific Jaguar octo @ 1.6 GHz): 8-core chips over 4-core chips and over 12- or 16-core chips, during the developer-questioning phase, before sitting down to design the hardware.

Then, during the hardware design phase, Sony chose a specific Jaguar octo @ 1.6 GHz.



Doubt soooo much...

It's called common sense. They are going to want more. If a guy comes up to you and says, "You've won a free computer. Any computer you want (within reasonable limits)," you are going to get the best computer you can. If you are offered two bags of money, you are going to take the bag with more money in it.

It's simply the best fit (Jaguar is). The devs also said that more than 8 cores introduces scaling issues. Carmack said that, given the same power, he would always take that power on fewer cores. There is also the fact that a 12- or 16-core Jaguar would blow the CPU TDP budget (4 x 4-core Jaguar @ 25 watts = 100 watts).

The devs and Sony basically had a choice. Intel was too expensive (though if they could, they would go with Intel at the same price any day; better perf/watt). Piledriver was expensive and quite power hungry. If they were going to go with an APU design, then Jaguar and GCN are on the same node, unlike GCN and Piledriver, decreasing R&D costs. The four-core chips that were available were either too weak, too expensive, or problematic.

Sony got burned last cycle with a console that was too expensive. They don't want to make that same mistake.


How about we wait for Richland to be tested before making claims about it?