
AMD APU not showing proper core amounts?

Sort of. The front-end is still shared, but at least each integer core now has its own instruction decoder.

A little more than sort of: scaling is 94.2% for 4 threads across 2 modules with Kaveri, up from 84.2% on a 2-module Piledriver. That is for CB 11.5, of course.
 

That's 94.2% and 84.2% per core, or 88.4% and 68.4% extra for the module, right? When the second core is active you get 88.4%/68.4% more performance, since each core is capable of 94.2% and 84.2% of full throughput with Steamroller and Piledriver respectively.
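To keep the per-core and per-module figures straight, the arithmetic can be written out. A small sketch using the numbers quoted in this exchange:

```python
# Per-core throughput with both cores of a module loaded, relative to
# a single thread running alone (figures quoted in the posts above):
per_core_sr = 0.942  # Steamroller (Kaveri)
per_core_pd = 0.842  # Piledriver

# Extra performance from waking the module's second core:
# two cores at X each total 2*X, versus 1.0 for one thread,
# so the second core adds 2*X - 1.
gain_sr = 2 * per_core_sr - 1   # 0.884 -> +88.4%
gain_pd = 2 * per_core_pd - 1   # 0.684 -> +68.4%
```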
 
I seem to have created quite the discussion on technical topics that I'm not versed well enough in to understand anymore 🙂 So did my original question ever get answered or is it still being discussed in there somewhere?

And should I then assume that my next CPU should be Intel as it doesn't have shared "cores" or something like that?
 

Basically, Windows has scheduling tricks built in for Hyperthreading. Applying the same tricks to AMD modules can improve your performance. The side effect is that Windows calls each module a "core" with two threads, like a Hyperthreaded core.

As for whether to buy AMD or Intel: well, it depends on your needs. Benchmarks are your friend!
 

Ok, that makes sense, but does my APU actually have 4 physical cores, or is it just technical trickery on AMD's part?
 
You have 4 actual physical cores. There is no technical trickery in it.

A core is defined by:
A control unit, which each core in a Bulldozer module has.
An instruction bus, which each core in a Bulldozer module has.
A data bus, which each core in a Bulldozer module has.
A datapath, which each core in a Bulldozer module has.
 
And should I then assume that my next CPU should be Intel as it doesn't have shared "cores" or something like that?
No, your next CPU should be an Intel if you want more performance for more money. If you don't want to spend more, or you don't care about certain aspects of a system's performance, then it's not so clear-cut.
 
As I understand it, with scaling of about 80% you only get about 160% of single-core performance out of two cores.

Anandtech bench results of FX-6300:

[image: benchmark chart]
470/6 = 78.33, which means each core is performing at around 81.5% due to sharing. Using the second core in a module doesn't make it 80% faster; rather, both cores take a ~20% hit, so loading a module fully gets you about 60% more performance.
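The arithmetic behind that paragraph, spelled out. Note the single-thread score is an assumption (the post doesn't state it); a value around 96 reproduces the quoted ~81.5%:

```python
total_6t = 470.0            # 6-thread score read off the chart
per_thread = total_6t / 6   # ~78.33 per thread

# Hypothetical single-thread score, not stated in the thread;
# ~96 is what makes the quoted ~81.5% figure fall out.
single_thread = 96.0
scaling = per_thread / single_thread  # ~0.816, i.e. each core at ~82%
```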


The only way to find out how much throughput CMT gives you is to test two independent cores (one from each of two different modules) and then a full module.

The two independent cores give you the 100% baseline, and the full module gives you the CMT scaling.

Going from a single thread to 4 threads (2 full modules) only gives you the overall multithreading scaling, not the CMT scaling. 😉
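The measurement described above could be sketched like this. The scores are hypothetical, chosen to line up with the 84.2% Piledriver figure from earlier in the thread:

```python
def cmt_scaling(two_independent_cores, one_full_module):
    """Full-module throughput relative to two cores that share
    nothing (one core taken from each of two modules)."""
    return one_full_module / two_independent_cores

# Hypothetical Cinebench-style scores, single thread normalized to 100:
independent = 200.0   # one thread per module: no sharing penalty
full_module = 168.4   # both threads packed into one module

print(cmt_scaling(independent, full_module))  # ≈ 0.842
```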
 
Ok, that makes sense, but does my APU actually have 4 physical cores, or is it just technical trickery on AMD's part?
It isn't that simple with Bulldozer.

http://images.anandtech.com/doci/6201/Screen Shot 2012-08-28 at 4.38.05 PM.png

That's one module; your processor has 2 of them. Shared L1 instruction cache (not shown), shared L2 cache, shared fetch, I think a shared register file, and shared floating point. Those shared resources act like 1 core. But the decoders are separate (they were shared previously), and the integer units are entirely separate, acting like 2 cores.
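One way to make the 2-vs-4 ambiguity concrete; the resource lists are paraphrased from the post above and illustrative rather than exhaustive:

```python
# Shared per module vs. private per core, per the post above:
MODULE_SHARED = ["fetch", "L1 instruction cache", "L2 cache",
                 "floating-point unit"]
PER_CORE = ["instruction decoder (separate since Steamroller)",
            "integer execution units", "L1 data cache"]

def core_count(modules, count_by="integer"):
    """'Cores' under two conventions: counting integer clusters
    gives 2 per module; counting shared front-ends gives 1."""
    return modules * (2 if count_by == "integer" else 1)

print(core_count(2, "integer"))    # 4 - AMD's advertised count
print(core_count(2, "frontend"))   # 2 - one per shared front-end
```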
 
That's 94.2% and 84.2% per core, or 88.4% and 68.4% extra for the module, right? When the second core is active you get 88.4%/68.4% more performance, since each core is capable of 94.2% and 84.2% of full throughput with Steamroller and Piledriver respectively.

With a core performing at 100 in ST, Piledriver will do 4 × 84.2 = 336.8 in MT while Kaveri will do 4 × 94.2 = 376.8; a module will scale at 84.2% and 94.2% respectively. I don't know where you pulled the bolded percentages from. For two cores within the same module you'll get half of the 4T scores I mentioned above; if the two threads are dispatched to two modules, the score will of course be 200.

Edit: I guess you're talking about the delta between one and two threads within the same module; in that case your numbers make sense.
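Checking the totals in that reply, with the single-thread score normalized to 100 and the scaling figures from earlier in the thread:

```python
single = 100.0

pd_4t = 4 * 0.842 * single   # 336.8 for a 2-module Piledriver
sr_4t = 4 * 0.942 * single   # 376.8 for a 2-module Kaveri

# Two threads packed into one module: half the 4T totals.
pd_2t_one_module = pd_4t / 2   # 168.4
sr_2t_one_module = sr_4t / 2   # 188.4

# Two threads spread across two modules: no sharing, so 200.
spread_2t = 2 * single
```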
 
Does it really matter how many "cores" you have? The performance "is what it is" whether you define the cpu as having 4 cores or 2. Personally I consider it 4 cores that share some resources. AMD defines it that way as well, and that is the generally accepted definition.
 

Pretty much this. Hyperthreading provides less than half an extra core's worth of performance, while a module's second core provides more than half. I wouldn't call a Hyperthreaded thread a core, but I would call a module 2 cores.
 
Saying that a chip with AMD-style modules has a set number of cores seems like the sort of simplification that obscures important information either way.
 