
Explain multiple cores to me..........

I need to better understand multi-core CPUs. Does someone have a good link that describes how each core handles data? Two 3.0 GHz cores don't magically equal 6 GHz, do they?

Simple... redesigning a processor to be faster, while making use of the fact that a transistor's physical size halves every 18 months thanks to miniaturization (leaving more space to fill if you keep the die the same size), is difficult.
The solution is ingeniously simple... make 2, or more, identical CPUs and put them together.
First you had multi-socket motherboards, which had 2 or more sockets for CPUs. (Those are still around, btw, and are used with multi-core CPUs in servers.)

Then came modules that were literally two processors sitting one next to the other:
http://en.wikipedia.org/wiki/Multi-Chip_Module

Later on it was modified to actually have them connected and sharing certain things (such as the communication controller and cache). Each is still literally an entire CPU by itself. This system relies on software to make use of it...

Naturally, software is wholly incapable of doing so without an immense investment of time and effort per program. So only certain subsets of programs enjoy the benefits, and older programs that weren't updated don't benefit AT ALL.
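To make "software has to make use of it" concrete, here's a minimal Python sketch (function names are made up for illustration): the serial version runs on one core no matter how many the machine has, while the explicitly split version can occupy several worker processes.

```python
# A workload written twice: serially, and split across worker processes.
# Only the split version can use more than one core; the serial version
# stays on a single core regardless of how many cores exist.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def serial_total(n):
    # Single-threaded: one core does all the work.
    return sum(range(n))

def parallel_total(n, workers=4):
    # Split [0, n) into one chunk per worker process.
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs the remainder
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer either way; only the core usage differs.
    assert serial_total(1_000_000) == parallel_total(1_000_000)
```

The point is the asymmetry: the parallel version required restructuring the code, and an old binary that only contains the serial version gains nothing from extra cores.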

There have been papers written about a "coreless" structure that would allow perfect scaling with more execution resources without any "cores" (or rather, one single core); this means performance scaling without having to rewrite software to use more and more cores... sounds pretty ideal but it requires major rework of the structure of the CPU and will take quite some time to achieve.

BTW: Wikipedia to the rescue: http://en.wikipedia.org/wiki/Multi-core_processor
 
Wait a minute, are you trying to say that a well made application which is cpu intensive will claim all cores so heavily that it causes a multicore system to be unresponsive? 😵 have you ever heard of timeslices and thread priorities?

Jeez.
 
Wait a minute, are you trying to say that a well made application which is cpu intensive will claim all cores so heavily that it causes a multicore system to be unresponsive? 😵 have you ever heard of timeslices and thread priorities?

Jeez.

Who is saying that?
 
Wait a minute, are you trying to say that a well made application which is cpu intensive will claim all cores so heavily that it causes a multicore system to be unresponsive? 😵 have you ever heard of timeslices and thread priorities?

Stop trolling. I think I was perfectly clear in what I meant.
 
Another point: as process tech continues to shrink and transistor budgets continue to double every 18 months, it becomes increasingly impractical to design a monolithic CPU.

You can only throw so many man-hours at a CPU; a team of 10,000 engineers is often less efficient than, say, 800. You can use computer algorithms to draw some of the schematics, but you always end up with worse chips than those from a skilled engineer. And since AMD still exists, there's competition to get a new architecture/process node out ASAP.

So you run into the problem that you simply can't design CPUs in step with the doubling transistor budgets. What do you do?

A: You can add gobs more cache (see every Intel tick ever), but eventually it no longer offers enough performance for the die area it costs.

B: You design a single, smaller core and duplicate it many times over, adding some communication uncore.
 
Wait a minute, are you trying to say that a well made application which is cpu intensive will claim all cores so heavily that it causes a multicore system to be unresponsive? 😵 have you ever heard of timeslices and thread priorities?

Jeez.

ok, I found the post...
And:
1. He is right.
2. Timeslices and thread priorities exist on single cores as well.
3. Timeslices and thread priorities are imperfect, which is why Windows becomes unresponsive at 100% CPU usage, regardless of how many cores you have.
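A quick Python sketch of what timeslicing does and doesn't buy you (illustrative only): two busy threads that never yield voluntarily both still make progress, because the scheduler preempts them — but sharing the CPU fairly is not the same as having CPU to spare.

```python
# Two CPU-bound threads that never sleep or yield. The OS scheduler
# (and CPython's periodic interpreter switch) preempts them anyway,
# so both counters advance. Timeslicing divides the CPU among threads;
# it cannot stop a fully loaded machine from feeling sluggish.
import threading
import time

counts = {"a": 0, "b": 0}
stop = threading.Event()

def spin(name):
    while not stop.is_set():
        counts[name] += 1  # pure busy work, no cooperative yielding

threads = [threading.Thread(target=spin, args=(n,)) for n in ("a", "b")]
for t in threads:
    t.start()
time.sleep(0.5)  # let them contend for the CPU briefly
stop.set()
for t in threads:
    t.join()

assert counts["a"] > 0 and counts["b"] > 0  # both received timeslices
```

Both counters end up nonzero even though neither thread ever cooperates — that's preemption working. Yet while they run, any other work on that core is squeezed into whatever slices remain.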
 
Wait a minute, are you trying to say that a well made application which is cpu intensive will claim all cores so heavily that it causes a multicore system to be unresponsive? 😵 have you ever heard of timeslices and thread priorities?

Jeez.

Yeah... when I set priority to highest while encoding,

then I'll see all 12 threads light up, and I will lag a bit.

But that's because I set the priority to highest.
 
Yeah... when I set priority to highest while encoding,

then I'll see all 12 threads light up, and I will lag a bit.

But that's because I set the priority to highest.

Or if you run an intensive program that automatically sets its own priority to highest 😛
Or if you are using a program that sets its own priority to idle because it underestimates how responsive you want it to be.

I have seen both of those things happen.
 
No, I was right, actually. You just didn't understand that 'more efficient' does not necessarily equal 'faster'.
'More efficient' here means that it's easier to extract performance from a single thread running on a 2 GHz core than from two threads running on a 1 GHz core each.

If you want the theoretical background to it all, I'll just refer you to Amdahl's law.
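For the impatient, Amdahl's law fits in a few lines of Python (a sketch; `amdahl_speedup` is a made-up helper name):

```python
# Amdahl's law: if a fraction p of a program's work is parallelizable,
# the best speedup you can get from n cores is 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    """Maximum speedup for parallel fraction p (0..1) on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# A perfectly parallel program scales linearly with cores...
print(amdahl_speedup(1.0, 4))   # → 4.0
# ...but a half-serial program barely benefits from a second core:
print(amdahl_speedup(0.5, 2))   # → ~1.33
```

This is also the clean answer to the thread's opening question: two 3.0 GHz cores don't add up to 6 GHz, because the serial fraction of the work still runs on a single core at 3.0 GHz.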

If efficiency = "the extent to which time is well used for the intended task",
then yes, he is right.
 
If efficiency = "the extent to which time is well used for the intended task",
then yes, he is right.

Yes, efficiency according to dictionary.com:
"The ratio of the effective or useful output to the total input in any system."
My example proposed two CPUs that would theoretically have the same maximum performance. The result is that in this particular case the most efficient of the two is also the fastest.
 
Yes, efficiency according to dictionary.com:
"The ratio of the effective or useful output to the total input in any system."
My example proposed two CPUs that would theoretically have the same maximum performance. The result is that in this particular case the most efficient of the two is also the fastest.

In the x86 world, you will never see 100% scaling in multi-threaded scenarios, except maybe in benchmarks. The best way to gain performance would be to increase the IPC per core, which scales very linearly, but that is a very hard task to accomplish.
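As a back-of-the-envelope illustration of that point (a toy model with made-up numbers, not a benchmark): IPC gains multiply straight into single-thread performance, while each extra core contributes less than a full core's worth.

```python
# Toy model: single-thread performance is roughly IPC x clock. Extra
# cores add only a fraction of a core's worth each, since real x86
# workloads never scale 100%. The 0.85 figure is purely illustrative.
def single_thread_perf(ipc, ghz):
    return ipc * ghz

def multi_core_throughput(ipc, ghz, cores, per_core_gain=0.85):
    # First core counts fully; each additional core adds ~85% of one.
    return single_thread_perf(ipc, ghz) * (1 + (cores - 1) * per_core_gain)

# +50% IPC gives +50% single-thread performance, with no software changes:
assert single_thread_perf(3.0, 3.0) == 1.5 * single_thread_perf(2.0, 3.0)
# Doubling cores less than doubles throughput under imperfect scaling:
assert multi_core_throughput(2.0, 3.0, 4) < 2 * multi_core_throughput(2.0, 3.0, 2)
```

The IPC path helps every program automatically; the extra-cores path only helps software written to use them, and even then with diminishing returns.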

Originally posted by: taltamir

"Those are all due to the cost, effort, and time it takes to develop more efficient cores. The dual core is a simple, easy, and inefficient way to throw more transistors at the problem. It results in a faster overall chip because, despite being inefficient, you ARE throwing more transistors at the problem, and it is so much easier to develop than an actual architectural improvement to a single core"

"This is not unfeasible as GPUs have shown that it is entirely possible to make a single core with massive amount of execution resources, and scale that amount freely.

x86 simply does not allow such flexibility in its ALU resources."

Makes perfect sense...
 