is dual core just a crutch for poor OS scheduling?

tynopik

Diamond Member
one of the major claims in favor of dual core is improved responsiveness, and that one cpu intensive task doesn't bog down the entire system.

However, could this same result be achieved simply through an improved scheduling algorithm in the OS? The improved responsiveness is apparent on dual-core even when there are FEWER total cpu cycles compared to a single-core system

So all dual core is doing is FORCING a HARD CAP on how much cpu power a single process can take; seems like that would be something that is easy to emulate in software
 
No, Dual Core allows true multitasking while an OS scheduling algorithm is not really doing two (or more) processes at once -- but rather switching between them so fast that it seems like it's doing several things at the same time to the user. Because of this, dual core will truly offer more responsiveness to the system.

Note that most modern CPUs do offer the true ability to handle multiple tasks at once -- even without dual core. This is achieved through pipelining and multiple functional units inside the CPU. I actually think hyperthreading might be more responsive because it offers finer-grained parallelism compared to dual cores (which offer coarser parallelism).
 
but if a single core had the same total number of cpu cycles available to it + a little bit extra for task switching overhead, there's no reason it can't accomplish the same things in the same amount of time, and hence be just as responsive

true it can't do two things at the EXACT same time, but the time slices can be made small enough that we would never notice the difference
 
It's really a crutch for limits on how fast a single-core CPU can be clocked while still being manufactured and cooled at a reasonable cost.

It's true that a 6 GHz single core CPU with good OS scheduling would be more useful than dual-core 2 x 3 GHz with current applications, but manufacturing technology just isn't up to the task.
 
Yes, it's what Dave said. We are starting to run into limits with how fast we can clock CPUs, so now they are starting to think about having multiple CPUs and trying to split up the work among them. Unfortunately, there are limits to that approach too (eventually the communication overhead can negate any advantage of doing the work in parallel). And there are some fundamental limits too. For example, I think the fastest serial sorting algorithm (based on comparisons) is O(n log n), while an equivalent parallel algorithm can sort in O(log n).
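As a rough illustration of splitting sorting work among workers (my own sketch, not anything from the post -- and note Python threads don't truly run CPU-bound work in parallel, so this only shows the structure of the idea):

```python
# Sketch: divide-and-conquer "parallel" sort -- sort chunks concurrently,
# then merge. Function names here are made up for the example.
from concurrent.futures import ThreadPoolExecutor
import heapq

def parallel_sort(data, workers=2):
    """Sort `data` by sorting equal-sized chunks concurrently, then merging."""
    if len(data) < 2:
        return list(data)
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        sorted_chunks = list(pool.map(sorted, chunks))
    # k-way merge of the already-sorted chunks
    return list(heapq.merge(*sorted_chunks))
```

With two real cores the chunk sorts could genuinely overlap; the final merge is the serial "communication overhead" step that limits the speedup.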

They actually thought about the dual core approach a long time ago. Who here remembers the Inmos Transputer? That thing was quad core and was designed for efficient message passing based parallel computation. Atari was supposed to come out with a unit based on it -- I was so excited but it never really came out in the mass market.
 
Switching back and forth is good and all, but what about when you are doing something very cpu intensive? It's very hard to do that. You can't play doom3 and encode a video to divx with a normal pc, unless it's a dual processor system. With dual core you can do that. TRUE multitasking.

Using email and Word at the same time is hardly considered true multitasking. Oh man oh man... you're sending and receiving while you're changing the font color.. OMG OMG
 
Originally posted by: V00D00
Switching back and forth is good and all, but what about when you are doing something very cpu intensive? It's very hard to do that. You can't play doom3 and encode a video to divx with a normal pc, unless it's a dual processor system. With dual core you can do that.

if the single core has the same number of cpu cycles available to it as the dual core, there's no reason it shouldn't be able to perform just as well

although i agree, it just doesn't work with current OSs

 
Originally posted by: tynopik
Originally posted by: V00D00
Switching back and forth is good and all, but what about when you are doing something very cpu intensive? It's very hard to do that. You can't play doom3 and encode a video to divx with a normal pc, unless it's a dual processor system. With dual core you can do that.

if the single core has the same number of cpu cycles available to it as the dual core, there's no reason it shouldn't be able to perform just as well

although i agree, it just doesn't work with current OSs

1. fix it
2. profit!!!
 
Let's say you have a single core CPU. It has six cycles available to it. It needs to do 2 tasks, each taking 3 cycles. So, it performs them like:

1
1
1
2
2
2

If it was multitasking them (switching between them), it would look like:

1
2
1
2
1
2

The total time is still 6 cycles.

For a dual core CPU, it can do this:

1 2
1 2
1 2

For a total time of 3 cycles. It executes two cycles (one per core) at the same time.

Now, in order for the single core CPU to do it in the same amount of time, it would have to be clocked at 2x the speed of the dual core. And as we mentioned, we are starting to run into limits with how fast we can clock these CPUs. That's why they're looking into dual cores and stuff.
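The three schedules above can be written out as a toy simulation (my own sketch; it just lays out cycle timelines, no real scheduling involved):

```python
# Toy model of the 2-tasks-of-3-cycles example. A "task" is a
# (task_id, cycles) pair; these helpers build the per-cycle timelines.

def run_sequential(tasks):
    """Finish each task completely before starting the next."""
    timeline = []
    for tid, cycles in tasks:
        timeline += [tid] * cycles
    return timeline  # length = total cycles

def run_timesliced(tasks):
    """Round-robin, one cycle per timeslice. Same total length."""
    remaining = dict(tasks)
    timeline = []
    while remaining:
        for tid in list(remaining):
            timeline.append(tid)
            remaining[tid] -= 1
            if remaining[tid] == 0:
                del remaining[tid]
    return timeline

def run_dual_core(tasks):
    """One task per core; each step runs one cycle on every busy core."""
    lanes = [[tid] * cycles for tid, cycles in tasks]
    steps = max(len(lane) for lane in lanes)
    return [tuple(lane[i] if i < len(lane) else None for lane in lanes)
            for i in range(steps)]
```

For `tasks = [(1, 3), (2, 3)]`, the first two take 6 cycles while `run_dual_core` finishes in 3 steps, matching the post.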
 
Originally posted by: shoRunner
1. fix it
2. profit!!!

unfortunately Microsoft hasn't hired me yet 🙁


Originally posted by: StormRider
Now, in order for the single core CPU to do it in the same amount of time, it would have to be clocked at 2x the speed of the dual core.

or the dual core would have to be at 1/2 the speed of the single core, whichever way you want to look at it, but yes that was part of my assumption, note:

"if the single core has the same number of cpu cycles available to it as the dual core"

i think we're talking about two different issues. yes, to keep increasing raw total power, we need to move to dual cores and beyond. However there are a significant number of people who claim they would prefer dual cores EVEN if they were less than half the clock speed of the top-of-the-line single core, because of improved responsiveness. Or, less extreme, those who claim the benefits of dual core are more than just the increased raw power.

this is the issue i was looking at

even in anand's dual core preview, there were some tasks where the dual-core performed significantly better than its total clock-speed advantage would lead you to believe
 
Dual core is there to make up for CPU clock speeds no longer progressing the way they used to. Look at the chart! GHz has leveled off.
 
Dual core has all the benefits of SMP in one chip. Bringing it to the masses will help to get more SMP friendly programs into the mainstream.
 
Originally posted by: tynopik
but if a single core had the same total number of cpu cycles available to it + a little bit extra for task switching overhead, there's no reason it can't accomplish the same things in the same amount of time, and hence be just as responsive.

No. You are confusing two different quantities here - throughput (bandwidth) vs. latency. A true dual-core system can offer lower latencies than a faster single-core system, even if the overall throughput is equivalent. This may also be why some P4 HT systems "feel" faster to some people, even though in benchmarks the A64 beats it. Those benchmarks primarily test throughput, and don't test execution (or interrupt-event-based) latency. But latency is extremely critical to the user experience with the system.
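The throughput-vs-latency distinction can be put in numbers (a made-up sketch: two 3 ms jobs, 1 ms timeslices, context-switch cost ignored):

```python
def completion_times_timesliced(jobs_ms, quantum_ms=1):
    """Round-robin on a single core; return when each job finishes (ms)."""
    remaining = list(jobs_ms)
    finish = [None] * len(jobs_ms)
    clock = 0
    while any(f is None for f in finish):
        for i in range(len(jobs_ms)):
            if finish[i] is not None:
                continue
            step = min(quantum_ms, remaining[i])
            clock += step
            remaining[i] -= step
            if remaining[i] == 0:
                finish[i] = clock
    return finish

def completion_times_dual_core(jobs_ms):
    """One job per core: each finishes after its own run time."""
    return list(jobs_ms)
```

For `[3, 3]`: time-sliced gives `[5, 6]` while two cores give `[3, 3]` -- identical total work (throughput), but every job sees lower completion latency on two cores.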
Originally posted by: tynopik
true it can't do two things at the EXACT same time, but the time slices can be made small enough that we would never notice the difference
To make the time-slices that small would result in the relative overhead of the time-slicing mechanism itself (the scheduler and the timer interrupts that drive its operation) being so high that it would severely cut into the execution throughput of your system. IOW, you can reduce the size of the timeslices, and that is often a good idea for processing tasks involving fine-grained timing such as multimedia work, but only up to a point. (This is something that BeOS excelled at as well - it ran the scheduler timer at a finer-grained quantum setting than Windows and other desktop OSes normally do in order to achieve that result even on uni-proc systems. BeOS excelled even more on SMP systems.)
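That overhead point is easy to quantify with a back-of-the-envelope sketch (the numbers below are illustrative assumptions, not measurements):

```python
def scheduler_overhead(timeslice_us, switch_cost_us):
    """Fraction of CPU time lost to the context switch, per quantum."""
    return switch_cost_us / (timeslice_us + switch_cost_us)

# With a fixed ~10 us switch cost, shrinking the quantum from 10 ms to
# 100 us takes the overhead from roughly 0.1% to roughly 9% -- the
# throughput hit described above.
```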
 
Originally posted by: tynopik

or the dual core would have to be at 1/2 the speed of the single core, whichever way you want to look at it, but yes that was part of my assumption...

But your assumption is not taking into account the limits of pipelining instructions to each CPU. The dual-core CPU would still benefit from being able to run one lane ahead of the single-core unit, all other stats being equal.

You also seem to imply that the chip maker would deliberately fab a dual-core CPU that is designed to run at half the speed of existing single cores, when the manufacturing process is already in place to design those cores to run at close to the full speed already established by the single-core CPU.
 
Originally posted by: VirtualLarry

To make the time-slices that small would result in the relative overhead of the time-slicing mechanism itself (the scheduler and the timer interrupts that drive its operation) being so high that it would severely cut into the execution throughput of your system. IOW, you can reduce the size of the timeslices, and that is often a good idea for processing tasks involving fine-grained timing such as multimedia work, but only up to a point. (This is something that BeOS excelled at as well - it ran the scheduler timer at a finer-grained quantum setting than Windows and other desktop OSes normally do in order to achieve that result even on uni-proc systems. BeOS excelled even more on SMP systems.)

Precisely my thoughts - plus you have to add in the inherent latencies of fetching data from RAM to the caches when switching threads. Hence the reason you don't run multiple benchmarks at the same time to "stress" the system.
 
Remember that context switches take on the order of microseconds to occur, and you lose all cache context, branch predictor table entries, and whatever memory structures have been warmed up by the previous running process/thread. Going to the OS is always going to be slower.
 