Single vs Dual Core

Sparks1013

Junior Member
Aug 3, 2008
2
0
0
I'm positive this has been talked about before here but after spending 10 minutes searching I couldn't find the answer I'm looking for.

First, would a Dual Core @ 1.5 each = Single Core 3.0?

I already know about the 32-bit and 64-bit mechanics and how they affect a dual core; 32-bit will only use the first core.

Second, how exactly does a dual core work? What I mean is, how is it different from a single core other than splitting the load in 64-bit applications?

Which would be better out of the 2 mentioned above in a 64bit application? Why?

Overall what would be better for a gaming system?

Basically I want to know why dual core exists and how its processing differs from a single core's.
 

deamer44

Guest
May 25, 2008
168
0
0
From what I have learned, a 1.5GHz dual core does not equal 3.0GHz. This is because both cores can only be used when the software application is written for 2 cores. The same goes for quad cores; a 2.4GHz quad core will usually run slower in most games than a 3.0GHz dual core, because most games are only written for single or dual core.

However, I would recommend a dual core for a gaming system, as more games that support dual core will be coming out, giving you better performance in games, and also outside games when running multiple applications.

I don't think a 32-bit OS will only use a single core; to my knowledge it will use up to 4 cores (depending on the OS). I think Vista distributes the use of the cores very efficiently (so I'm told). I also think L2 cache will make a big difference going from single to dual core.
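For what it's worth, a program can simply ask the OS how many logical cores it is allowed to use, whichever bitness it was built for; a minimal sketch assuming C++11's std::thread:

```
// Sketch: ask the OS how many hardware threads it exposes to a program.
// Works the same whether the binary is 32-bit or 64-bit.
#include <cstdio>
#include <thread>

int main() {
    // May report 0 if the value is not computable on the platform.
    unsigned n = std::thread::hardware_concurrency();
    std::printf("hardware threads reported by the OS: %u\n", n);
    return 0;
}
```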

Sorry about the reply being a bit messy, hope this helps.

Tom
 

Sparks1013

Junior Member
Aug 3, 2008
2
0
0
Oh, your reply was fine. I'm running Vista on a dual core laptop, but those specs above aren't what I have; I just used those as an easy example. I have a gadget that shows what my 2 cores are up to, updated in real time, so I have a good grip on what's going on. I'm just a little confused about a few things.

I've been told that one core of a dual core is faster than a single-core CPU of the same speed, say 1.5 vs 1.5, but that both cores together are not faster than a single core of the combined speed, say 1.5+1.5 (3.0) vs 3.0.

I'm just trying to better understand that.
 

betasub

Platinum Member
Mar 22, 2006
2,677
0
0
The OP seems somewhat confused and reveals some misconceptions.

The handling of 32-bit code and 64-bit code is not a function of the processor being single or dual-core. Anandtech discussed the benefits of 64-bit enabled hardware on the release of the Athlon64 (single-core, 64-bit enabled) back in 2003.

As for the effective speed/power/IPC of single vs dual-core processors, they are best compared using benchmarks, not clock speed (GHz). A single-threaded application doesn't show any gain from the presence of additional cores, so it will run at the same speed on a single-core as on a multi-core processor of the same architecture and clock speed. Some benchmark suites run multiple applications simultaneously to simulate a certain type of heavy workload - not surprisingly, multi-core processors handle this better than an equivalent single core. In the OP's example, dual-core @ 1.5GHz = single-core @ 3.0GHz only in the perfect case: same architecture, with the work perfectly distributed across the available cores.
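A quick back-of-the-envelope version of that perfect case, using the OP's numbers (just a sketch): model the job as a parallel fraction p and a serial fraction 1 - p, with each 1.5GHz core doing half the work per second of the 3.0GHz core.

```
// Back-of-the-envelope: when does 2 x 1.5GHz match 1 x 3.0GHz?
// Runtime is normalized so the 3.0GHz single core takes 1.0.
#include <cstdio>

int main() {
    for (double p = 0.0; p <= 1.0001; p += 0.25) {
        double single_3ghz = 1.0;             // normalized runtime
        double dual_15ghz  = 2.0 * (1.0 - p)  // serial part at half speed
                           + 2.0 * p / 2.0;   // parallel part on two cores
        std::printf("parallel fraction %.2f -> dual/single runtime ratio %.2f\n",
                    p, dual_15ghz / single_3ghz);
    }
    return 0;
}
```

Only at p = 1.00 does the ratio reach 1.0, i.e. the dual 1.5GHz matches the single 3.0GHz only when every last bit of the work can run on both cores.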
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Imagine you have a 100m race track.
You have 2 runners who can each run it in 20 seconds, against a single runner who can run it in 10 seconds.

2 runners = 1.5GHz dual core, 1 runner = 3GHz single core.

If you want one parcel taken from one end to the other, the single runner can take it on his own in 10 seconds, and one of the other two guys can take it in 20 seconds, so the single is twice as fast.
If you need two parcels taken from one end to the other, you can use both slower runners, so they deliver 2 in 20 seconds, while the single runner takes 2x10 seconds, so 20 seconds total as well.

In some situations, 1.5GHz dual can = 3GHz single, but only if the workload can be done by both cores at the same time.

Pretty much any application can be multi-threaded (run on multiple cores); it doesn't need to be 32-bit or 64-bit. It's the application that needs to support running on multiple cores, though.



Dual core exists because it has to.
CPU designers had a challenge: make CPUs faster. They could either make a single core go faster, or add more cores so you have more potential processing power (if the application can use multiple cores or you use lots of programs at the same time).
You can see with the Pentium 4 that increasing clock speed can be problematic. They got to ~4GHz and power consumption/heat output was monumental. They couldn't really go any higher. Then we got dual cores, which meant you had two cores, and while heat output and power use were still high, there was a lot more potential processing power.

Dual cores are a solution to the problem of making CPUs faster, instead of increasing the speed of a single core. Now, instead of increasing clock speeds significantly, we are continually adding more cores (so quad-core CPUs will be out soon, giving even more potential power). The only problem is that applications need to be written to take advantage of multiple cores (and 32-bit or 64-bit has no influence on this element).
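To make the "application needs to support multiple cores" point concrete, here is a minimal sketch (assuming C++11's std::thread; the summing workload is just an illustration) of one job split into two halves that a dual core can run at the same time, like the two runners above:

```
// Minimal sketch of what "written for two cores" means in practice:
// split one job into two halves and run each half on its own thread.
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(10000000, 1);
    long long sumA = 0, sumB = 0;
    auto mid = data.begin() + data.size() / 2;

    // Each thread sums one half of the data; on a dual core the two
    // halves can run at the same time, like the two runners above.
    std::thread a([&] { sumA = std::accumulate(data.begin(), mid, 0LL); });
    std::thread b([&] { sumB = std::accumulate(mid, data.end(), 0LL); });
    a.join();
    b.join();

    std::printf("total = %lld\n", sumA + sumB);
    return 0;
}
```

A single-core CPU still runs this correctly; it just runs the two halves one after the other instead of side by side.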
 

tallman45

Golden Member
May 27, 2003
1,463
0
0
Try running a Norton scan on a single-core 3GHz rig, then go do something simple like access the internet and read an Anandtech forum; it's not going to be so great.

Now do that on the 2 x 1.5GHz machine; far, far better.

The OS balances the tasks; you do not need some special app written for dual cores. XP or Vista does most of the load sharing.
 

mshan

Diamond Member
Nov 16, 2004
7,868
0
71
Anyone know if Mac OS X Leopard is specifically written for dual core processors?

Reason I ask is I want to create a music server using Leopard to play back music using iTunes Front Row Interface (Core Video, I think). Output would be via an Apogee Duet, which uses Core Audio.

Would a Conroe-L Celeron 400 series CPU be powerful enough to do this smoothly? (I like this option because it only consumes 35W.) Or do I really need something like an E7200?

 

AleleVanuatu

Member
Aug 16, 2008
95
0
0
Dual-core is a polished turd and enthusiasts are loving it. 99% of apps run on a single core, and are completely bound by single-path execution.

Look at Firefox or IE. Run a JS intensive page, and watch the rest of your browser windows get all laggy as well. It's all running in the same process, and that's crap right there.

It's all about raw clock-speed in the end, on Core 2 of course. So 4 GHz matters, dual-core or quad-core, not so much.

Software is going to take 10-15 years to work with this stuff. Regular software I mean, not stupid encoding crap that everyone loves to toss out as if everyone rips dvds to x264. LoL. Just LoL.
 

tallman45

Golden Member
May 27, 2003
1,463
0
0
Software that takes advantage of dual cores is available today, and has been for years: it's called Microsoft operating systems.

Go to your active tasks and you will see approx. 40 active processes run by the OS. Windows 2000 Pro all the way to current will load-balance the active tasks over the 2 cores. I am running only Firefox as I type right now, but there are 40 other processes active and running; XP is load balancing over the multiple cores, and I need to do nothing for this to occur. If my Norton virus scan were to start up, guess what, XP is going to put it on the core with the least usage, automatically. See what happens on a single-core processor when a virus scan starts: everything else stops cold, and you may as well walk away.

You do not need apps designed to take advantage of dual cores; you already have an OS that can load-balance over the 2 cores.
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
The reason that multi-core CPUs exist on the desktop is not because they are the most ideal solution (just as with GPUs: given identical performance, it is almost always better to have a single CPU rather than two, because applications do not scale 100%). The reason is that multiple cores are the only way to convincingly increase performance at the levels that consumers have demanded for years. You can no longer achieve 2x performance every year or so by simply revising the core logic and increasing clock speeds. The Pentium 4 is a prime example of this. A core specifically designed for high clock speeds was not able to come close to its targets, because increasing clocks is not a feasible way of increasing performance once you reach the 3-4GHz range. And increasing performance by making the core more complex is not a good way either; doubling the number of transistors, along with heat/power, may provide a 40-50% increase in performance. That is not a good way to do things anymore.

So now the plan is to slowly increase the number of cores while simultaneously making small, power efficient adjustments to the core. You can see that with Phenom over A64 X2; it added more cores, and at the same time IPC was increased. Nehalem is another example, with more cores & threads but also a power-efficient increase in IPC.

There is no reason that you should buy a single-core CPU today. Most apps that people on here are going to be running take advantage of at least dual-core. The vast majority of games take advantage of dual-core CPUs and if you want good performance, a dual-core CPU is required. Having two processing cores also greatly improves system responsiveness when running multiple applications; there is a clear advantage over single-core even for someone not running an app that itself takes advantage of 2 cores.

If you have a single threaded app, Windows spreads the load over both cores unless you set affinity (in task manager, setting the app to run on a specific CPU).
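For reference, the "set affinity" step mentioned above can also be done from code; below is a minimal sketch (not from this thread) using the Win32 GetProcessAffinityMask/SetProcessAffinityMask calls, with the core-0 mask purely as an example:

```
// Minimal sketch: pin the current process to core 0 on Windows,
// mirroring what Task Manager's "Set Affinity" dialog does.
// (Illustrative only; error handling kept to a bare minimum.)
#include <windows.h>
#include <cstdio>

int main() {
    DWORD_PTR processMask = 0, systemMask = 0;
    // Query the masks the OS currently allows for this process.
    if (!GetProcessAffinityMask(GetCurrentProcess(), &processMask, &systemMask)) {
        std::printf("GetProcessAffinityMask failed\n");
        return 1;
    }
    std::printf("current mask: 0x%llx, system mask: 0x%llx\n",
                (unsigned long long)processMask, (unsigned long long)systemMask);

    // Restrict the process to CPU 0 only (bit 0 of the mask).
    if (!SetProcessAffinityMask(GetCurrentProcess(), 0x1)) {
        std::printf("SetProcessAffinityMask failed\n");
        return 1;
    }
    std::printf("now pinned to core 0\n");
    return 0;
}
```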

 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
Originally posted by: AleleVanuatu
Dual-core is a polished turd and enthusiasts are loving it. 99% of apps run on a single core, and are completely bound by single-path execution.

Look at Firefox or IE. Run a JS intensive page, and watch the rest of your browser windows get all laggy as well. It's all running in the same process, and that's crap right there.

It's all about raw clock-speed in the end, on Core 2 of course. So 4 GHz matters, dual-core or quad-core, not so much.

Software is going to take 10-15 years to work with this stuff. Regular software I mean, not stupid encoding crap that everyone loves to toss out as if everyone rips dvds to x264. LoL. Just LoL.

Wow, thanks for one of the most misguided and misinformed posts I have read today.

1. 99% of people run more than one app at once whether they want to or not. Take a look at your task manager one day if you don't believe me.

2. By "99% of all apps" I hope you really mean "99% of all games." Many applications are starting to take advantage of multiple cores and have been for years, not just encoders (which, btw, multiple cores are extremely helpful for). Also, I'm pretty sure that different tabs in Firefox and IE run on different threads.

3. It is not, never has been, and never will be "all about raw clock speed." A Core 2 Duo at 2.0GHz CREAMS the Pentium 4 at 3.6GHz. AMD's Athlon 64s also beat the P4s at similar clock speeds in pretty much every scenario. Architecture plays a large role in processing power, and to say a "3.0GHz" CPU is faster than a "2.0GHz" CPU just screams ignorance.

OP, if there is any post you ignore on this forum, please let it be the one I just quoted. The author clearly has no clue what he is talking about. Single-core CPUs are almost all based on 2-year-old architectures, while new dual and quad core CPUs are all based on the latest and greatest. You won't find a single-core CPU that can hold a candle to the current dual cores out there.
 

AleleVanuatu

Member
Aug 16, 2008
95
0
0
If you have a single threaded app, Windows spreads the load over both cores unless you set affinity (in task manager, setting the app to run on a specific CPU).

Are you on drugs? Have you ever even programmed before? I have 15 years of professional experience with C++, multi-threading, networking, and also Java and a bunch of scripting languages. About the only thing I haven't done to any serious degree is .NET -- and I've dabbled with it, just not been paid the big bucks to do it professionally. I'm not some amateur chimp sitting around screwing with some compilers and thinking he knows how to optimize quicksort on an embedded system, say the ARM.

Buddy, no single-threaded app will be spread over two cores, the single-threaded part implies that the damn thing is going to run in a single-path execution. That means that the process may be switched to one core, or to another, but will not run any faster, in fact it will run SLOWER, because of the context switches implied in swapping from one core to another. Do the math, and use some common sense and logic. Too many gimps dabbling with hardware with no clue how the software runs.

And to the bozo talking about WinXP spreading processes per core -- yes it will balance them out. But have you ever looked at what is being load-balanced? Tiny processes like indexing service, windows update, and anti-virus (biggest), and maybe your IM client. Honestly, this is hardly anything.

Let's compare to a real use-case scenario, say you're using Excel and sorting a table that has a ton of data, say 400MB worth. Now this is feasible, but will not be sped up with dual-core, or even quad-core. Why? Because the damn software can't parallelize the application. Same thing with a heavy JS page like say Digg's comment page, etc.

Use your heads and stop indulging the corporations that slap on another core and call it progress. Goddamn, the kids are eating up these 600 gram heatsinks, looking like motorcycle parts, in their boxes, hanging off their motherboards. Sure kid, add another LED and call it 'sick'. LoL. Just LoL.
 

AleleVanuatu

Member
Aug 16, 2008
95
0
0
Oh and to Cogman, you damn knave, twisting my words, I said that it's all about the raw clockspeed on the same architecture, namely Core 2. So a 4 GHz Core 2 vs a 3 GHz Core 2 is ALL about the clockspeed. You nut, this is the exact same reasoning everyone accepts about dual vs quad, where quad is not worth it, say the Q6600, because it's clocked so low. Jesus, Sweet Mother Mary!
 

AleleVanuatu

Member
Aug 16, 2008
95
0
0
And, being in the professional software industry for 15+ years, I can tell you that you're dreaming if you think that parallelizing algorithms are suddenly going to appear, be implemented, or hell, be invented and debugged within the next 5 years. It's going to take 15 years, at least. Sure, you'll get some simple parallelizable operations like encoding, hell, even some game logic, but apart from that? LoL, you're dreaming if you think everything can be split into parallel operations. Look into MPL and see how those scientific programmers have been trying this stuff for years, and how the threading bugs are biting them in the ass on a daily basis. And oh, Erlang? Yes, it's natively parallelizable, but look into its requirements and deficiencies. LoL. Just LoL.

Throwing some water on the fire hype. That's my job, Alele Vanuatu is out.
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
Originally posted by: AleleVanuatu
And, being in the professional software industry for 15+ years, I can tell you that you're dreaming if you think that parallelizing algorithms are suddenly going to appear, be implemented, or hell, be invented and debugged within the next 5 years. It's going to take 15 years, at least. Sure, you'll get some simple parallelizable operations like encoding, hell, even some game logic, but apart from that? LoL, you're dreaming if you think everything can be split into parallel operations. Look into MPL and see how those scientific programmers have been trying this stuff for years, and how the threading bugs are biting them in the ass on a daily basis. And oh, Erlang? Yes, it's natively parallelizable, but look into its requirements and deficiencies. LoL. Just LoL.

Throwing some water on the fire hype. That's my job, Alele Vanuatu is out.

What I took from your post: "I'm old and I've programmed; that makes me a computer engineering expert."

You would be pretty stupid to think that parallelism is going to go away soon, and even dumber to believe that programmers won't adapt.

How long has the dual core processor been around and used in the consumer market? About 4-5 years now. Do you really think programmers aren't going to be focusing on ways to utilize more cores?

And what exactly do you consider "the other apps"? Most office programs can be split into threads pretty easily (or are too simple to even make a difference one way or another). We've already covered encoding. And then there are games, which even you admit are becoming more parallel in their programming.

Then you cite "scientific applications," and that just makes me laugh. Gee, what the heck are supercomputers used for? Encoding? Yeah, dream on. I don't know of a single supercomputer in operation that isn't used for... guess what... scientific applications (OK, I lie, some are used for video rendering).

You consider yourself a programmer, yet you refuse to accept change. You probably won't have a job for long. Though I guess at one time OOP was considered too expensive and a waste of time, and those guys still have a job maintaining their COBOL (for now at least).

(10 sec. before the "How dare you question my auth-or-itay" post. Sorry, after doing a fair amount of programming myself, not 15 years' worth, I find it hard to respect someone purely on the basis that they can write a C++ program.)
 

tallman45

Golden Member
May 27, 2003
1,463
0
0
Originally posted by: AleleVanuatu
And to the bozo talking about WinXP spreading processes per core -- yes it will balance them out.


News flash: spreading processes across dual cores is load balancing.


Simply put: start a virus scan (easy, everyone does it) on a single-core system, then go surf the web. Everything halts; you're not getting far.

Do that on a dual core and presto, much better; you can actually do something.

Dual cores benefit every user every day, PERIOD; any other argument is incorrect.
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: AleleVanuatu
If you have a single threaded app, Windows spreads the load over both cores unless you set affinity (in task manager, setting the app to run on a specific CPU).

Are you on drugs? Have you ever even programmed before? I have 15 years of professional experience with C++, multi-threading, networking, and also Java and a bunch of scripting languages. About the only thing I haven't done to any serious degree is .NET -- and I've dabbled with it, just not been paid the big bucks to do it professionally. I'm not some amateur chimp sitting around screwing with some compilers and thinking he knows how to optimize quicksort on an embedded system, say the ARM.

Buddy, no single-threaded app will be spread over two cores, the single-threaded part implies that the damn thing is going to run in a single-path execution. That means that the process may be switched to one core, or to another, but will not run any faster, in fact it will run SLOWER, because of the context switches implied in swapping from one core to another. Do the math, and use some common sense and logic. Too many gimps dabbling with hardware with no clue how the software runs.

And to the bozo talking about WinXP spreading processes per core -- yes it will balance them out. But have you ever looked at what is being load-balanced? Tiny processes like indexing service, windows update, and anti-virus (biggest), and maybe your IM client. Honestly, this is hardly anything.

Let's compare to a real use-case scenario, say you're using Excel and sorting a table that has a ton of data, say 400MB worth. Now this is feasible, but will not be sped up with dual-core, or even quad-core. Why? Because the damn software can't parallelize the application. Same thing with a heavy JS page like say Digg's comment page, etc.

Use your heads and stop indulging the corporations that slap on another core and call it progress. Goddamn, the kids are eating up these 600 gram heatsinks, looking like motorcycle parts, in their boxes, hanging off their motherboards. Sure kid, add another LED and call it 'sick'. LoL. Just LoL.

If I'm so wrong, then explain this.

Here is Prime95 running 1 thread, taking up 50% of my dual-core CPU. This first image is without modifying anything, affinity is not set to either core:

http://www.imagebucket.net/buc...00&img=No_affinity.jpg

You can see that, although 50% of the CPU is being utilized (1 full core), neither core is utilized 100%; the load is roughly spread out between the two cores.

http://www.imagebucket.net/buc...mg=affinity_Core_1.jpg

Here affinity is set to core 1; Prime runs entirely on Core 1, only random background processes run on Core 0.

No, I'm not a software programmer, but that does not mean that I am wrong. Windows does not by some magic create two threads out of a single thread with some kind of "reverse HyperThreading," but it spreads the load over various processing cores. That is why, unless the CPU is utilized 100% or affinity is set, no single core ever reaches 100% utilization. Threads are not tied to one CPU in Windows.

As for your rambling that multithreading is something far off in the software industry or not important, first of all I don't know where you are getting the idea that it is not common already. Virtually every demanding application that I run runs across 2 or more threads. Just about any modern game takes advantage of two cores, and a few more than two (Lost Planet, FSX, Supreme Commander, Assassin's Creed, UT3). Any kind of rendering takes advantage of 2, 4, 8, or more cores. More and more software is becoming multithreaded. So the reality is, yes programs are going multithreaded.

As for programs going multithreaded being necessary, it certainly is. The free lunch isn't over, but it isn't what it used to be. Single-threaded performance isn't going to increase by 2x every year or so anymore. Most gains will come from more processing cores. That is simply the way hardware is going, and software developers are going to have to program for it whether they like it or not. Software development is driven by hardware development, not the other way around. Unless you have a better idea of how Intel/AMD can continue to dramatically increase single-threaded performance without astronomical increases in power consumption, then you shouldn't be criticizing what they are doing either. You can put two cores on the die and increase performance by 80-90%, or you can increase the die size of a single core by 2x and increase performance by maybe 40%. I think hardware designers have made the right choice.
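One way to watch the behaviour described above without screenshots is to have a busy single thread report which core it is currently on; a small sketch assuming the Win32 GetCurrentProcessorNumber call (Vista or later):

```
// Sketch: watch a single busy thread migrate between cores when no
// affinity is set. GetCurrentProcessorNumber needs Vista or later.
#include <windows.h>
#include <cstdio>

int main() {
    DWORD last = (DWORD)-1;
    for (long long i = 0; i < 2000000000LL; ++i) {
        if ((i & 0xFFFFF) == 0) {                 // sample occasionally
            DWORD cpu = GetCurrentProcessorNumber();
            if (cpu != last) {                    // report only on migration
                std::printf("iteration %lld now on core %lu\n", i, cpu);
                last = cpu;
            }
        }
    }
    return 0;
}
```

With no affinity set it will typically report a handful of migrations over its run; pin it to one core and the reports stop, without the loop itself getting any faster.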
 

nerp

Diamond Member
Dec 31, 2005
9,865
105
106
Yeh, uhh, anyone looking at perfmon with a dual core should know by now that single threaded apps get spread across cores etc. Just get a quad and watch how 25 percent of each core is used when running one instance of prime. :)

I know it's hard to accept, but it's true. :)
 

AleleVanuatu

Member
Aug 16, 2008
95
0
0
Holy crap I think you guys must be kids. Guaranteed. LoL. Let's take this hardcore insane assertion and rip it apart.

Yeh, uhh, anyone looking at perfmon with a dual core should know by now that single threaded apps get spread across cores etc. Just get a quad and watch how 25 percent of each core is used when running one instance of prime.

Buddy, buddy buddy. It's called a context switch.

No, I'm not a software programmer, but that does not mean that I am wrong. Windows does not by some magic create two threads out of a single thread with some kind of "reverse HyperThreading," but it spreads the load over various processing cores. That is why, unless the CPU is utilized 100% or affinity is set, no single core ever reaches 100% utilization. Threads are not tied to one CPU in Windows.

Again!

Guys, you gotta get with the program. What's happening here is really simple ==> a process is started with one single-path execution. That execution is divided into time slices and distributed over the cores, but IS not load-balanced. What that means is that for time slice 1 it runs on core0, for time slice 2 it runs on core1, flipping back and forth like that. But my friend, please understand, this DOES NOT SPEED IT UP -- it's still single-path execution! In fact, when you set the affinity to use only one single core, you will notice it runs FASTER!

Why?

Because of fewer context switches. Try it :) Read about it. Check the amount of time Prime95 takes to generate 1MB of results. Jesus H Christ of Latter-Day Saints. Oh Gilgamesh, Oh FSM!!!!

Cogman, Cogman, Cogman.

Office apps are too simple to make a difference if they run slow or fast? Cmon buddy! Sort a 400MB excel table, and tell me, does it make a difference? Granted this is nuts, but real users, in the real world, DO THIS!

Yes, again, I said, game logic sure, easier to parallelize. EAX in software on one core, Enemy AI on another, Path finding on another, yada YADA YADA.

This is not most apps. This is not even close to 10% of algorithms. You guys need a computer science lesson!

Read about parallelizing quicksort. Please, thank you!!!!

LoL. You guys are fun at least. LoL.
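For anyone who does go read about parallelizing quicksort, the usual textbook approach looks roughly like the sketch below (C++11 std::async assumed; the depth cutoff is illustrative): recurse on the two partitions in parallel down to a cutoff, then fall back to plain serial recursion.

```
// Minimal sketch of a task-parallel quicksort: sort the two halves of each
// partition on separate cores down to a depth cutoff, then go serial.
#include <algorithm>
#include <future>
#include <vector>

void quicksort(std::vector<int>& v, int lo, int hi, int depth) {
    if (hi - lo < 2) return;
    int pivot = v[lo + (hi - lo) / 2];
    int i = lo, j = hi - 1;
    while (i <= j) {                       // classic in-place partition
        while (v[i] < pivot) ++i;
        while (v[j] > pivot) --j;
        if (i <= j) std::swap(v[i++], v[j--]);
    }
    if (depth > 0) {
        // Sort the left part on another core while this thread does the right.
        auto left = std::async(std::launch::async,
                               quicksort, std::ref(v), lo, j + 1, depth - 1);
        quicksort(v, i, hi, depth - 1);
        left.get();
    } else {
        quicksort(v, lo, j + 1, 0);
        quicksort(v, i, hi, 0);
    }
}
```

Calling quicksort(v, 0, (int)v.size(), 1) keeps it to roughly one task per core of a dual; the cutoff is there so tiny sub-ranges don't each spawn their own thread.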


 

AleleVanuatu

Member
Aug 16, 2008
95
0
0
Yeh, uhh, anyone looking at perfmon with a dual core should know by now that single threaded apps get spread across cores etc. Just get a quad and watch how 25 percent of each core is used when running one instance of prime.

ONE MORE TIME! Run task manager and notice that it's not 25% of each core, but one full core at 100% load! LoL!
 

AleleVanuatu

Member
Aug 16, 2008
95
0
0
Also, please GUYS, remember that this is going to take 15 years to see any real benefits! Also, just look at games, your pet software, and read developer interviews where they ADMIT that they can't parallelize beyond quad-core, because they will have to split individual algorithms across cores -- requiring something like MPL! LoL!

We need more clockspeed, more more more! Not just cores! At least for the next 15 years to really be able to see differences beyond crappy benches like encoding, winrar 3.7 (threading on), ETC! We need real deal not synthetics!
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: AleleVanuatu
Also, please GUYS, remember that this is going to take 15 years to see any real benefits! Also, just look at games, your pet software, and read developer interviews where they ADMIT that they can't parallelize beyond quad-core, because they will have to split individual algorithms across cores -- requiring something like MPL! LoL!

We need more clockspeed, more more more! Not just cores! At least for the next 15 years to really be able to see differences beyond crappy benches like encoding, winrar 3.7 (threading on), ETC! We need real deal not synthetics!

I said this earlier yet you gave no response; if you have found a way to get CPUs to run at 10, 20, 30GHz and "more more more," then I'm sure Intel and AMD would love to hear about it. If it were possible to simply increase frequency to no end, then Intel would still be using single-core Pentium 4s, shrunk down to 45nm with an extremely small die, and running at 20GHz. It would be a lot cheaper to simply keep increasing the core multiplier, along with increasing bandwidth when necessary, than to spend immense die area on multiple processing cores. What you propose simply is not possible; you have to get that through your head. Improvements in single-threaded performance per clock from now on will be very small, and clock speed is going to inch up, not increase significantly.

Software developers are not idiots, they realize this as well. Coding for multiple threads will be the only way to convincingly increase performance from now on. And I repeat once again, it is not taking 15 years to do anything. Most applications that require more performance are multithreaded already.

Guys, you gotta get with the program. What's happening here is really simple ==> a process is started with one single-path execution. That execution is divided into time slices and distributed over the cores, but IS not load-balanced. What that means is that for time slice 1 it runs on core0, for time slice 2 it runs on core1, flipping back and forth like that. But my friend, please understand, this DOES NOT SPEED IT UP -- it's still single-path execution! In fact, when you set the affinity to use only one single core, you will notice it runs FASTER!

When did I say it ran faster? It does not and I never suggested that. What Windows does will not increase performance; there is no mythical "reverse HT" going on here.

Office apps are too simple to make a difference if they run slow or fast? Cmon buddy! Sort a 400MB excel table, and tell me, does it make a difference? Granted this is nuts, but real users, in the real world, DO THIS!

Then you'll be glad to know that Excel 2007 is highly multithreaded.

http://techgage.com/article/in..._2_quad_q9450_266ghz/8

And for someone that is claiming some kind of age superiority, you use the expression "LoL" far too often.

 

rogue1979

Diamond Member
Mar 14, 2001
3,062
0
0
Originally posted by: AleleVanuatu
Holy crap I think you guys must be kids. Guaranteed. LoL. Let's take this hardcore insane assertion and rip it apart.

Yeh, uhh, anyone looking at perfmon with a dual core should know by now that single threaded apps get spread across cores etc. Just get a quad and watch how 25 percent of each core is used when running one instance of prime.

Buddy, buddy buddy. It's called a context switch.

No, I'm not a software programmer, but that does not mean that I am wrong. Windows does not by some magic create two threads out of a single thread with some kind of "reverse HyperThreading," but it spreads the load over various processing cores. That is why, unless the CPU is utilized 100% or affinity is set, no single core ever reaches 100% utilization. Threads are not tied to one CPU in Windows.

Again!

Guys, you gotta get with the program. What's happening here is really simple ==> a process is started with one single-path execution. That execution is divided into time slices and distributed over the cores, but IS not load-balanced. What that means is that for time slice 1 it runs on core0, for time slice 2 it runs on core1, flipping back and forth like that. But my friend, please understand, this DOES NOT SPEED IT UP -- it's still single-path execution! In fact, when you set the affinity to use only one single core, you will notice it runs FASTER!

Why?

Because of fewer context switches. Try it :) Read about it. Check the amount of time Prime95 takes to generate 1MB of results. Jesus H Christ of Latter-Day Saints. Oh Gilgamesh, Oh FSM!!!!

Cogman, Cogman, Cogman.

Office apps are too simple to make a difference if they run slow or fast? Cmon buddy! Sort a 400MB excel table, and tell me, does it make a difference? Granted this is nuts, but real users, in the real world, DO THIS!

Yes, again, I said, game logic sure, easier to parallelize. EAX in software on one core, Enemy AI on another, Path finding on another, yada YADA YADA.

This is not most apps. This is not even close to 10% of algorithms. You guys need a computer science lesson!

Read about parallelizing quicksort. Please, thank you!!!!

LoL. You guys are fun at least. LoL.

A dual core gives significant benefit to today's average desktop user, period!

Even though there is still software out there that can't use a dual core, the operating system and minor multi-tasking can and certainly do benefit from more than one core.

In a scientific experiment you can probably get most programs to run at the same speed on a single core as on a dual core. But when you factor in an antivirus program running, IM, browsing web pages, and listening to music at the same time, that single core is not going to run that single-threaded application as fast.

I think most of us expect our computers to multi-task smoothly and efficiently, not like a 486 where you could literally only do one thing at a time.
 

SunSamurai

Diamond Member
Jan 16, 2005
3,914
0
0
Originally posted by: AleleVanuatu
Also, please GUYS, remember that this is going to take 15 years to see any real benefits! Also, just look at games, your pet software, and read developer interviews where they ADMIT that they can't parallelize beyond quad-core, because they will have to split individual algorithms across cores -- requiring something like MPL! LoL!

We need more clockspeed, more more more! Not just cores! At least for the next 15 years to really be able to see differences beyond crappy benches like encoding, winrar 3.7 (threading on), ETC! We need real deal not synthetics!

Clock speed is going to stay under 5GHz for 98% of desktop systems. Deal with it. The only things that are going to improve are how much gets done per cycle, and how many cycles a given process or operation takes to execute.

That, and better software design, better motherboard design, and better cache design.
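The classic way to put that in numbers: CPU time = instructions × cycles-per-instruction ÷ clock rate, so a lower-clocked chip with better work-per-cycle can win. A small sketch with made-up CPI values, purely for illustration:

```
// Illustration of the "work per cycle" point: classic CPU-time equation.
// time = instructions * CPI / clock_hz. The CPI values below are made up
// purely to show how a 2.0GHz chip with better IPC can beat a 3.6GHz one.
#include <cstdio>

int main() {
    const double instructions = 1e9;      // same program on both chips

    const double cpi_deep_pipeline = 1.8; // hypothetical high-CPI design
    const double cpi_wide_core     = 0.8; // hypothetical low-CPI design

    double t_fast_clock = instructions * cpi_deep_pipeline / 3.6e9;
    double t_slow_clock = instructions * cpi_wide_core     / 2.0e9;

    std::printf("3.6GHz, CPI %.1f: %.3f s\n", cpi_deep_pipeline, t_fast_clock);
    std::printf("2.0GHz, CPI %.1f: %.3f s\n", cpi_wide_core, t_slow_clock);
    return 0;
}
```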