
Question about quad cores and multi-tasking.

DrewSG3

Senior member
I'm now ripping and re-encoding my Blu-rays to my hard drive to use my PC as an HTPC. When I'm using Handbrake to encode, it maxes out all 4 cores at 100% usage, but that doesn't bog down my system. If I were to run another program to convert my ripped Blu-rays to MKV while encoding another video, would I risk messing up the encoding process?

I'm using a Q9300 btw..
 
Handbrake will use all the cores available to it, but this doesn't stop other programs from using CPU resources if they need any. If any system or user process needs to do something while an encode is in progress, one core will temporarily postpone its encoding duty, take care of that task, and then go back to encoding.
 
You can still use the system because the encode is probably low priority and the task you're doing only uses part of one core, so it's no big deal for the OS to steal time away from Handbrake. Trying to encode two things at once is silly: every additional task just adds overhead, so you are better off letting them encode one by one. There is no benefit; as said, Handbrake is already using all 4 cores, and the fact that you can multitask doesn't mean there's headroom to encode some more.
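The one-by-one approach described above can be scripted. Here's a minimal sketch of a sequential encode queue; the `HandBrakeCLI` file names and the `Normal` preset are made-up examples, not anything from this thread:

```python
import subprocess

def run_queue(jobs):
    """Run encode jobs one at a time; returns how many finished cleanly."""
    done = 0
    for cmd in jobs:
        # wait for each job to finish before starting the next one,
        # so every encode gets all four cores to itself
        if subprocess.run(cmd).returncode != 0:
            break
        done += 1
    return done

# hypothetical HandBrakeCLI invocations -- file names and preset are made up
jobs = [
    ["HandBrakeCLI", "-i", "movie1.mkv", "-o", "movie1.m4v", "--preset", "Normal"],
    ["HandBrakeCLI", "-i", "movie2.mkv", "-o", "movie2.m4v", "--preset", "Normal"],
]
# run_queue(jobs)  # uncomment once HandBrakeCLI is on your PATH
```

Running the jobs back to back avoids the cache and disk contention you'd get from two encoders fighting over the same four cores.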
 
I experience the same thing, which means that not all the resources are totally used. This proves again that the Core 2 architecture's front end is underutilized, which explains Intel's Nehalem approach: same execution resources, more threads. Nehalem's execution engine isn't any wider compared to Penryn/Conroe.
 
No, it just means you are multitasking 😛 Does a single core doing two things at once bog down?

With my Pentium M 2.70GHz (using a special adapter to fit it on my Asus mobo), I could barely use my computer for anything else while encoding, but it showed very strong gaming performance that would rival the Athlon X2 4800, because the Pentium M was designed to maximize its execution resource usage on a per-clock basis. With my Pentium 4 3.40GHz, that never happened, because the Netburst architecture was never able to fill its execution pipeline, which was long as hell.
 
This is all done by Windows; no matter what, you are slowing the first task down. Yes, you can give up some cycles for, say, Firefox and not notice anything on your encode, but you are taking time away from the task, not using hidden resources. 🙄
 
With my Pentium M 2.70GHz (using a special adapter to fit it on my Asus mobo), I could barely use my computer for anything else while encoding, but it showed very strong gaming performance that would rival the Athlon X2 4800, because the Pentium M was designed to maximize its execution resource usage on a per-clock basis. With my Pentium 4 3.40GHz, that never happened, because the Netburst architecture was never able to fill its execution pipeline, which was long as hell.

So you're telling me the Pentium M never reported itself as 100% utilized by the encoder?
When I encode with my X4 using Handbrake, I don't feel bogged down by regular office/app/browsing tasks, and the X4 isn't Netburst, last I checked. The Pentium M was just dog slow, which is another issue altogether. The only thing I do remember is that a single core would indeed get bogged down rather fast, but once you increase the core count it becomes much harder for the user to feel bogged down. The user only has a certain level of speed expectations for regular net use, and as chips get more cores and get faster, it gets easier to hold that level even under load, I guess. It's possible the old single cores were just so slow you could feel it that much more. It just takes a lot of abuse before the user experience starts to degrade now.
 
The only way to get a problem with that is if you set the process to real-time priority, since that means it has a higher priority than even input (e.g. the IO thread doesn't get any cycles; actually nothing in the user interface runs at real time) or Task Manager.
Needless to say, don't do that 😉

But otherwise you're fine, since the scheduler just gives the process a timeslice and other processes are still scheduled. Depending on the priority of the encoding process the impact may be larger or smaller; you can play with the process's priority if you want to.

But in either case the result will always be the same: you can't mess anything up. The process itself doesn't even notice the scheduling; it just gets more or fewer CPU cycles and therefore takes a longer or shorter time to complete its task.
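"Playing with the priorities" can be done from a script, too. A Unix-only sketch (Windows users would use Task Manager's priority menu or a tool like psutil instead); the child command here is just a placeholder, not HandBrake's own behavior:

```python
import os
import subprocess
import sys

def spawn_nice(cmd, niceness=10):
    """Launch cmd with its niceness raised (i.e. lower scheduling priority),
    so interactive programs win any contention for CPU time."""
    # preexec_fn runs in the child just before exec; raising niceness
    # is always permitted, unlike lowering it (which needs privileges)
    return subprocess.Popen(cmd, preexec_fn=lambda: os.nice(niceness))

# placeholder "encode" job -- substitute your real encoder command
proc = spawn_nice([sys.executable, "-c", "print('encoding...')"])
proc.wait()
print("exit code:", proc.returncode)
```

The job still runs to completion; it simply yields the CPU sooner when anything else wants a timeslice, which is exactly the "gets fewer cycles, takes longer" behavior described above.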
 
So you're telling me the Pentium M never reported itself as 100% utilized by the encoder?
When I encode with my X4 using Handbrake, I don't feel bogged down by regular office/app/browsing tasks, and the X4 isn't Netburst, last I checked. The Pentium M was just dog slow, which is another issue altogether. The only thing I do remember is that a single core would indeed get bogged down rather fast, but once you increase the core count it becomes much harder for the user to feel bogged down. The user only has a certain level of speed expectations for regular net use, and as chips get more cores and get faster, it gets easier to hold that level even under load, I guess. It's possible the old single cores were just so slow you could feel it that much more. It just takes a lot of abuse before the user experience starts to degrade now.

Pentium 4, not Pentium M. Reread my post. A Pentium M 2.0GHz is as fast as or faster than a Pentium 4 3.0GHz. Task Manager may report 100% CPU usage, but that doesn't mean all CPU resources are being utilized. Like I stated before, the Core 2 architecture's front end went underutilized most of the time, and yet you can see 100% CPU usage in heavily multi-threaded applications without feeling bogged down. Only Linpack can do that; it shows the best CPU usage scenario available, and there's no real consumer application that can be compared with Linpack.
 
Pentium 4, not Pentium M. Reread my post. A Pentium M 2.0GHz is as fast as or faster than a Pentium 4 3.0GHz. Task Manager may report 100% CPU usage, but that doesn't mean all CPU resources are being utilized. Like I stated before, the Core 2 architecture's front end went underutilized most of the time, and yet you can see 100% CPU usage in heavily multi-threaded applications without feeling bogged down. Only Linpack can do that; it shows the best CPU usage scenario available, and there's no real consumer application that can be compared with Linpack.

You are speculating too much. Task Manager's reporting doesn't have much to do with execution unit utilization. Most people likely haven't used single-core Core 2s because they are rare, while the Pentium M is a single core. The Pentium 4 was single core, but it had HT.

Not even efficient 3-wide CPUs have their front end utilized very well. I'm pretty sure that if there were a Pentium M with HT, it would be just as responsive.

You know, there's a way to test this. Do the same thing on a Core Duo. It's pretty similar to the Pentium M, but it has 2 cores. Load it up to 100% and see how responsive it is.
 
This is all done by Windows; no matter what, you are slowing the first task down. Yes, you can give up some cycles for, say, Firefox and not notice anything on your encode, but you are taking time away from the task, not using hidden resources. 🙄

^ THIS.

The operating system handles the scheduling of all threads on your machine, regardless of the CPU or how many programs you have running. Even while only running Handbrake, there are still hundreds of threads running in the background, and they all get a "time-share" slice of the CPU resources.
 
One way to make the most of the cores is to assign affinities for tasks like encoding. I developed that habit some time ago. For instance, assign 2 cores to one VM, 2 to another, and do other things on the main OS. Or assign 4 cores to encoding and play games with the other 2, etc. Normally games will not be smooth with a processor-heavy task like encoding in the background. By confining encoding or VMs to 4 cores, I can leave the other 2 cores free for games or other stuff. My multi-tasking skill has improved a lot. :biggrin:

This can be especially useful for applications that are not friendly with 3/6 cores or Hyper-Threading. If your encoder doesn't support queues, or if for whatever reason you feel the encoder isn't maximizing the CPU and want to run another, try experimenting with processor affinities.
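The affinity split described above can also be set programmatically. A Linux-only sketch (the half-and-half split is just an example of the 4-for-encoding/2-for-games idea); `os.sched_setaffinity` doesn't exist on Windows, where you'd use Task Manager's "Set affinity" dialog or psutil's `cpu_affinity()` instead:

```python
import os

def pin_to_cores(cores):
    """Restrict the current process (and anything it launches, e.g. an
    encoder started from here) to the given set of core IDs."""
    os.sched_setaffinity(0, cores)       # pid 0 means "this process"
    return os.sched_getaffinity(0)       # report the affinity actually set

# example: keep the first half of the cores, leave the rest free
all_cores = sorted(os.sched_getaffinity(0))
first_half = set(all_cores[: max(1, len(all_cores) // 2)])
print("pinned to cores:", pin_to_cores(first_half))
```

A child process inherits the parent's affinity mask, so launching the encoder from a pinned script confines it the same way the Task Manager right-click method does.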
 
You are speculating too much. Task Manager's reporting doesn't have much to do with execution unit utilization. Most people likely haven't used single-core Core 2s because they are rare, while the Pentium M is a single core. The Pentium 4 was single core, but it had HT.

Not even efficient 3-wide CPUs have their front end utilized very well. I'm pretty sure that if there were a Pentium M with HT, it would be just as responsive.

You know, there's a way to test this. Do the same thing on a Core Duo. It's pretty similar to the Pentium M, but it has 2 cores. Load it up to 100% and see how responsive it is.

Speculating? Do you have proof to prove me wrong? How can you claim that adding Hyper-Threading to the Pentium M would make it just as responsive? The Pentium M has a very short pipeline with a wide execution engine, designed to maximize IPC and hide cache latency. That means there are very few or no bubbles in the pipeline stages. Adding Hyper-Threading to an already efficient CPU without other modifications would simply lower its performance by a significant margin. The Pentium 4 even without Hyper-Threading was slow, and adding Hyper-Threading did very little due to its extremely long pipeline and very narrow execution engine.

With your logic, we might as well add Hyper-Threading to the forums, and for sure it will be just as responsive...
 
The Pentium M has a very short pipeline with a wide execution engine, designed to maximize IPC and hide cache latency. That means there are very few or no bubbles in the pipeline stages.

Really? We have nearly 15 years' worth of processors that share the common 3-issue width and out-of-order execution, from the Pentium Pro all the way to the Phenom II. The IPC difference between them is quite big. In fact, the Pentium M Dothan performed like the original Athlon 64.

The Phenom II is about 30% faster per clock than Pentium M and the Athlon 64.

Hyper-Threading allows concurrent execution of independent threads, which helps fill the execution units. The reason the Pentium M/Core Duo/Core 2 did not feature Hyper-Threading is that while it costs almost nothing physically to implement (duplication of register files and such), the validation process was complex. Even Netburst did not enable Hyper-Threading until 1-2 years (depending on Xeon or Pentium 4) after introduction.
 
I'm now ripping and re-encoding my Blu-rays to my hard drive to use my PC as an HTPC. When I'm using Handbrake to encode, it maxes out all 4 cores at 100% usage, but that doesn't bog down my system. If I were to run another program to convert my ripped Blu-rays to MKV while encoding another video, would I risk messing up the encoding process?

I'm using a Q9300 btw..

If your system is stable, no, you will not mess up the encoding process, unless you eject the Blu-ray disc while it's being read.

Work will get distributed as the OS determines priority between the programs.

The worst-case scenario, if your system is 100% rock solid stable, is longer encode times, because the OS now has to split up 2 jobs.

Really? We have nearly 15 years' worth of processors that share the common 3-issue width and out-of-order execution, from the Pentium Pro all the way to the Phenom II. The IPC difference between them is quite big. In fact, the Pentium M Dothan performed like the original Athlon 64.

The Phenom II is about 30% faster per clock than Pentium M and the Athlon 64.

Really? The P4-M Dothan was basically a P4-M Yonah without EM64T and single-cored.
I don't see how a Dothan, which is equal to a C2D, would lose to an A64, unless we look at multiple threads.
Then it's a simple matter of single core vs. multi-core on the Phenom IIs.
 
Really? We have nearly 15 years' worth of processors that share the common 3-issue width and out-of-order execution, from the Pentium Pro all the way to the Phenom II. The IPC difference between them is quite big. In fact, the Pentium M Dothan performed like the original Athlon 64.

But not Conroe.


http://www.anandtech.com/show/2594/3

Conroe was the first Intel processor to introduce this 4-issue front end. The processor could decode, rename and retire up to four micro-ops at the same time. Conroe’s width actually went under utilized a great deal of the time, something that Nehalem did address, but fundamentally there was no reason to go wider.

The Phenom II is about 30% faster per clock than Pentium M and the Athlon 64.

Hyper-Threading allows concurrent execution of independent threads, which helps fill the execution units. The reason the Pentium M/Core Duo/Core 2 did not feature Hyper-Threading is that while it costs almost nothing physically to implement (duplication of register files and such), the validation process was complex. Even Netburst did not enable Hyper-Threading until 1-2 years (depending on Xeon or Pentium 4) after introduction.

You are right about everything, but Conroe/Penryn would have required the optimizations that were incorporated in Nehalem to gain real benefits from Hyper-Threading; you can't just glue it onto anything and expect it to work flawlessly. Hyper-Threading means that threads share the execution resources and fight for them, which is why it's quite common to see applications that actually lose performance when Hyper-Threading is used. All Pentium 4s had Hyper-Threading implemented; it was just not enabled. Look at Anandtech's Northwood review.

If your system is stable, no, you will not mess up the encoding process, unless you eject the Blu-ray disc while it's being read.

Work will get distributed as the OS determines priority between the programs.

The worst-case scenario, if your system is 100% rock solid stable, is longer encode times, because the OS now has to split up 2 jobs.



Really? The P4-M Dothan was basically a P4-M Yonah without EM64T and single-cored.
I don't see how a Dothan, which is equal to a C2D, would lose to an A64, unless we look at multiple threads.
Then it's a simple matter of single core vs. multi-core on the Phenom IIs.

P4-M and P-M aren't the same. The Pentium 4-M is based on the Netburst architecture, while the latest Pentium M is based on Dothan.

?! Conroe is waaaaaaaaaay faster than Dothan. Dothan and Hammer were very, very close to each other clock for clock, if Anand's article is correct.

http://www.anandtech.com/show/1610

A bit wrong there; look at these reviews, which showed the real strength of the Pentium M when used with a good chipset and a dual-channel memory controller.

http://www.techpowerup.com/reviews/ASUS/CT-479/1.html

http://www.pcper.com/article.php?aid=133&type=expert&pid=1

http://www.legitreviews.com/article/181/5/

http://www.xbitlabs.com/articles/cpu/display/pentiumm-780_12.html#sect1

It performed almost identically to Yonah in single-threaded scenarios, and Yonah is faster than an Athlon 64. Conroe had an updated architecture which showed great benefits, especially in FPU, which was the Pentium M's weakness, but it wasn't waaaaaaaay faster than Yonah.

http://www.anandtech.com/show/1880/3

http://www.anandtech.com/show/1900/6
 
A bit wrong there; look at these reviews, which showed the real strength of the Pentium M when used with a good chipset and a dual-channel memory controller.
Not sure which part of my statement you're disputing. Conroe >> Dothan, or Hammer = Dothan?
 
It performed almost identically to Yonah in single-threaded scenarios, and Yonah is faster than an Athlon 64. Conroe had an updated architecture which showed great benefits, especially in FPU, which was the Pentium M's weakness, but it wasn't waaaaaaaay faster than Yonah.

Before I go to the main topic:

Core 2 was ~30% faster per clock than the Pentium M. Actually, even 20% is a lot, which is what it can do over Yonah on the desktop without the bottlenecks (FSB, hard drive). That number is something AMD is struggling to achieve even now, after Core 2. Doubling the caches brings a 5-7% increase in performance, so 20% equals 3.5 generations of cache doubling, and since that has diminishing returns, more like 4 or 5.

Hyper-Threading means that threads share the execution resources and fight for them, which is why it's quite common to see applications that actually lose performance when Hyper-Threading is used.

The Pentium 4 used to be quite horrible with Hyper-Threading, but the Core i7 isn't. Losses nowadays are quite rare, limited to cases where the program does static thread allocation rather than letting Task Manager do it.

Conroe’s width actually went under utilized a great deal of the time, something that Nehalem did address, but fundamentally there was no reason to go wider.

Sure, I'm not doubting that. But look again: if they had put Hyper-Threading on the Pentium M and the Athlon 64, there would have been a gain too. The top-performing 3-issue processor is the Phenom II. Do you think the potential for better IPC is exhausted? Plus, HT would have given responsiveness, which is hard to benchmark.

So that's why I say: if you want to prove your point, take a Core Duo and max it out like you did on the other CPUs. Because it's a dual core, it should be more responsive, since 100% in Task Manager doesn't equal the CPU being at 100%.

(Actually, we don't even need a Core Duo. Disable cores on a Nehalem down to 1 core, try it with Hyper-Threading enabled and disabled, and run tests on both.)
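That "load it up and see how responsive it is" experiment can even be rough-quantified instead of eyeballed. A hedged sketch (the 2-second load window and the 100k-iteration "foreground task" are arbitrary choices, not from this thread): peg every core with busy workers, then time a tiny piece of interactive-style work while they run.

```python
import multiprocessing as mp
import time

def busy(deadline):
    """Spin until the deadline -- stands in for an encoder pegging a core."""
    while time.monotonic() < deadline:
        pass

def interactive_latency():
    """Time one tiny unit of foreground work (a stand-in for UI response)."""
    t0 = time.monotonic()
    sum(range(100_000))
    return time.monotonic() - t0

if __name__ == "__main__":
    deadline = time.monotonic() + 2.0
    workers = [mp.Process(target=busy, args=(deadline,))
               for _ in range(mp.cpu_count())]
    for w in workers:
        w.start()
    samples = sorted(interactive_latency() for _ in range(50))
    for w in workers:
        w.join()
    print(f"median foreground latency under full load: {samples[25] * 1000:.3f} ms")
```

Comparing the median with HT on vs. off (or with different core counts) would put numbers on the "feels bogged down" argument rather than leaving it to impressions.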
 