Dual processor management: isolating a CPU for a certain process

Goosemaster

Lifer
Apr 10, 2001
48,775
3
81
Basically, I have a dual-CPU server that I use VMware on, and wanted to know if there was a way to automatically have all applications ignore one CPU. I currently use Task Manager to assign every task but my virtual machines to the first CPU, and wanted to see if there was an automated way to do this for instances when the machine needs to be rebooted, in case of power failure, etc.

XP Pro SP2

Thanks
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Doubtful, because it's largely pointless. The kernel scheduler will keep processes on one CPU as long as possible, so all you really end up doing by forcing their affinity is give the scheduler fewer options and probably starve some of those processes.

And hell, whenever I use VMware on XP, CPU usage is my least concern; XP seems to hit the disk a lot more than it should, and that's probably a bigger performance problem for you.
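For what it's worth, the pinning itself can be scripted rather than done by hand in Task Manager. There's no such hook in XP's standard tools, but on Linux (where this thread ends up heading) a process can pin itself in one line; a minimal sketch using Python's `os.sched_setaffinity`, Linux-only:

```python
import os

# Linux-only sketch: pin the current process (pid 0 means "self")
# to CPU 0 -- the scriptable equivalent of Task Manager's "Set Affinity".
os.sched_setaffinity(0, {0})

# The scheduler will now only run this process (and any children it
# spawns, which inherit the mask) on CPU 0.
print(sorted(os.sched_getaffinity(0)))
```

Children inherit the mask, so running a launcher script like this pins everything it starts, which is the "automated on reboot" behavior being asked about.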
 

Goosemaster

Originally posted by: Nothinman
Doubtful, because it's largely pointless. The kernel scheduler will keep processes on one CPU as long as possible, so all you really end up doing by forcing their affinity is give the scheduler fewer options and probably starve some of those processes.

And hell, whenever I use VMware on XP, CPU usage is my least concern; XP seems to hit the disk a lot more than it should, and that's probably a bigger performance problem for you.


I agree....it loves memory:D


Originally posted by: xtknight
You can hard code it into each program's exe with imagecfg.

http://www.robpol86.com/Pages/imagecfg.php


Why thank you.:D

Quote from an MSKB article:

IntFiltr improves performance and scaling on large computers that contain multiple processors by partitioning and giving tasks affinity to particular processors. When properly configured, partitioning permits the caches on the processors to be used more effectively, thereby improving performance and scaling.

Windows 2000 contains many features that permit threads and processes to have affinity to particular processors. IntFiltr uses Plug and Play features of Windows 2000 that permit device interrupts to have affinity to particular processors. You may configure IntFiltr to connect the filter driver to devices with interrupts and to set the affinity mask for the devices that have the filter driver associated with them.

IntFiltr permits an administrator to direct a device interrupt to a specific set of processors. On a Windows 2000-based multiprocessor computer, the interrupt controller directs a device interrupt to any available processor (which means the one with the lowest interrupt request priority). By using IntFiltr, an administrator can override this default behavior by configuring any set of processors as the target for the device interrupt. Typically, this would involve choosing a single processor as the target.

Interrupt filtering can affect the overall performance of your computer, but no single algorithm produces the best performance under all possible workloads. This is why Windows 2000, by default, directs interrupts to any available processor. An administrator, however, may be interested in partitioning interrupts for certain devices to particular processors, or in experimenting with various configurations to find the optimal one. Note that this tool permits any configuration, even ones that are not optimal.
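On Linux, the current spread of device interrupts across CPUs (the distribution that a tool like IntFiltr lets an administrator override on Windows 2000) can be inspected from `/proc`; a minimal sketch, assuming a Linux box:

```python
# Linux-only sketch: /proc/interrupts lists each interrupt source with
# one count column per CPU, showing how device interrupts are being
# distributed across processors.
with open("/proc/interrupts") as f:
    lines = f.read().splitlines()

print(lines[0])            # header row: "CPU0  CPU1  ..."
for line in lines[1:6]:    # a few interrupt sources and their per-CPU counts
    print(line)
```

The Linux counterpart to IntFiltr's override is writing a CPU mask to `/proc/irq/<n>/smp_affinity`, which requires root.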


So, Nothin' man, if I were to implement such a thing, would it still suffer from the limitations that you described?

 

Nothinman

I agree....it loves memory

Indeed it does, but I still think there's other I/O related performance issues on Windows. I've always noticed more disk activity when running VMs on Windows than I do on Linux.

So, Nothin' man, if I were to implement such a thing, would it still suffer from the limitations that you described?

Well, the piece of the article you pasted is about interrupt affinity, not process affinity, so the effects would be different and most likely less profound.

But with process affinity, if you tie a process to a certain CPU you will increase cache hits, since the process will never bounce between CPUs. On the other hand, if there are other things running on that CPU, you might increase the process's latency, since there's no way for the OS to schedule it onto another CPU if the one you've tied it to is busy. It's a tradeoff, and generally the effects are so small that they're not worth the trouble of messing with.

Why are you doing this anyway? Did you really notice a difference in performance after you restricted all of your processes to certain CPUs?
 

Goosemaster

Originally posted by: Nothinman


Why are you doing this anyway? Did you really notice a difference in performance after you restricted all of your processes to certain CPUs?

My intention was to isolate video encoding and other non-critical tasks from network service VMs.

The only issue I have with Linux is how to convert the 1TB+ of data on my disks, most of which are at 90% capacity. I was thinking about XFS, but Reiser4 looks very nice.
 

Nothinman

My intention was to isolate video encoding and other non-critical tasks from network service VMs.

But I really doubt it matters. Have you noticed any difference in encoding times or VM response times since you did your affinity juggle?

The only issue I have with Linux is how to convert the 1TB+ of data on my disks, most of which are at 90% capacity. I was thinking about XFS, but Reiser4 looks very nice.

That is a pretty big issue, and there's no easy way to handle a migration like that besides backup/restore. And Hans Reiser has always had good ideas, but the implementations have been less than stellar IMO, and I still believe that as of reiser4. Hans doesn't seem to know how to work with the other kernel devs, so his projects tend to take longer to get integrated and end up not being quite like the others; he also tends to get bored quickly and let projects stagnate. As soon as he started development on reiser4, he pretty much dropped reiser3 on the floor, so SuSE had to step up and maintain it. I would recommend pretty much any other filesystem over reiserfs, but XFS is really good for huge filesystems and huge files.
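The backup/restore route can be as simple as archiving the data, re-making the filesystem, and unpacking. A minimal sketch of the archive-and-restore half using Python's `tarfile`; the directories here are temporary stand-ins for the real source disk and the new XFS mount, and the actual `mkfs.xfs`/`mount` step between the two halves is omitted:

```python
import os
import tarfile
import tempfile

# Sketch of a backup/restore migration: archive the old tree, then
# unpack it onto the freshly made filesystem. The directories below are
# temporary stand-ins for the real source disk and the new XFS mount.
src = tempfile.mkdtemp()   # stands in for the old, nearly full disk
dst = tempfile.mkdtemp()   # stands in for the new XFS mount point
with open(os.path.join(src, "video.avi"), "wb") as f:
    f.write(b"encoded frames")

archive = os.path.join(tempfile.gettempdir(), "backup.tar.gz")
with tarfile.open(archive, "w:gz") as tar:   # back up the tree
    tar.add(src, arcname=".")

# ...mkfs.xfs and mount would happen here, then restore:
with tarfile.open(archive) as tar:
    tar.extractall(dst)

print(open(os.path.join(dst, "video.avi"), "rb").read())
```

With disks at 90% capacity the archive has to land on a spare disk or another machine, since there's no room to stage it locally.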
 

Goosemaster

Originally posted by: Nothinman
My intention was to isolate video encoding and other non-critical tasks from network service VMs.

But I really doubt it matters. Have you noticed any difference in encoding times or VM response times since you did your affinity juggle?

The only issue I have with Linux is how to convert the 1TB+ of data on my disks, most of which are at 90% capacity. I was thinking about XFS, but Reiser4 looks very nice.

That is a pretty big issue, and there's no easy way to handle a migration like that besides backup/restore. And Hans Reiser has always had good ideas, but the implementations have been less than stellar IMO, and I still believe that as of reiser4. Hans doesn't seem to know how to work with the other kernel devs, so his projects tend to take longer to get integrated and end up not being quite like the others; he also tends to get bored quickly and let projects stagnate. As soon as he started development on reiser4, he pretty much dropped reiser3 on the floor, so SuSE had to step up and maintain it. I would recommend pretty much any other filesystem over reiserfs, but XFS is really good for huge filesystems and huge files.

I have not seen any dramatic improvements except in terms of local system responsiveness, which isn't much of an issue.

Thanks for the advice, btw. I figured I didn't need journaling slowing me down (ext3 is usually the slowest in benchmarks of popular FSes), and XFS looked like the right fit. Basically, I am indeed resorting to backups and restores...

PITA though...:p
 

Nothinman

Thanks for the advice, btw. I figured I didn't need journaling slowing me down (ext3 is usually the slowest in benchmarks of popular FSes), and XFS looked like the right fit. Basically, I am indeed resorting to backups and restores...

All of them do journaling, but yea ext3 generally isn't the fastest filesystem out there.

And if you actually need that data you had better have a backup plan in place either way. =)
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
I like ext3. It seems more resilient to me. But XFS is nice also. Just make sure you have a UPS for a computer with large amounts of data.


As for VM performance between Linux and Windows, I think it may have to do with how Linux is able to schedule I/O on behalf of processes. Maybe it has to do with kiobuf for 'raw I/O' access, or some sort of direct-to-DMA thing. Not sure. But I think I've read that Linux is more flexible in this manner than Windows. Or maybe it's just that the Windows kernel is much more massive than Linux, with all sorts of abstractions and stuff, and it's more difficult to work around. Not sure.
 

Goosemaster

Originally posted by: drag
I like ext3. It seems more resilient to me. But XFS is nice also. Just make sure you have a UPS for a computer with large amounts of data.


As for VM performance between Linux and Windows, I think it may have to do with how Linux is able to schedule I/O on behalf of processes. Maybe it has to do with kiobuf for 'raw I/O' access, or some sort of direct-to-DMA thing. Not sure. But I think I've read that Linux is more flexible in this manner than Windows. Or maybe it's just that the Windows kernel is much more massive than Linux, with all sorts of abstractions and stuff, and it's more difficult to work around. Not sure.

Thanks for the info.

Honestly, I have only used XFS once, and didn't notice much of a difference in regular desktop usage. I've always used ext2 or ext3.

It looks like I will be shifting the server over to linux....
 

drag

Ya, I don't have any experience comparing Windows vs Linux with VMware or anything. I've done Xen on Linux, and that was unnoticeably slower. I've just heard that VMs typically perform better on Linux. No idea if that is a fact or not.
 

drag

If it's a 'real' business server and you want to run real loads on the VM then maybe it's best to look at ESX stuff.

Or 'Infrastructure', or whatever the hell they call it.
See the comparison of that vs Server:
http://www.vmware.com/products/server_comp.html

They do a similar thing, but are much different products. The 'Infrastructure' one utilizes a hypervisor, like how Xen/Linux works (although with Xen/Linux, Xen relies on the Linux host to provide I/O emulation, which is faster than it sounds).

A hypervisor is a program, or a kernel, or whatnot, that sits at the lowest level of the computer and 'lifts' the guest operating systems into a virtual machine environment.

This is different from how Server works, where you have to run it on top of an operating system. With the hypervisor, the VMware stuff controls everything: CPU, I/O, and so on. This way you don't have to worry about things like Linux vs Windows having scheduling conflicts, or conflicts in how the I/O access patterns work. The hypervisor manages and schedules everything, and with VMware the I/O happens very close to bare metal. It manages memory and CPU, and it'll handle SAN access and all sorts of hippy loving fun stuff.

Also, if you have multiple servers with a Xen/Linux or VMware hypervisor-based VM, then you can do cracking stuff like 'hot migration' from one host to another, as long as you have shared storage. This allows you to do things like load-balance operating systems between computers, or take hardware offline for repairs and upgrades WITHOUT ANY DOWNTIME on the OS. Well, there is downtime, but it's usually measured in milliseconds.

There was a demo where people were playing Quake 3 on a LAN, with the Quake server hosted on Linux in a Xen/Linux environment. As the 20 or so people were playing, they migrated the operating system between computers and the clients didn't even notice the change. At my local LUG, a fellow who deploys Xen professionally did a presentation. He has done stuff like migrate entire datacenters from one region of the country to another to avoid hurricanes; I think it numbered 500 systems or so.

So hypervisor stuff is GREAT. Wonderful stuff. Being able to divorce the hardware from the software provides a huge benefit. (Although, again, Xen/Linux uses the Linux kernel to help in abstracting hardware and device I/O. Still fast though; the kernel is hacked to provide special features for this sort of thing.)

That, and having a hypervisor-based system provides better performance than hosting your VMs in an operating system (like Microsoft Virtual Server or VMware Workstation and such). Also, I've been told that VMware's management software is nice.
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
Originally posted by: Nothinman
Doubtful, because it's largely pointless. The kernel scheduler will keep processes on one CPU as long as possible, so all you really end up doing by forcing their affinity is give the scheduler fewer options and probably starve some of those processes.

And hell, whenever I use VMware on XP, CPU usage is my least concern; XP seems to hit the disk a lot more than it should, and that's probably a bigger performance problem for you.

Not quite true. When a process blocks (on disk for instance), other processes can be assigned the CPU it was on, and the system will resume the process on the other CPU.

The biggest problem I've had with certain processes that tend to get switched continuously (even though they are 95% CPU-bound) is that, since they are only running each CPU at 50%, Cool'n'Quiet will slow them down. If those types of processes are given an affinity, Cool'n'Quiet leaves the CPU at full speed (until said processes become idle for a second).
 

Nothinman

Not quite true. When a process blocks (on disk for instance), other processes can be assigned the CPU it was on, and the system will resume the process on the other CPU.

Maybe. Or maybe the other processes will finish before the I/O completes and the first process will resume on the same CPU, you can't guess what'll happen either way.

The biggest problem I've had with certain processes that tend to get switched continuously (even though they are 95% CPU-bound) is that, since they are only running each CPU at 50%, Cool'n'Quiet will slow them down. If those types of processes are given an affinity, Cool'n'Quiet leaves the CPU at full speed (until said processes become idle for a second).

If processes bounce between CPUs for no reason, then it's a bug in the NT CPU scheduler, and setting the affinity of all of your processes is just a really bad hack to work around it.
 

glugglug

Originally posted by: Nothinman
If processes bounce between CPUs for no reason, then it's a bug in the NT CPU scheduler, and setting the affinity of all of your processes is just a really bad hack to work around it.

Only certain processes do this. Most notably video encoding applications. Must have something to do with how windows works with video codecs.
 

drag

Or application designers thinking they know better about optimizations than Microsoft does.

Bouncing between processors is _very_ bad: each time, you lose your cache. It's even worse on a more modern NUMA-like machine; say you have a 'server' dual-socket motherboard where each socket has its own memory bank, and each CPU you have installed is dual-core.

(Especially with multimedia! Typically with encoding stuff you're using a small set of instructions on a large amount of streaming data. I expect that codec programmers do their damnedest to make sure their code fits into the CPU's cache as much as possible.)


Processor affinity makes sense under certain circumstances, but I don't think it has to do with maximizing efficiency.

Usually it has to do with 'quality of service' style issues. Say you have a scientific instrument that your computer has to monitor, and timing is important. Then you can 'pin' the process whose job is to record data from that instrument to one of the CPUs.

That way, if there is a high load on your system, it will be on the 'other' CPU, and the dedicated one will be mostly idle, so it can react more quickly and reliably. So it makes things more realtime-like, in that the latency of the computer's response is as low as possible.

I am sure that there are other examples.. but that is the one I know of.

In a VM environment it also makes sense, so you can do things like give an OS a CPU, a disk, and a set amount of RAM, so that if you have users using up time on one hosted operating system, it won't affect the quality or performance of the other VMs.
 

Smilin

Diamond Member
Mar 4, 2002
7,357
0
0
Originally posted by: glugglug
Originally posted by: Nothinman
If processes bounce between CPUs for no reason, then it's a bug in the NT CPU scheduler, and setting the affinity of all of your processes is just a really bad hack to work around it.

Only certain processes do this. Most notably video encoding applications. Must have something to do with how windows works with video codecs.

Don't confuse process with thread.

If you have a process using 50% of each CPU that doesn't necessarily mean the process is bouncing back and forth between CPUs. It could simply be quite efficiently running on both at the same time.

 

Smilin

Originally posted by: drag
This is different from how Server works, where you have to run it on top of an operating system. With the hypervisor, the VMware stuff controls everything: CPU, I/O, and so on. This way you don't have to worry about things like Linux vs Windows having scheduling conflicts, or conflicts in how the I/O access patterns work. The hypervisor manages and schedules everything, and with VMware the I/O happens very close to bare metal. It manages memory and CPU, and it'll handle SAN access and all sorts of hippy loving fun stuff.

MS Virtual Server only does this with trusted (read: some Microsoft) guest OSes with VM additions installed. Guest kernel calls get scheduled and executed directly by the host kernel instead of by the emulated guest kernel.

This is most obvious during guest OS setup where VM additions are not present - setup runs like crap and takes forever. :(


 

glugglug

Originally posted by: Smilin
Originally posted by: glugglug
Originally posted by: Nothinman
If processes bounce between CPUs for no reason, then it's a bug in the NT CPU scheduler, and setting the affinity of all of your processes is just a really bad hack to work around it.

Only certain processes do this. Most notably video encoding applications. Must have something to do with how windows works with video codecs.

Don't confuse process with thread.

If you have a process using 50% of each CPU that doesn't necessarily mean the process is bouncing back and forth between CPUs. It could simply be quite efficiently running on both at the same time.

Actually, 2 threads taking turns waiting on each other is a perfect example of where you would want to set affinity. Let's say you have 2 threads, Thread A and Thread B, each doing a minuscule amount of work before waking the other and going to sleep.
Thread A runs on Core 0 and wakes Thread B. Since Thread A is using Core 0, Thread B goes on Core 1. Thread B then wakes Thread A, and Thread A runs on Core 0 again (since Thread B has Core 1 in use until it goes to sleep). Each thread tends to keep to its own CPU.

Because of the way each of the 2 threads is running with just over a 50% duty cycle, assuming nothing else is using the 2 cores, Speedstep, Power Now, or Cool n' Quiet will slow down the CPU! But if you set the affinity, it will run at full clock speed.

Also, when 2 threads are switching back and forth like this, the chances are pretty good that they are sharing data that you don't want transferred back and forth between the cores.
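The two-thread ping-pong described above can be sketched with a pair of events: each thread does a minuscule amount of work, wakes the other, and goes back to waiting. (The thread names and iteration count here are arbitrary.)

```python
import threading

# Two threads taking turns: each one wakes the other and goes to sleep,
# so neither thread is ever runnable at the same time as its partner --
# the alternating, ~50%-duty-cycle pattern that confuses
# frequency-scaling heuristics like Cool'n'Quiet.
a_turn, b_turn = threading.Event(), threading.Event()
log = []

def worker(name, my_turn, other_turn, rounds=5):
    for _ in range(rounds):
        my_turn.wait()      # sleep until the partner wakes us
        my_turn.clear()
        log.append(name)    # the "minuscule amount of work"
        other_turn.set()    # wake the partner, then go back to waiting

t1 = threading.Thread(target=worker, args=("A", a_turn, b_turn))
t2 = threading.Thread(target=worker, args=("B", b_turn, a_turn))
t1.start(); t2.start()
a_turn.set()                # Thread A goes first
t1.join(); t2.join()
print(log)  # strictly alternating: ['A', 'B', 'A', 'B', ...]
```

Because the events enforce strict alternation, the two threads never run concurrently, which is why, on two cores, each core sees only about half a load.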
 

Rilex

Senior member
Sep 18, 2005
447
0
0
Originally posted by: Smilin
This is most obvious during guest OS setup where VM additions are not present - setup runs like crap and takes forever. :(

That is what VT/Pacifica are for ;)