
MPI on dual core processors?

f95toli

Golden Member
Does anyone know if there are any plans for an MPI implementation that supports dual-core processors?
And if so, will the two cores "behave" as two processors (from the MPI point of view that is, i.e. can I run one process on each core)?

Or maybe I should start by asking if such an implementation is even possible?
 
I would guess that MPI won't care - the OS takes care of scheduling processes & threads across CPUs and/or cores, and MPI operates well above this level, so it should work just as it does on multi-CPU machines.
 
Given that the dual-core CPUs have shared memory, if you could configure it that way, it would probably be most efficient to count each package (rather than each core) as a node, so you don't pay the overhead for passing messages where you could just use the shared memory system.
 
Originally posted by: CTho9305
Given that the dual-core CPUs have shared memory, if you could configure it that way, it would probably be most efficient to count each package (rather than each core) as a node, so you don't pay the overhead for passing messages where you could just use the shared memory system.

But that requires an implementation that "understands" that a dual-core processor can efficiently run two processes; otherwise there is no way to use the optimal topology in the program.

The cluster I use is based on a network of quad-Opterons (MPICH/Linux), and I can set both the number of nodes and the number of CPUs per node when I schedule a job; if there is a lot of communication overhead, that configuration can have a huge impact on speed.
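With MPICH that kind of node/CPU layout is usually expressed through a machine file passed to the launcher; a hypothetical sketch (the hostnames, process counts, and job name below are made up for illustration):

```shell
# machinefile: one line per host; ":n" requests n processes on that host.
node01:4
node02:4

# Launch 8 ranks total, 4 per quad-CPU box:
mpirun -np 8 -machinefile machinefile ./my_mpi_job
```

Changing the `:n` counts is how you trade processes-per-box against number of boxes, which is exactly the knob that matters when communication overhead is high.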
 
Originally posted by: CTho9305
Given that the dual-core CPUs have shared memory, if you could configure it that way, it would probably be most efficient to count each package (rather than each core) as a node, so you don't pay the overhead for passing messages where you could just use the shared memory system.

I haven't followed the dual-core news much - I expect processor affinity will become even more important. But again, I suspect it will happen at the OS level.

Probably your best bet would be to multithread the jobs on each node to take advantage of the multi-core architecture, then just start one job on each node. So within a node you can use shared memory, and only need message passing between nodes.

That's what I'm doing on our dual-Xeon cluster - though I actually start two jobs per node, because some non-thread-safe code I have to use bottlenecks it otherwise.
 