Here's the skinny on pthreads vs OpenMP:
Pthreads uses explicit thread creation and destruction, and you manage everything by hand: each thread gets its own stack and starts life in a function you write. It's pretty raw.
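To make that concrete, here's roughly what the raw pthreads style looks like. This is a minimal sketch, not anyone's production code; the thread count and worker function are made up, and it should build with something like "cc -pthread".

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4          /* arbitrary, just for illustration */

    /* Every thread begins life in a start routine like this one. */
    static void *worker(void *arg)
    {
        long id = (long)arg;
        printf("hello from thread %ld\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NTHREADS];

        /* Explicit creation... */
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&threads[i], NULL, worker, (void *)i);

        /* ...and explicit teardown. */
        for (long i = 0; i < NTHREADS; i++)
            pthread_join(threads[i], NULL);

        return 0;
    }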
OpenMP is an interface, usually implemented as a wrapper around Pthreads, that hides thread creation, destruction, and management. Mostly, OpenMP lets you decorate loops with #pragmas to automagically make them parallel. Not necessarily correct, but parallel. Both interfaces have the 'thread' abstraction, and both use shared memory for communication.
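And here's the same flavor of thing in OpenMP, again just a minimal sketch with a made-up array size; compile with whatever OpenMP flag your compiler wants (for gcc that's -fopenmp). The pragma is essentially the whole interface: the runtime creates, schedules, and tears down the threads behind your back.

    #include <stdio.h>

    #define N 1000000           /* arbitrary, just for illustration */

    static double a[N], b[N];

    int main(void)
    {
        double sum = 0.0;

        /* Decorate the loop and the runtime parallelizes it; the
           reduction clause keeps the shared accumulation correct. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            a[i] = 2.0 * b[i];
            sum += a[i];
        }

        printf("sum = %f\n", sum);
        return 0;
    }

The "not necessarily correct" part is real: if that loop carried a dependence from one iteration to the next, the pragma would happily parallelize it anyway and you'd get garbage.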
I gave a seminar on the OpenMP interface in 2006. You can find my slides here:
http://pages.cs.wisc.edu/~gibson/talks/openmp.ppt
One thing I forgot to mention is Intel's TBB (threadingbuildingblocks.org/), which is everything including the kitchen sink as far as x86 parallel programming is concerned. They also have some handy tutorials. So do I, though my tutorial somewhat assumes that the reader is at UW-Madison:
http://pages.cs.wisc.edu/~gibson/tbbTutorial.html
As long as you can build C code at all, you can probably already use pthreads without installing any new packages. OpenMP, on the other hand, requires compiler support; I think gcc has had it for a while now (the -fopenmp flag), but I'm not sure when it landed. I have always used Sun Studio when writing OpenMP codes.
Regarding Python options, there are also experimental Python interpreters that do allow multiple threads to run in parallel (PyPy? I don't recall which, but I've seen it in at least two projects). The problem with Python is that it's interpreted, and hence slower than native code, so you're already giving up performance when performance is usually the only reason to parallelize in the first place (here, I assume you're not using P2C).