There is 'clustering support' and there is 'clustering support'.
Linux systems support a wide variety of clustering mechanisms, and there are many different types of clusters: load balancing clusters, failover clusters, high performance computing clusters, anything you can think of that could possibly benefit from multiple computers. Generally speaking, Linux is the de facto clustering operating system. It's what you will find when you go to movie studios, it's what NASA uses, and it's what you use when you want a supercomputer (Linux runs 70%+ of the machines on the Top500.org list of the most powerful computers; Windows runs 0). It's what drug companies use, it's what Google uses, etc. etc.
With movie studios this is mostly because:
A. It's very customizable. Most big places have their own programming staff.
B. They don't have to reveal their 'IP' to their competitors. It's all open source, so when they want to make modifications to the kernel or other parts of the system they don't have to work with other companies, like they would if they depended on Microsoft and such. Ironically, it enables them to keep more things secret.
C. It's cheap and a nice upgrade from legacy SGI boxes that used to be common.
They even use it for workstations and desktops, although of course Windows is fairly common also. I don't know about Macs.
However, of course, none of those reasons are probably very helpful to you. Your best bet is probably to contact them and find out their recommendations for Linux software if you want to use that.
Generally speaking, most closed-source applications require Redhat for support compatibility. Redhat support is a bit expensive, but its price includes phone tech support and such, depending on how much you want to spend (e.g. 24/7 support vs. 9-5 weekdays for phone tech). If you don't want to use Redhat and your application requires it, then the standard way to work around that without spending a lot of money is to use CentOS. CentOS is an operating system built by taking Redhat's source code from its FTP servers and using it to create as faithful a replication of Redhat as they can manage, so it should be very compatible, binary-software-wise.
If they require specific versions of Redhat (some software may require Redhat ES 3 for support, for instance), then that is going to restrict your hardware choices somewhat.
So the decision process would go: determine the application's requirements for supporting software and hardware resources, use those software requirements to pick an OS, then use the application and OS requirements to pick the hardware.
If there are no requirements for the operating system, then pick whatever you're most comfortable with. Suse, Redhat/Fedora, and Debian are going to be the most common systems used for these tasks.
Redhat has clustering add-ons for its system. Most of that revolves around file system support.
For instance, if you want to run a database with shared storage (say on an iSCSI box running Linux, or on a SAN (storage area network)) that is accessed from multiple Redhat servers simultaneously, then you would want to look at Redhat's clustering services. They include GFS2 (the Global File System), a clustered file system with a distributed file locking mechanism, plus clustered logical volume management, I/O fence management, and other features necessary for shared storage. In other words they focus on the needs of enterprise computing, which is probably not too relevant for what you're doing, although shared storage may help with workflow and performance if you're dealing with a dozen or so workstations and such.
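To give a concrete (if simplified) picture of what the clustered file system buys you: as far as I understand it, ordinary POSIX advisory locks on a GFS2 mount are coordinated cluster-wide by the distributed lock manager, so two servers can't stomp on the same file at once. Here is a minimal C sketch; the mount path and file name are made up for illustration:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical file on a shared GFS2 mount; the path is invented. */
    int fd = open("/mnt/shared/render.log", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct flock lk = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
                        .l_start = 0, .l_len = 0 };   /* length 0 = whole file */

    /* Blocks until no other process -- on any node, if the file system
       is a clustered one like GFS2 -- holds a conflicting lock. */
    if (fcntl(fd, F_SETLKW, &lk) < 0) { perror("fcntl"); return 1; }

    /* ... safely append a record here ... */

    lk.l_type = F_UNLCK;                  /* release the lock */
    fcntl(fd, F_SETLK, &lk);
    close(fd);
    return 0;
}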
For high performance clustering for scientific stuff or massive number crunching, from my limited understanding, what they generally use is a stripped-down Linux system: just the basic kernel, file system utilities, and such. Nodes would mostly consist of a couple of processors, some RAM, and some disk space. They would boot off the network with no OS really installed on them permanently. On the Linux side of things they would use special management software for monitoring the status of the various nodes, and then maybe some batch processing software for sending processing jobs to various nodes and monitoring job status. The cluster and software have to be flexible and fluid, since you will have people adding capacity or removing computers from the cluster for repair, so you will have bits and pieces of it winking in and out of existence. The software they run on it would be custom programmed (a lot of Fortran) for that cluster and would use message passing libraries such as MPI to do the actual distributing. This is what is commonly called a 'Beowulf cluster'.
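To give a flavour of what the MPI side looks like, here's a toy C sketch (not code from any real cluster): each process computes a stand-in partial result and rank 0 collects the sum. You'd compile it with mpicc and launch it across the nodes with mpirun:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, local, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes in the job? */

    local = rank + 1;                       /* stand-in for each node's real work */

    /* Combine every node's partial result into a single total on rank 0. */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of %d partial results: %d\n", size, total);

    MPI_Finalize();
    return 0;
}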
As per my limited understanding, for render nodes running commercial software the clustering support needed from the OS is nothing special. You install Linux on the 'slave nodes', then the rendering software designed for clustering. Then on your workstation, or 'master node', you would have a program that you interact with that does the job control, dispatches work to the various slave nodes, and takes care of load balancing and other details (see the toy sketch below). In this case the 'clustering support' required from Linux is very minimal; mostly it would be management software for booting the systems off the network, or hardware monitoring, or something like that just to make your life easier. Otherwise people do things like use idle workstations as their rendering cluster. For instance you may have a dozen machines people use during the daytime, but you leave them on overnight for rendering scenes. The rendering software takes care of the 'clustering' portion of it. Of course, in situations like that Windows could be used just as easily.
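Here's a toy illustration of the 'master node' half of that: splitting a frame range across whatever slave nodes happen to be up. Real render managers do this dynamically with job queues; the node names and frame counts here are invented:

#include <stdio.h>

int main(void)
{
    /* Invented node names and frame range, purely for illustration. */
    const char *nodes[] = { "render01", "render02", "render03" };
    const int num_nodes = 3;
    const int first_frame = 1, last_frame = 240;

    /* Round-robin assignment: frame f goes to node f mod num_nodes. */
    for (int f = first_frame; f <= last_frame; f++)
        printf("frame %4d -> %s\n", f, nodes[f % num_nodes]);

    return 0;
}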
Not that I have a lot of experience with it; that's just my understanding of the situation.
There are a variety of companies that produce hardware specifically designed for clustering. For instance, Linux dominates what are known as 'blade servers'. These are machines that are very minimalistic, so you can pack them into a small amount of space. Essentially what you have is a large box that fits in a rack, into which you can plug numerous blades, each consisting of a motherboard, RAM, CPU, and a disk. The large housing box takes care of the power supplies and network I/O, and maybe shared storage I/O. It's usually very modular, so you can use the space on things like redundant power supplies or additional network interfaces. That way you can pack much more processing power into a smaller space and do 'hotplug' servers, where you can add additional hardware without having to take the machines down or add more network taps and such.
One example of a company like that is Penguin Computing.
http://www.penguincomputing.com/
For example, they have their 'Blade Runner 4130 / 4140' machines. You have a single 4U chassis, which is a standard size for a rackmount computer, that can contain up to twelve blade servers. Each blade server can handle up to 8 GB of RAM and up to 4 AMD64 cores (two dual-core CPUs). So in the size of a standard workstation you can have 96 GB of RAM and 48 CPU cores. This comes with management software and support and all that, making it a fairly plug-and-play arrangement.
Of course, something like that is probably far beyond your budget, but they have other stuff that may be interesting, and there are a few other big companies like that which do clustering pretty well.
For better advice on this sort of thing, instead of my random ramblings, maybe some place like this mailing list would be informative:
http://linuxmovies.org/mailing.list.html Although unfortunately it looks like activity has died off somewhat.