high-end rendering on 64-bit, multi-CPU (2+)

deadalvs

Member
Dec 2, 2006

Hey all.

I'm the «new» guy here... (oh no! I know...)

* * *

I'm doing a lot of 3D rendering and I'm looking for a fast machine. All of my rendering is done with Maxwell Render:
http://www.maxwellrender.com/
Maxwell will soon be available as a native 64-bit application for Windows and Linux.

* * *

Now, I have seen this machine here:
the T-650QX from Tyan
http://www.tyanpsc.com/Products/tabid/63/Default.aspx


I have heard that Windows XP 64-bit can only support two sockets, so my questions are pretty technical:
- Which 64-bit OS can support all the CPUs built into a system like the Tyan monster above? Can I use a normal Linux distribution, or do I have to use a special version of Windows, like Windows Server 2003 64-bit, to run my software?
- Maxwell Render itself is of course programmed to detect all the CPUs in a system, but I need to know about the OS itself.
- Where could the bottlenecks be?
- In such a large system, how is the load divided up and sent to each processor? Will one processor be designated to «feed» all the others?

* * *

Thanks a lot...

* * *

deadalvs



 

MagnusTheBrewer

IN MEMORIAM
Jun 19, 2004
As the Tyan site states, the T-650QX is MS Server 2003 and Linux ready. In general, one core passes the workload out to all the others. You would only need third-party software to balance workloads if you were going to combine this machine with others. It's your preference, but I say go with Server 2003.

Here is a good basic overview of digital animation: sizing, business model, and infrastructure. http://www-128.ibm.com/developerworks/web/library/wa-animstudio1/

The system you mentioned is probably overkill, although you haven't stated exactly what you intend to do with it. Welcome to the forums, and tell us what your plans are. :)
 

Nothinman

Elite Member
Sep 14, 2001
- Which 64-bit OS can support all the CPUs built into a system like the Tyan monster above? Can I use a normal Linux distribution, or do I have to use a special version of Windows, like Windows Server 2003 64-bit, to run my software?

It says you need Win2K3 Cluster Server if you want to use Windows. Linux has no artificial limitations, so it'll take as many CPUs as you give it, but I've never used a machine that large, and they don't say exactly how everything is connected. I believe all AMD64 builds of Linux have the NUMA and SMP options already enabled, so if it'll work at all, it should work out of the box.
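
If you want to see what NUMA layout the kernel actually detected, here's a quick Python sketch; I haven't tried it on a box that big, it just assumes the usual Linux sysfs paths:

    import glob
    import os

    # Each memory node the kernel found shows up under sysfs as nodeN;
    # its cpulist file names the CPUs local to that node's memory.
    for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        with open(os.path.join(node, "cpulist")) as f:
            print(os.path.basename(node), "-> CPUs", f.read().strip())

If nothing prints, the kernel wasn't built with NUMA support; a single node0 just means flat memory.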

You'd probably be better off calling them and asking for more details; with as much as the thing probably costs, I'm sure they'll be more than happy to work everything out with you to get you to buy it.

- Maxwell Render itself is of course programmed to detect all the CPUs in a system, but I need to know about the OS itself.

It looks like CONFIG_NR_CPUS defaults to 32 on AMD64, so you will likely have to compile a custom kernel to use more cores, although on a distribution like Debian or Ubuntu that's not very difficult.
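
Before recompiling anything, you can check what your current kernel was built with. A small sketch, assuming a distribution that ships its config in /boot or enables /proc/config.gz:

    import gzip
    import os
    import re

    # Distribution kernels usually leave their build config in one of these.
    candidates = ["/boot/config-" + os.uname().release, "/proc/config.gz"]
    config = next((p for p in candidates if os.path.exists(p)), None)

    if config is None:
        print("no kernel config found; check your distribution's packaging")
    else:
        opener = gzip.open if config.endswith(".gz") else open
        with opener(config, "rt") as f:
            for line in f:
                m = re.match(r"CONFIG_NR_CPUS=(\d+)", line)
                if m:
                    print(config, "was built for up to", m.group(1), "CPUs")

    print("CPUs currently visible to the OS:", os.cpu_count())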

- Where could the bottlenecks be?

That would be more of a hardware question, I would think.

- In such a large system, how is the load divided up and sent to each processor? Will one processor be designated to «feed» all the others?

That would be determined by the software doing the work. The Maxwell stuff will be managing all of the data and rendering on its own; the OS will just allocate memory and CPU time as requested. And unless you set each process's affinity (which I would recommend against, at least at the start), they'll all be considered equal (unless NUMA gives them different latencies to different sections of memory) and will run on any available CPU.
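
For what it's worth, pinning on Linux looks like this. A minimal sketch assuming a reasonably recent Python (os.sched_setaffinity is Linux-only); the core numbers 0-3 are made up:

    import os

    pid = 0  # 0 means "the calling process"

    # Which CPUs is this process currently allowed to run on?
    print("allowed CPUs:", sorted(os.sched_getaffinity(pid)))

    # Pin the process to cores 0-3 only (say, one socket's worth);
    # this assumes the machine actually has at least 4 cores.
    # As said above, leaving affinity alone is usually the better default.
    os.sched_setaffinity(pid, {0, 1, 2, 3})
    print("now pinned to:", sorted(os.sched_getaffinity(pid)))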
 

deadalvs

Member
Dec 2, 2006

Thanks, guys.

Actually, this machine is a little too much for me. I am thinking about investing in two nodes instead of five, but that raises the same questions, since there would be four Xeon CPUs. I would also stay under the 32-CPU limit (16 cores).

This system would be «just» a modelling and rendering machine; of course, if I used Linux and got deeper into it, I'd try to find some other apps.
I am actually a Mac user and use Windows only because this application, Maxwell, is more advanced on Windows for now. I hope this will change.
But my interests are heading towards Linux anyway.

* * *

I study architecture at ETH in Zurich and will finish just next year, so maybe I'll keep myself busy with a new small graphics office, and since Linux offers a lot of free software, that would be cool.

* * *

I just took note of something cool that would fit perfectly: the «Atoka» mainboards from http://www.supermicro.com/
I heard they are developing a 1U server with two boards that can cooperate, each with two Socket 771 sockets capable of taking Xeon 5300s.
But since I would just need a normal computer, I'd build it around these boards.

* * *

Thanks for the link, Nothinman, this is very interesting! But since for architecture I only need to create single images and no animation, I can leave all those details aside and stick with the render farm :)

* * *

So you guys think the 16-core machine mentioned above could be cool? I wrote two e-mails, to 1) Tyan and 2) another reseller of such hardware, to find out the details.

I'll let you know.

* * *

my portfolio: http://www.jumpgates.com/matthias/portfolio/

* * *

deadalvs

 

Mark R

Diamond Member
Oct 9, 1999
That high-end Tyan machine is basically a ready-made cluster in a box.

In its highest configuration it has 5 'nodes' (each node is essentially a separate server), each of which is a dual-socket board taking dual- or quad-core Xeons. A super-fast (10 Gigabit) LAN then connects the individual nodes together.

The problem with this configuration is that you've basically got 5 separate computers, albeit in the same box. Although the system isn't designed for it, in theory you could simply use it as 5 separate servers.

However, ideally you would run a cluster OS on this system; this way, the OS will automatically divide the work up among the different nodes. One node is in control and will ensure that as many nodes as possible have work to do, collecting the results back from the other nodes.

To run on such a system, you need an OS that supports clustering. Windows XP and Vista do not support clustering at all; your only option on that side is Windows 2003 Cluster Server.

Some versions of Linux do have support for clustering, and this could be an alternative.
 

drag

Elite Member
Jul 4, 2002
There is 'clustering support' and there is 'clustering support'.

Linux systems support a wide variety of clustering mechanisms, and there are many different types of clusters: load-balancing clusters, failover clusters, high-performance clusters; anything you can think of that could possibly benefit from multiple computers. Generally speaking, Linux is the de facto clustering operating system. It is what you will find when you go to movie studios. It's what NASA uses, and it's what you use when you want a supercomputer (in the Top500.org list of the most powerful computers, Linux runs 70%+ of them; Windows runs 0). It's what drug companies use, it's what Google uses, etc.

With movie studios this is mostly because:
A. It's very customizable. Most big places have their own programming staff.
B. They don't have to reveal their 'IP' to their competitors. It's all open source, so when they want to make modifications to the kernel or other systems, they don't have to work with other companies, as they would if they depended on Microsoft and such. Ironically, it enables them to keep more things secret.
C. It's cheap, and a nice upgrade from the legacy SGI boxes that used to be common.

They even use it for workstations and desktops, although of course Windows is fairly common also. I don't know about Macs.

However, none of those reasons are probably very helpful to you. Your best bet is probably to contact them and find out their recommendations for Linux software, if you want to use that.

Generally speaking, most closed-source applications require Red Hat for supported compatibility. Red Hat support is a bit expensive, but the price includes tech support, with phone calls and such depending on how much you want to spend (like 24/7 support vs. 9-5 weekdays for phone support). If you don't want to pay for Red Hat and your application requires it, the standard way to work around that without spending a lot of money is to use CentOS. CentOS is an operating system that takes Red Hat's source code from its FTP servers and uses it to create as faithful a replication of Red Hat as they can manage, so it should be very compatible, binary-software-wise.

If the software requires a specific version of Red Hat (some packages may require Red Hat ES 3 for support, for example), then that is going to restrict your hardware choices somewhat.

So the decision process would go: determine the application's requirements for supporting software and hardware resources; use those software requirements to pick an OS; then use the application and OS requirements to pick the hardware.

If there are no requirements for the operating system, then pick whichever you're most comfortable with. SUSE, Red Hat/Fedora, and Debian are going to be the most common systems used for these tasks.

Red Hat has clustering add-ons for its system. Most of that revolves around file system support.
For instance, if you want to run a database with shared storage (say, on an iSCSI box running Linux, or on a SAN (storage area network)) that is accessed from multiple Red Hat servers simultaneously, then you would want to look at Red Hat's clustering services. They include support for things like GFS2 (Global File System 2): a clustering file system, a distributed file-locking mechanism, clustered logical volume management, I/O fence management, and other features necessary for shared storage. In other words, they focus on the needs of enterprise computing, which is probably not too relevant to what you're doing, although shared storage may help with workflow and performance if you're dealing with a dozen or so workstations.

For high-performance clustering, for scientific stuff or massive number crunching, what they generally use, from my limited understanding, is a stripped-down Linux system: just the basic kernel, file system utilities, and such. Nodes would consist of little more than a couple of processors, some RAM, and some disk space. They would boot off the network, with no OS really installed on them permanently. On the Linux side of things, they would use special management software for monitoring the status of the various nodes, and then maybe some batch-processing software for sending processing jobs to the nodes and monitoring job status. The cluster and software have to be flexible and fluid, as you will have people adding capacity or removing computers from the cluster for repair, so you will have bits and pieces of it winking in and out of existence. The software run on it would be custom-programmed (a lot of Fortran) for that cluster and would use message-passing libraries such as MPI to do the actual distribution. This is what is commonly called a 'Beowulf cluster'.
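
To give a flavour of the message-passing style, here's a toy sketch; purely illustrative, assuming the mpi4py Python bindings and an MPI runtime are installed, and the frame counts are made up:

    # run with e.g.:  mpiexec -n 4 python render_split.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()  # this process's id within the cluster job
    size = comm.Get_size()  # total number of processes across all nodes

    frames = list(range(100))   # pretend these are frames to render
    mine = frames[rank::size]   # simple round-robin split of the work

    print("process", rank, "of", size, "takes frames", mine[:5], "...")

The MPI runtime decides which node each process lands on and hides the network plumbing from the program.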

As per my limited understanding, for rendering nodes using commercial software, the clustering support in the OS is nothing special. You install Linux on the 'slave nodes', then the rendering software designed for clustering. Then on your workstation, the 'master node', you would have a program that you interact with, which does the job control, dispatches work to the various slave nodes, and takes care of load balancing and other details. In this case the 'clustering support' required from Linux is very minimal; mostly it would be management software for booting the systems off the network, or hardware monitoring, or something like that just to make your life easier. Otherwise, people do things like use idle workstations as their rendering cluster: for instance, you may have a dozen machines people use during the daytime, but you leave them on overnight for rendering scenes. The rendering software takes care of the 'clustering' portion of it. Of course, in situations like that, Windows could be used just as easily.
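
That master/slave dispatch is basically a work queue. Here's a single-machine stand-in sketched with Python's multiprocessing; a real render manager would talk to slave daemons over the network instead, and render_frame is a made-up placeholder:

    from multiprocessing import Pool
    import os

    def render_frame(frame):
        # stand-in for handing one frame to the render engine
        return "frame %d rendered by worker pid %d" % (frame, os.getpid())

    if __name__ == "__main__":
        # the pool plays the master node: it queues frames and hands
        # them to whichever worker is idle, i.e. it load-balances
        with Pool() as pool:
            for result in pool.imap_unordered(render_frame, range(8)):
                print(result)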

Not that I have a lot of experience with it; this is just my understanding of the situation.

There are a variety of companies that produce hardware specifically designed for clustering. For instance, Linux dominates what are known as 'blade servers'. These are machines that are very minimalistic, so you can pack them into a small amount of space. Essentially you have a large box that fits in a rack, into which you can plug numerous blades, each consisting of a motherboard, RAM, CPU, and a disk. The housing box takes care of the power supplies and network I/O, and maybe shared storage I/O. It's usually very modular, so you can use the space for things like redundant power supplies or additional network interfaces. That way you can pack much more processing power into a smaller space and run 'hot-plug' servers, where you can add hardware without having to take the machines down or add more network taps and such.

One example of a company like that is Penguin Computing. http://www.penguincomputing.com/
For example, they have their 'Blade Runner 4130 / 4140' machines. You get a single 4U chassis, a standard size for a rackmount computer, that can contain up to twelve blade servers. Each blade server can handle up to 8 GB of RAM and up to 4 AMD64 cores (two dual-core CPUs). So in roughly the size of a standard workstation you can have 96 GB of RAM and 48 CPU cores. This comes with management software and support and all that, making it a fairly plug-and-play arrangement.

Of course, something like that is probably far beyond your budget, but they have other stuff that may be interesting, and there are a few other big companies like that that do clustering pretty well.

For better advice on this sort of thing, instead of my random ramblings, a place like this mailing list might be informative:
http://linuxmovies.org/mailing.list.html Although, unfortunately, it looks like activity has died off somewhat.
 

drag

Elite Member
Jul 4, 2002
Oh, if you want some stuff to play around with, check out http://64studio.com/
It's a 64-bit Debian-based distribution specifically designed for studio use, with related 'Free Software'. It's fairly new, but may be fun.

Free software for 3D is fairly decent with Blender, a 3D animation and modelling suite. It has a reputation for being very odd, as its UI doesn't really conform to the conventions of packages like Maya and 3ds Max, but it is fast and capable. It comes with built-in support for YafRay, a raytracing renderer (whose clustering support is weak), and of course Blender has its own built-in renderer as well.

For audio work, Free software is pretty capable, with the caveat that you have to be a bit selective about which audio card you pick, though you can find nice ones from very cheap to very high-end. And again it's a bit strange, as the setup with the JACK audio server and such is unusual compared to what you typically find on Windows or Mac.

For Free software video support: compositing is pretty decent and transcoding is very strong, but people consider NLE (non-linear editing) support to be Linux's weakest link in the multimedia department. Kino is available at the low end, and Cinelerra is available higher up, but the latter is considered a bit awkward and has traditionally suffered from stability issues (though that seems to have improved of late).

Of course, this is mostly unrelated to your topic, but I thought you might want something to play around with a bit, to see more of what is available to you.
 

deadalvs

Member
Dec 2, 2006

I did some research on CPU prices and it's a bummer...

Why the hell is every system so damn overpriced??

* * *

Why should a user pay double when only getting 20% more speed?
I mean, the Dell Precision 690 with two quad-core 1.86 GHz CPUs costs nearly two-thirds more than a standard quad (2x2 cores) system, and the bottom line is that they have exactly the same benchmarks... it's so frustrating!

* * *

deadalvs