
3D acceleration over network

Mark R

Diamond Member
We're installing a system like this at work for viewing 3D data.

The problem we've found is that even 2GB Xeon/Quadro workstations bog down with the large datasets we use, and more importantly having to stream a 1-2 GB data file from server to workstation just to start analysis is a problem.
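To put a rough number on that streaming problem, here's a quick back-of-the-envelope calculation (a sketch assuming ideal wire speed with no protocol overhead; real throughput will be lower):

```python
# Back-of-the-envelope transfer times for a 2 GB dataset.
# Assumes the link runs at full wire speed, which it won't in practice.
dataset_bytes = 2 * 1024**3  # 2 GB


def transfer_seconds(link_mbps: float) -> float:
    """Seconds to move the dataset over a link of the given megabit/s rate."""
    return dataset_bytes * 8 / (link_mbps * 1_000_000)


print(f"Gigabit LAN:  {transfer_seconds(1000):.0f} s")  # ~17 s
print(f"100 Mbit WAN: {transfer_seconds(100):.0f} s")   # ~172 s, nearly 3 minutes
```

And that's just to start the analysis, before the workstation has rendered a single frame.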

I don't know much about the system, but from what I understand, there is a server which holds the data on local disks. It's equipped with a powerful 3D accelerator with something like 4 GB of RAM. A 'thin-client' program on the PCs can then be used to request views of the data, which are rendered by the server.

This seemed a bit of a strange idea at first, but it makes sense. Anyone else heard of such a system being used?
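The win is that only tiny view requests go up the wire and a rendered frame comes back, so the 1-2 GB dataset never leaves the server. A minimal sketch of that request/response shape (all names and sizes here are made up for illustration, not from any real product):

```python
import json

# Hypothetical sizes, purely for illustration.
DATASET_BYTES = 2 * 1024**3             # the 1-2 GB volume stays on the server
RENDERED_VIEW_BYTES = 1280 * 1024 * 3   # one uncompressed 1280x1024 RGB frame


def make_view_request(azimuth: float, elevation: float, zoom: float) -> bytes:
    """Client side: a view request is just a few numbers, not the data."""
    return json.dumps({"az": azimuth, "el": elevation, "zoom": zoom}).encode()


request = make_view_request(45.0, 30.0, 1.5)
print(len(request))                          # tens of bytes over the wire
print(RENDERED_VIEW_BYTES / DATASET_BYTES)   # the reply is a tiny fraction of the dataset
```

Even an uncompressed full-screen frame is well under 1% of the dataset size, which is why the idea makes sense over slow links.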
 
Hmmm. The first thing that came to mind was to just plunk an external SCSI enclosure next to that server with some hot-swap trays: copy the data to a drive, spin the drive down, pull it out, and sneakernet it to the workstation. Lowbrow, but hey. 🙂

Presumably you use an all-gigabit network already. I can't claim any firsthand depth in optimizing Gb networks, but if everything supports jumbo frames, then try enabling that option if it isn't already. Also consider some NICs and a switch that support link aggregation. You could plunk a quad-port Pro/1000MT into the server and workstations, and aggregate the four ports into one 4Gbit pipeline. Those look to be about US$400-$500 a crack. There are dual-port models too.
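Rough numbers on what those two tweaks buy you (idealized arithmetic, ignoring protocol overhead, and assuming the aggregated link balances perfectly, which a single TCP stream often won't):

```python
dataset_bytes = 2 * 1024**3  # 2 GB

# Jumbo frames: fewer frames means fewer per-packet interrupts on each end.
standard_frames = dataset_bytes // 1500  # ~1.4 million frames at standard MTU
jumbo_frames = dataset_bytes // 9000     # ~240 thousand at a 9000-byte MTU

# Link aggregation: four 1 Gbit ports bonded into one logical pipe.
seconds_1gbit = dataset_bytes * 8 / 1e9  # ~17 s at ideal wire speed
seconds_4gbit = seconds_1gbit / 4        # ~4.3 s, assuming perfect balancing

print(standard_frames, jumbo_frames)
print(f"{seconds_1gbit:.1f} s -> {seconds_4gbit:.1f} s")
```

The jumbo-frame win is mostly CPU overhead rather than raw bandwidth, but on a box that's already bogged down, that can matter.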

I can't envision a single mondo-powerful server outperforming a bunch of individual workstations if more than one person will be hitting it at a time. What about building or buying better workstations, and making the greenhorn new hires suffer with the old ones? 🙂
 
What you're looking for sounds like a cross between a few different things:

Fibre based SAN for the data
Render farm with multiple servers sharing the load

What this will do is store all of the data in one place, and since the farm is doing the rendering, the local workstation doesn't have to.

Depending on the software you are using, some packages include network rendering for free and some charge more for it.

You could probably also do this without the SAN, using a larger main server node that holds the data.
That would probably depend on how many different projects are being worked on at the same time.
 
After rereading it for the third time 😛

Sounds almost like a cross between a render farm and a terminal server.

The server does the rendering, and the terminal server is used to interact with/view what the server is doing.

What software is it using?
 
I think this is the system we're getting - TeraRecon. But I'm not at all involved with the installation of it, so I'm not really sure - just overheard the name a few times.

Presumably you use an all-gigabit network already.
Some of it is gigabit, but part of the problem is that we're on multiple campuses, so we're heavily reliant on WAN links between them.

Oh, and LOL at the sneakernet idea. Believe me, I can't think of a less suitable approach 🙂
 