Thinking of re-doing my rigs

VirtualLarry

No Lifer
Aug 25, 2001
56,325
10,034
126
The problem is space. I have two desks (three, but one is behind some junk and can't be accessed right now); each one has a cubby-hole for the computer to sit in and a shelf (which holds a nice beefy UPS), and I have two 26" LCDs, one on each desk. I haven't installed the speakers yet, but I have three nice speaker sets, one for each computer, too.

I originally built two identical dual-cores, with overclocked E2140s. I plan (eventually) to upgrade those to E5200s. (Have the CPUs already.) These are in CoolerMaster Elite 330 cases, older ones that had separate wires for the audio jacks. IDE DVD burners, floppy drives, working front-panel USB and audio jacks. Both of these have Antec Basiq 500W PSUs.

I really like those, so I would like to keep them.

I also have some Antec 300 cases. One has an X48 DFI mobo with a Q6600 (under a Tuniq Tower) clocked at 3.6GHz, along with two GTX460s. Another has an MSI K9A2 Platinum with an AMD BE-2350 45W dual-core CPU and four 9600GSOs.

Both of these Antec 300 rigs fold (though the quad-GPU rig is powered down at the moment due to fan problems).

I also own another X48 DFI mobo, another Q6600, and another Tuniq Tower, and another Antec 300.

I also own four HD4850 cards, and a HD6870.

Here's what I was thinking of doing:
Pull out the four 9600GSOs, and install four HD4850s. Upgrade the BE-2350 to a 1075T hex-core (I would use a 1090T and overclock, except that mobo only has a four-pin ATX12V power connector, so overclocking is probably not wise). Upgrade the PSU from the EA650 to a Xion 800W that I have NIB. (The EA650 is currently using two PCI-E 6-pin dual splitters, and if an HD4850 takes 110W, four of them would be beyond what the EA650 can supply on the PCI-E rails. The Xion 800W has four PCI-E plugs.)
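
Rough numbers, assuming the commonly cited ~110W board power per HD4850: 4 × 110W = 440W for the cards alone, which works out to roughly 37A at 12V, before the CPU even takes its share of the 12V rails.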

Then I would sell that rig on Craigslist as an UBER GAMING RIG. (Six Cores, Four GPUs)

I would then take the Q6600 @ 3.6, install the HD6870, and sell that one as a gaming rig too.

Then I would take the two GTX460 1GB cards, install one into each of the E5200 rigs, and overclock those CPUs to at least 3.6GHz. They would run Win7, with the F@H GPU3 client running in the background on each.

I would need to replace the PSUs in the dual-cores, so I would pull the EA650 from the quad-GPU rig (after installing the Xion 800W) and put it into one of them; I have another EA650 NIB for the other.

I could also install EA500s instead, one in each of the dual-cores, as I think those have two PCI-E 6-pin connectors too.

What do you think?

brownstone

Golden Member
Oct 18, 2008
1,340
32
91
I think that sounds like a parts warehouse...and a big project!

So you would end up with two folding rigs, each with an E5200 and a GTX460? I'm all for condensing where possible, though my reason is power consumption. I'm guessing yours might be due to an upcoming inventory? ;)

VirtualLarry

No Lifer
Aug 25, 2001
56,325
10,034
126
I can't decide what to do with my F@H quad-GPU rig (9600GSO 96SP, two 768MB and two 384MB). Should I sell it as-is to some DC nut, with a note that there is some fan rattling inside? Should I pull all the 9600GSOs, put in the HD4850s and at least a quad-core CPU, and sell it as a gaming rig? Or should I pull the 9600GSOs, swap in single-slot GTS450 cards (when they are available), and swap in new case fan(s) to fix the rattle? (Although I suspect that the rattle is actually my second 9600GSO; due to a bug in Precision it hit 105C, which probably fubared the fan bearings.)

Philippart

Golden Member
Jul 9, 2006
1,290
0
0
Why do you want to add the ATI cards? At the same price you can get NVIDIA cards that are much more powerful in DC (except for MilkyWay).

Rudy Toody

Diamond Member
Sep 30, 2006
4,267
421
126
The other day, I had an urge to re-do my rigs. Fortunately, I was able to lie down, take a nap, and wait for the urge to go away.

VirtualLarry

No Lifer
Aug 25, 2001
56,325
10,034
126
I'm still trying to figure this out. More complications: I want to do some development work, I want to do some virtualization, and I want to be able to do both remotely.

I'm trying to figure out whether to run WHS or XP Pro on the server, and whether or not to run Win7 Pro on my desktop rigs.

Part of the problem is that if I set up my two desktop rigs with the P35-DS3R mobos (and, hopefully, Q9300s OCed to 3.0GHz) and GTX460 cards under Win7 Pro, then remoting in with RDP causes F@H to die, because it loses access to the GPUs. (As I understand it, RDP swaps in its own virtual display driver for the session, which is why CUDA apps lose the card.)

Does anyone know if this still happens with newer NV drivers? Surely they must have a solution, as I can't believe they require a monitor connected to their Tesla rack-mount compute servers. Supposedly, newer NV drivers no longer require extending the desktop onto dummy plugs to fold under Win7, so I'm wondering if NVIDIA figured out a way to access the cards for CUDA without involving the desktop graphics subsystem.
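
If anyone wants a quick way to test it without setting up F@H first, here's a minimal sketch (assuming the CUDA toolkit is installed; the file name is arbitrary and this isn't F@H-specific) that just asks the runtime whether it can still see the cards from inside an RDP session:

```
// rdp_cuda_check.cu -- hypothetical name; build with: nvcc rdp_cuda_check.cu -o rdp_cuda_check
// Run it once from the local console and once from inside an RDP session.
// If RDP still breaks CUDA, the RDP run should report zero usable devices
// (or an error), while the console run lists the cards.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("CUDA devices visible: %d\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  %d: %s (compute %d.%d)\n", i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```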

Can someone who happens to be running Win7 Pro with a Fermi card and the F@H GPU3 client test this, once the race is over?

Rudy Toody

Diamond Member
Sep 30, 2006
4,267
421
126
VirtualLarry said:
Supposedly, newer NV drivers no longer require extending the desktop onto dummy plugs to fold under Win7, so I'm wondering if NVIDIA figured out a way to access the cards for CUDA without involving the desktop graphics subsystem.

I'm running a 9800GT without plugs and without using the desktop graphics, under Linux64, so it appears the drivers take care of this problem. However, it should still be verified by a Windows user before you build your rig.

Edit: I queried the GPU state and it is considered a discrete GPU.
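
For anyone who wants to repeat the query, the same state can be read through the CUDA runtime; a minimal sketch (assuming the CUDA toolkit is installed, and not specific to my setup):

```
// gpu_state.cu -- hypothetical name; build with: nvcc gpu_state.cu -o gpu_state
// Prints whether device 0 is an integrated or a discrete GPU, and whether
// a display watchdog (kernel execution timeout) is attached to it.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        printf("no CUDA device found\n");
        return 1;
    }
    printf("%s: %s GPU, display watchdog %s\n",
           prop.name,
           prop.integrated ? "integrated" : "discrete",
           prop.kernelExecTimeoutEnabled ? "enabled" : "disabled");
    return 0;
}
```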