Peter Trend
Senior member
- Jan 8, 2009
Yes, as long as you have a motherboard that can accommodate a third card, preferably one that's 8x PCI-e electrical.
I don't know if I do. It's the Asrock Extreme4. It has enough space between slots though.
Anyone know if I could mix a GTX 560 with two GTX 460s for folding?
I took a look at your board and it looks like the last slot is 4x. I'm not sure how much this will affect folding performance, but with gaming there would be a performance hit for sure.
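For scale, here's a back-of-the-envelope sketch of what slot width actually means in bandwidth terms. These are PCIe 2.0 upper bounds, not folding benchmarks; folding traffic is small, which is why compute loads tend to care less about lane count than games do.

```python
# PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, which works out
# to roughly 500 MB/s of payload per lane, per direction.
PER_LANE_MBS = 500  # MB/s per lane, PCIe 2.0, one direction

for lanes in (4, 8, 16):
    print(f"x{lanes}: {PER_LANE_MBS * lanes / 1000:.1f} GB/s per direction")
```

So even the x4 slot has ~2 GB/s available, which a folding client is unlikely to saturate.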
Well, I don't know... I had trouble mixing a GTX 260 with 192 shaders and one with 216 shaders, let alone two different chips entirely. I doubt it will work, but things could have changed.
I mixed some 768MB 9600 GSO cards with 384MB 9600 GSO cards, but all of those cards had the older 96 SP configuration.
I probably won't bother mixing them then. If I get a 560 or two I will sell the 460.
I'm getting 47-51k PPD with an SMP and GPU client. 10-15k for the GTX 460 @840MHz, 35-36k for the 2600k @4.5GHz (1.38V bios / 1.41 CPU-Z).
Power taken at the wall went up to 245W with overclocking GPU/CPU.
~200 PPD/W
The gaming hit is only about 10% for 4x vs 8x. On SETI I see no difference with my GTX 260 between 16x and 4x, so I suspect folding will be the same.
I know this is slightly OT, but since you're on the subject, I have a related question regarding GPU performance. It appears that we crunchers typically base our GPU purchasing decisions on benchmarks geared entirely toward gamers and other types of users, not crunchers. And this is hardly our fault, seeing as most video card reviews/comparisons/roundups on the internet contain mostly gaming and multimedia benchmarks. I've never seen a DC benchmark in any video card review before... heck, even in broader reviews (full system reviews, CPU performance reviews, etc.) I only see DC benchmarks once in a blue moon.
Anyways, here's what I'm getting at: can we actually base our GPU choices (for crunching purposes) on the typical video card/GPU review with any sort of accuracy? And if so, how accurately do performance differences between cards, and performance changes between overclocks, in gaming/multimedia apps translate to the DC world?
There are posts over at the [H]ard forum claiming that if you are running 1 or 2 GPU clients alongside SMP -bigadv, you can reduce your bigadv TPF by using 7 cores instead of 8: -smp 7 -bigadv. This is for Win 7 rigs. You'll still get bigadv WUs because the SMP app first checks how many cores you have, then applies the switches.
Why/how? It's not absolutely clear. One possibility: since the latest SMP app is multithreaded, it requires "perfect" load balancing of the threads across all 8 cores to maximize performance, so -smp 8 -bigadv gives the best TPFs on a dedicated cruncher. If you run 1 or 2 GPU clients on a Win 7 rig, they grab between 6 and 12% (or more) of one thread, and that throws off the SMP load balancing.
As MarkFW pointed out, this is not the case with WinXP.
During the Holiday Race I found this to be true with the Linux SMP client too. I was running 2 GPU clients and one SMP client on Linux (6-core 1090T), and my maximum PPD was with -smp 5.
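A toy model (my own sketch, not from the [H] posts) of why fewer SMP threads can sometimes help: assume a barrier-synced SMP app runs at the pace of its slowest thread, and that each GPU client steals some fraction of one core, per the 6-12% figure above.

```python
def effective_cores(threads, cores=8, gpu_steal=0.10):
    """Toy model: a barrier-synced SMP app runs at its slowest thread's pace.
    With threads == cores, one thread shares a core with the GPU feeder and
    drags all the others down to its speed; with fewer threads, each thread
    gets a whole core to itself."""
    if threads >= cores:
        return cores * (1 - gpu_steal)  # everyone waits on the stolen core
    return threads

# With one GPU client at ~10% of a core, -smp 8 still comes out ahead...
print(effective_cores(8))  # 7.2 core-equivalents
print(effective_cores(7))  # 7.0
# ...but a hungrier GPU client tips the balance toward -smp 7:
print(effective_cores(8, gpu_steal=0.15))  # 6.8
```

Under these assumptions the break-even point is when the GPU client takes more than 1/cores of a core (12.5% on an 8-thread rig), which might be why mileage varies so much between setups.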
How does your GTX460 @ 840 reach 15K PPD? Mine is consistently around 11,300 PPD. Mine's at 820. Are you running -advmethods? Is that the difference? I'm not.
Yup, thanks biodoc for bringing this up.
My GPU shows ~2-3% usage in Task Manager and adds about 4 minutes per percent of bigadv.
One time I finished a uniprocessor WU that took about an hour while an SMP bigadv was working. I thought it would only slow the bigadv by a little bit ... lol it took three times as long to finish one percent - the bigadv didn't get anything done during that hour!
I think I will try the -smp 7 trick.
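For what it's worth, the "about 4 minutes per percent" figure above adds up to a surprisingly large per-WU cost:

```python
# Rough cost of the GPU client's CPU overhead on a bigadv WU,
# using the ~4 minutes per percent of bigadv quoted above.
extra_min_per_percent = 4
extra_hours = extra_min_per_percent * 100 / 60  # a WU is 100 "percents"
print(f"~{extra_hours:.1f} extra hours per bigadv WU")
```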
One thing I forgot to factor in - when you stop and start the client to test -smp 7 vs -smp 8, you lose a lot of points!
In any case, running -smp 7 seems to make total PPD worse for me. TPF went from ~29/30 min to ~33/34 min (Running Windows 7 x64).
Maybe more useful for people running more than one GPU.
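For scale: under the quick-return bonus, credit grows roughly with the square root of how quickly a WU is returned, so PPD falls faster than TPF rises. A sketch using the TPF numbers above, assuming the commonly cited bonus form credit ~ sqrt(deadline/elapsed) (treat the exponent as an approximation, not gospel):

```python
# With credit ~ elapsed**-0.5 and PPD = credit / elapsed_days,
# PPD scales roughly as TPF**-1.5.
tpf_smp8 = 29.5  # minutes, midpoint of the ~29/30 reported above
tpf_smp7 = 33.5  # minutes, midpoint of the ~33/34 reported above

slowdown = tpf_smp7 / tpf_smp8 - 1
ppd_ratio = (tpf_smp8 / tpf_smp7) ** 1.5

print(f"TPF: {slowdown:.0%} slower")      # ~14% longer frames
print(f"PPD: {1 - ppd_ratio:.0%} lower")  # ~17% fewer points per day
```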
Bummer. I guess I'll stop reading the [H]ard forum.
It'd be nice to have an 8 core chip so I can test this stuff before posting.
Looks like the P67 chipsets are defective.
Anandtech said:
The problem in the chipset was traced back to a transistor in the 3Gbps PLL clocking tree. The aforementioned transistor has a very thin gate oxide, which allows you to turn it on with a very low voltage. Unfortunately in this case Intel biased the transistor with too high of a voltage, resulting in higher than expected leakage current. Depending on the physical characteristics of the transistor the leakage current here can increase over time which can ultimately result in this failure on the 3Gbps ports.
You can coax the problem out earlier by testing the PCH at increased voltage and temperature levels. By increasing one or both of these values you can simulate load over time and that’s how the problem was initially discovered. Intel believes that any current issues users have with SATA performance/compatibility/reliability are likely unrelated to the hardware bug.
The fact that the 3Gbps and 6Gbps circuits have their own independent clocking trees is what ensures that this problem is limited to only ports 2 - 5 off the controller.
http://www.anandtech.com/show/4143/the-source-of-intels-cougar-point-sata-bug
Ditto. My experience has been that it is best for me to simply run SMP (all 8 cores) and all GPUs.
Also true. I plan to try -smp 7 again when I get my second GTX 460 repaired/replaced. I guess the old saying "your mileage may vary" is still true.
This last weekend I built a Sandy Bridge system:
64-bit Ubuntu 10.04 LTS
Core i7 2600K
Gigabyte GA-P67A-UD4-B3
Corsair Vengeance 2x4GB RAM, DDR3-1600, 1.5 V, 9-9-9-24
Corsair Hydro H70 CPU cooler
Antec 300 case
It seems to be stable @ 4.2 GHz for now. I'll try to push it later.
I'll do some testing on F@H after the primegrid race. I won't be doing any GPU crunching on it in the near future.
With my new SB 2600K @ 4.6GHz I am getting around 47k PPD right now with F@H Tracker v2, running a 6901 with around a 25 min TPF. It should be done tonight and I will post the run.
Run Linux on there and you should be nearer 60k.
I was honestly thinking about it. I've just never used Linux. How would you suggest I go about it? Is there a guide here on how to set it up?
I want to be able to switch back and forth between Win7 and Linux easily since I like to game, but I run F@H whenever I can to help with the different projects.