Anybody going to Sandybridge for crunching?


ZipSpeed

Golden Member
Aug 13, 2007
1,302
169
106
I took a look at your board and it looks like the last slot is 4x. Not sure how much this will affect folding performance, but I know with gaming there will be a performance hit for sure.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,560
14,513
136
Anyone know if I could mix a GTX 560 with two GTX 460s for folding?

Well, I don't know... but I had trouble mixing a GTX 260 with 192 shaders and one with 216 shaders, let alone a different chip. I doubt it, but things could have changed.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,341
10,044
126
I mixed some 768MB 9600GSO cards with 384MB 9600GSO cards, but all of those cards had the older 96SP configuration.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
ZipSpeed said:
I took a look at your board and it looks like the last slot is 4x. Not sure how much this will affect folding performance, but I know with gaming there will be a performance hit for sure.

The gaming hit is only about 10% vs 8x. On SETI I see no difference with my GTX 260 between 16x and 4x, so I suspect that folding will be the same.
 

Peter Trend

Senior member
Jan 8, 2009
405
1
0
Markfw said:
Well, I don't know... but I had trouble mixing a GTX 260 with 192 shaders and one with 216 shaders, let alone a different chip. I doubt it, but things could have changed.

VirtualLarry said:
I mixed some 768MB 9600GSO cards with 384MB 9600GSO cards, but all of those cards had the older 96SP configuration.

I probably won't bother mixing them then. If I get a 560 or two I will sell the 460.

I'm getting 47-51k PPD with an SMP and GPU client. 10-15k for the GTX 460 @840MHz, 35-36k for the 2600k @4.5GHz (1.38V bios / 1.41 CPU-Z).
Power taken at the wall went up to 245W with overclocking GPU/CPU.
~200 PPD/W
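A quick sanity check on that last figure, using only the numbers above (pure arithmetic, nothing beyond what's already quoted):

```python
# PPD-per-watt check from the figures quoted above.
ppd_low, ppd_high = 47_000, 51_000   # combined SMP + GPU client output
watts_at_wall = 245                  # wall draw with the GPU/CPU overclocks

print(f"{ppd_low / watts_at_wall:.0f}-{ppd_high / watts_at_wall:.0f} PPD/W")
# prints roughly 192-208 PPD/W, i.e. ~200 PPD/W
```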
 

biodoc

Diamond Member
Dec 29, 2005
6,262
2,238
136
Peter Trend said:
I probably won't bother mixing them then. If I get a 560 or two I will sell the 460.

I'm getting 47-51k PPD with an SMP and GPU client. 10-15k for the GTX 460 @840MHz, 35-36k for the 2600k @4.5GHz (1.38V bios / 1.41 CPU-Z).
Power taken at the wall went up to 245W with overclocking GPU/CPU.
~200 PPD/W

There are posts over at the [H]ard forum claiming that if you are running 1 or 2 GPU clients in combination with -smp -bigadv, you can reduce your bigadv TPF by using 7 cores instead of 8: -smp 7 -bigadv. This is for Win 7 rigs. You'll still get bigadv WUs because the SMP app first checks how many cores you have, then applies the switches.

Why/how? It's not absolutely clear. One possibility is that since the latest SMP app is multithreaded, it requires "perfect" load balancing of the threads across all 8 cores to maximize performance, so -smp 8 -bigadv gives the best TPFs on a dedicated cruncher. If you run 1 or 2 GPU clients on a Win 7 rig, they will grab between 6 and 12% (or more) of one core, and that throws off the SMP "load balancing".

As MarkFW pointed out, this is not the case with WinXP.

During the Holiday Race, I found this to be true with the Linux SMP client too. I was running 2 x GPU and one SMP client on Linux (6-core 1090T) and my maximum PPD was when I ran -smp 5.
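A crude way to picture that load-balancing argument, if it helps - this is only a back-of-the-envelope model (lockstep threads, GPU feeding eating a fixed slice of one core), not measured data:

```python
# Toy model of the "reserve a core" idea -- illustrative only, not data.
# Assumptions: the SMP threads run in lockstep, so the WU advances at the
# pace of its slowest thread; GPU feeding eats a fixed slice of one core.

def relative_smp_speed(cores, smp_threads, gpu_steal):
    """SMP throughput relative to a dedicated all-core run (1.0 = ideal).

    gpu_steal: fraction of one core taken by GPU feeding (0.10 = 10%).
    If the SMP client uses every core, one of its threads shares a core
    with the GPU client and gates the whole work unit.
    """
    slowest_share = (1.0 - gpu_steal) if smp_threads >= cores else 1.0
    return smp_threads * slowest_share / cores

for steal in (0.06, 0.12, 0.20):
    full = relative_smp_speed(8, 8, steal)    # -smp 8 -bigadv + GPU client(s)
    seven = relative_smp_speed(8, 7, steal)   # -smp 7 -bigadv + GPU client(s)
    print(f"steal {steal:.0%}: -smp 8 -> {full:.3f}, -smp 7 -> {seven:.3f}")
```

In this sketch, -smp 7 only wins once the GPU clients cost more than about an eighth of a core (12.5%), which would explain why the sweet spot moves around between rigs and between 1 and 2 GPU clients.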
 

Sunny129

Diamond Member
Nov 14, 2000
4,823
6
81
ZipSpeed said:
I took a look at your board and it looks like the last slot is 4x. Not sure how much this will affect folding performance, but I know with gaming there will be a performance hit for sure.

bryanW1995 said:
The gaming hit is only about 10% vs 8x. On SETI I see no difference with my GTX 260 between 16x and 4x, so I suspect that folding will be the same.
I know this is slightly OT, but since you're on the subject, I have a related question regarding GPU performance. It appears that we crunchers typically base our GPU purchasing decisions on benchmarks that are geared entirely toward gamers and other types of users, not crunchers. And this is hardly our fault, seeing as most video card reviews/comparisons/roundups on the internet contain mostly gaming and multimedia benchmarks. I've never seen a DC benchmark in any video card review before... heck, even in broader reviews (for example full system reviews, CPU performance reviews, etc.) I only see DC benchmarks once in a blue moon.

Anyway, here's what I'm getting at: can we actually base our GPU choices (for crunching purposes) on the typical video card/GPU review with any sort of accuracy? And if so, how accurately do performance differences between cards, and performance changes between overclocks, in gaming/multimedia apps translate to the DC world?
 

ZipSpeed

Golden Member
Aug 13, 2007
1,302
169
106
Sunny129 said:
I know this is slightly OT, but since you're on the subject, I have a related question regarding GPU performance. It appears that we crunchers typically base our GPU purchasing decisions on benchmarks that are geared entirely toward gamers and other types of users, not crunchers. And this is hardly our fault, seeing as most video card reviews/comparisons/roundups on the internet contain mostly gaming and multimedia benchmarks. I've never seen a DC benchmark in any video card review before... heck, even in broader reviews (for example full system reviews, CPU performance reviews, etc.) I only see DC benchmarks once in a blue moon.

Anyway, here's what I'm getting at: can we actually base our GPU choices (for crunching purposes) on the typical video card/GPU review with any sort of accuracy? And if so, how accurately do performance differences between cards, and performance changes between overclocks, in gaming/multimedia apps translate to the DC world?

Hardware Canucks is the only site I know of that includes F@H in all their testing. For example, here's the data from their latest GTX 560 Ti review:

http://www.hardwarecanucks.com/foru...-nvidia-geforce-gtx-560-ti-1gb-review-17.html

I think we can correlate gaming performance with crunching performance. I can't speak for other GPGPU projects but Folding seems to see some nice benefits with a higher shader clock. And like gaming, the more CUDA cores, the better the performance.
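If you want a very rough rule of thumb in that direction, you could scale a known PPD figure by CUDA core count and shader clock. This is an approximation only (WU type matters a lot), and the core counts and stock shader clock below are spec-sheet numbers, not folding benchmarks:

```python
# Rough scaling: treat folding PPD as ~proportional to cores x shader clock.
# Approximation only; real PPD varies a lot with the work unit.

def estimate_ppd(known_ppd, known_cores, known_shader_mhz,
                 new_cores, new_shader_mhz):
    """Scale a measured PPD figure to a different card/clock."""
    return known_ppd * (new_cores / known_cores) * (new_shader_mhz / known_shader_mhz)

# Example: GTX 460 (336 cores, 840 MHz core = 1680 MHz shader, ~11,400 PPD
# on the 680x WUs mentioned earlier) scaled to a stock GTX 560 Ti
# (384 cores, ~1645 MHz shader).
print(round(estimate_ppd(11_400, 336, 1680, 384, 1645)))   # ~12,800 PPD
```

I wouldn't trust that to better than 10-20% either way, but it's probably good enough for narrowing down a purchase.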
 

GLeeM

Elite Member
Apr 2, 2004
7,199
128
106
biodoc said:
There are posts over at the [H]ard forum claiming that if you are running 1 or 2 GPU clients in combination with -smp -bigadv, you can reduce your bigadv TPF by using 7 cores instead of 8: -smp 7 -bigadv. This is for Win 7 rigs. You'll still get bigadv WUs because the SMP app first checks how many cores you have, then applies the switches.

Why/how? It's not absolutely clear. One possibility is that since the latest SMP app is multithreaded, it requires "perfect" load balancing of the threads across all 8 cores to maximize performance, so -smp 8 -bigadv gives the best TPFs on a dedicated cruncher. If you run 1 or 2 GPU clients on a Win 7 rig, they will grab between 6 and 12% (or more) of one core, and that throws off the SMP "load balancing".

As MarkFW pointed out, this is not the case with WinXP.

During the Holiday Race, I found this to be true with the Linux SMP client too. I was running 2 x GPU and one SMP client on Linux (6-core 1090T) and my maximum PPD was when I ran -smp 5.

Yup, thanks biodoc for bringing this up.

My GPU client shows ~2-3% CPU usage in Task Manager and adds about 4 minutes per percent to the bigadv.

One time I finished a uniprocessor WU that took about an hour while an SMP bigadv was working. I thought it would only slow the bigadv by a little bit ... lol it took three times as long to finish one percent - the bigadv didn't get anything done during that hour!

I think I will try the -smp 7 trick.
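Just to put a number on that, from the figures above:

```python
# Cost of the GPU client on a bigadv WU, using the ~4 min/percent above.
extra_min_per_percent = 4
percents_per_wu = 100
print(f"~{extra_min_per_percent * percents_per_wu / 60:.1f} extra hours per bigadv WU")
# -> ~6.7 extra hours per WU
```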
 

VirtualLarry

No Lifer
Aug 25, 2001
56,341
10,044
126
Peter Trend said:
I probably won't bother mixing them then. If I get a 560 or two I will sell the 460.

I'm getting 47-51k PPD with an SMP and GPU client. 10-15k for the GTX 460 @840MHz, 35-36k for the 2600k @4.5GHz (1.38V bios / 1.41 CPU-Z).
Power taken at the wall went up to 245W with overclocking GPU/CPU.
~200 PPD/W

How does your GTX460 @ 840 reach 15K PPD? Mine is consistently around 11,300 PPD. Mine's at 820. Are you running -advmethods? Is that the difference? I'm not.
 

Peter Trend

Senior member
Jan 8, 2009
405
1
0
VirtualLarry said:
How does your GTX460 @ 840 reach 15K PPD? Mine is consistently around 11,300 PPD. Mine's at 820. Are you running -advmethods? Is that the difference? I'm not.

I'm not running -advmethods, but I only got 15k with certain WUs... P10942 & P10949 got 53sec/frame giving 15,079 PPD. P11227 and P11254 also got 53sec/frame giving 14,867 PPD.

Usually I get 11,300 - 11,500PPD (crunching 6801, 6805, 6806).
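For reference, this is how TPF maps to PPD on these GPU WUs (no quick-return bonus on the GPU client). The per-WU base credits below are just back-calculated from my PPD numbers, so treat them as approximate:

```python
# PPD = points_per_WU * seconds_per_day / (TPF_seconds * frames_per_WU)

def gpu_ppd(points_per_wu, tpf_seconds, frames=100):
    return points_per_wu * 86_400 / (tpf_seconds * frames)

print(round(gpu_ppd(925, 53)))   # ~15,079 PPD, matches P10942/P10949
print(round(gpu_ppd(912, 53)))   # ~14,867 PPD, matches P11227/P11254
```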
 

Peter Trend

Senior member
Jan 8, 2009
405
1
0
GLeeM said:
Yup, thanks biodoc for bringing this up.

My GPU client shows ~2-3% CPU usage in Task Manager and adds about 4 minutes per percent to the bigadv.

One time I finished a uniprocessor WU that took about an hour while an SMP bigadv was working. I thought it would only slow the bigadv by a little bit ... lol it took three times as long to finish one percent - the bigadv didn't get anything done during that hour!

I think I will try the -smp 7 trick.

One thing I forgot to factor in - when you stop and start the client to test -smp 7 vs -smp 8, you lose a lot of points!

In any case, running -smp 7 seems to make total PPD worse for me. TPF went from ~29/30 min to ~33/34 min (Running Windows 7 x64).

Maybe more useful for people running more than one GPU.
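For what it's worth, the bonus makes that TPF change hurt more than the raw minutes suggest. Assuming the standard quick-return bonus formula (credit = base * sqrt(k * deadline / days to finish)), PPD on the same bigadv project scales roughly with 1/TPF^1.5:

```python
# Relative bigadv PPD under the quick-return bonus: base, k and deadline
# cancel when comparing the same project, leaving PPD ~ 1 / TPF^1.5.

def relative_bigadv_ppd(tpf_new_min, tpf_old_min):
    return (tpf_old_min / tpf_new_min) ** 1.5

drop = 1 - relative_bigadv_ppd(33.5, 29.5)   # -smp 7 vs -smp 8 TPFs above
print(f"~{drop:.0%} lower PPD with -smp 7")  # roughly 17% worse
```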
 

biodoc

Diamond Member
Dec 29, 2005
6,262
2,238
136
Peter Trend said:
One thing I forgot to factor in - when you stop and start the client to test -smp 7 vs -smp 8, you lose a lot of points!

In any case, running -smp 7 seems to make total PPD worse for me. TPF went from ~29/30 min to ~33/34 min (Running Windows 7 x64).

Maybe more useful for people running more than one GPU.

Bummer. I guess I'll stop reading the [H]ard forum.

It'd be nice to have an 8 core chip so I can test this stuff before posting. ;)
 

Pokey

Platinum Member
Oct 20, 1999
2,766
457
126
biodoc said:
Bummer. I guess I'll stop reading the [H]ard forum.

It'd be nice to have an 8 core chip so I can test this stuff before posting. ;)

There is contradictory info out there. Now that Fermis are available, I found one blurb saying that if you are running multiple Fermis you should reserve one full core for the GPUs, but that didn't work for me. I have also seen recommendations to just run them all. My experience has been that it is best for me to simply run -smp (all 8 cores) and all GPUs.

I guess the old saying “your mileage may vary” is still true.
 

Peter Trend

Senior member
Jan 8, 2009
405
1
0
*snip*
Looks like the P67 chipsets are defective.

WHAT? D:
Wait, does this matter for folding? I mean, how badly damaged are they? :\

Edit: Sorry, I didn't read the second Anandtech article. Okay, so we should try to use SATA as little as possible or they may fail within 3 years...but it's RMA-able, so not too bad, I guess.

Anandtech said:
The problem in the chipset was traced back to a transistor in the 3Gbps PLL clocking tree. The aforementioned transistor has a very thin gate oxide, which allows you to turn it on with a very low voltage. Unfortunately in this case Intel biased the transistor with too high of a voltage, resulting in higher than expected leakage current. Depending on the physical characteristics of the transistor the leakage current here can increase over time which can ultimately result in this failure on the 3Gbps ports.

You can coax the problem out earlier by testing the PCH at increased voltage and temperature levels.
By increasing one or both of these values you can simulate load over time and that’s how the problem was initially discovered. Intel believes that any current issues users have with SATA performance/compatibility/reliability are likely unrelated to the hardware bug.

The fact that the 3Gbps and 6Gbps circuits have their own independent clocking trees is what ensures that this problem is limited to only ports 2 - 5 off the controller.
http://www.anandtech.com/show/4143/the-source-of-intels-cougar-point-sata-bug

I think we might need to watch the PCH and PLL voltages; if anybody is pushing them high for OCing, they might want to back off a little. But I wouldn't panic if everything is stable, unless I desperately needed ports 2-5.
 

biodoc

Diamond Member
Dec 29, 2005
6,262
2,238
136
This last weekend I built a Sandy Bridge system :)

64-bit Ubuntu 10.04 LTS
Core i7 2600K
Gigabyte GA-P67A-UD4-B3
Corsair Vengeance 2X4GB RAM, DDR3-1600, 1.5 volts, 9-9-9-24
Corsair hydro H70 cpu cooler
Antec 300 case

It seems to be stable @ 4.2 GHz for now. I'll try to push it later.

I'll do some testing on F@H after the primegrid race. I won't be doing any GPU crunching on it in the near future.
 

Bradtech519

Senior member
Jul 6, 2010
520
47
91
biodoc said:
This last weekend I built a Sandy Bridge system :)

64-bit Ubuntu 10.04 LTS
Core i7 2600K
Gigabyte GA-P67A-UD4-B3
Corsair Vengeance 2X4GB RAM, DDR3-1600, 1.5 volts, 9-9-9-24
Corsair hydro H70 cpu cooler
Antec 300 case

It seems to be stable @ 4.2 GHz for now. I'll try to push it later.

I'll do some testing on F@H after the primegrid race. I won't be doing any GPU crunching on it in the near future.

Should scream. Lately I've been using a Xeon Dell Precision rig.
 

Lightflash

Senior member
Oct 12, 2010
274
0
71
With my new SB 2600K @ 4.6GHz, I am currently getting around 47k PPD according to F@H Tracker v2, running a 6901 with around a 25 min TPF. It should be done tonight and I will post the run.

I have heard it's best to just not run my GTX 580 with it; I was only gaining around 6k PPD when running both together with 8 cores and the GPU. I'm going to see if I can change to 7 cores plus the GPU, but right now it will probably kill the CPU's PPD.
 

theAnimal

Diamond Member
Mar 18, 2003
3,828
23
76
Lightflash said:
With my new SB 2600K @ 4.6GHz, I am currently getting around 47k PPD according to F@H Tracker v2, running a 6901 with around a 25 min TPF. It should be done tonight and I will post the run.

Run Linux on there and you should be nearer 60k.
 

Lightflash

Senior member
Oct 12, 2010
274
0
71
theAnimal said:
Run Linux on there and you should be nearer 60k.

Honestly, I was thinking about it; I've just never used Linux. How would you suggest I go about it? Or is there a guide here on how to set it up?

I want to be able to switch back and forth between Win7 and Linux easily as I like to game, but I usually do F@H whenever I can to help with the different projects.
 

theAnimal

Diamond Member
Mar 18, 2003
3,828
23
76
Lightflash said:
Honestly, I was thinking about it; I've just never used Linux. How would you suggest I go about it? Or is there a guide here on how to set it up?

I want to be able to switch back and forth between Win7 and Linux easily as I like to game, but I usually do F@H whenever I can to help with the different projects.

This is the easiest way:

http://www.evga.com/forums/tm.aspx?m=4464&mpage=1