
5th Annual Folding@Home Holiday Season Race thread.


What should we call the teams?

  • Comets vs Asteroids

  • Snowmen vs Yetis

  • Genomes vs Antibodies

  • Thunder vs Lightning

  • Prancers vs Dashers

  • Team 1 vs Team 2


Just out of curiosity, what kind of ppd are you getting with the 1090T@3.8?
Well, not to answer your question directly, but my 1090T @ 4 GHz gets 7-12k PPD. Right now it's doing a 6051 and getting 9840.

With the exception of bigadv units, it beats my i7s.
 
Well, not to answer your question directly, but my 1090T @ 4 GHz gets 7-12k PPD. Right now it's doing a 6051 and getting 9840.

With the exception of bigadv units, it beats my i7s.
Nice, thanks for the info. I didn't think it would be that much. If I have any funds left after this holiday season is over, I'm going to replace the X3 I'm currently using. The 1055 is more in my budget range, but I'm not much of an OC-er, and the extra $40-50 for an unlocked 1090 would simplify things considerably. Anyone want to buy an AII X3, only used for a month of folding? 😉 😛
 
I hate those 6701/6702's too! :twisted:

My 1090T w/ new MB is up and running again.

I'm keeping my OC simple this time. I'm only changing 2 parameters: CPU clock ratio (multiplier) and CPU voltage.

Default multiplier is x16 and default CPU voltage is 1.30V

multiplier   voltage   GHz   result
x16          1.30 V    3.2   stable
x17          1.30 V    3.4   stable
x18          1.30 V    3.6   unstable
x18          1.34 V    3.6   stable
x19          1.38 V    3.8   stable on SMP for 12 hours and counting
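For anyone following along, the GHz column is just the multiplier times the platform's reference clock. A minimal sketch of that arithmetic, assuming the Phenom II's stock 200 MHz reference clock (the settings mirror the stable rows above):

```python
# Clock speed = reference clock x multiplier; the Phenom II
# platform's stock reference clock is 200 MHz.
BASE_CLOCK_MHZ = 200

def cpu_ghz(multiplier: int) -> float:
    """CPU frequency in GHz for a given multiplier."""
    return BASE_CLOCK_MHZ * multiplier / 1000

# The stable settings from the table above:
for mult, volts in [(16, 1.30), (17, 1.30), (18, 1.34), (19, 1.38)]:
    print(f"x{mult} @ {volts:.2f} V -> {cpu_ghz(mult):.1f} GHz")
```

Raising only the multiplier (plus voltage when needed) leaves the HT link and memory clocks at stock, which is why this approach stays so simple.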

3.8 is where I called it good as well; it seemed like a good balance of heat/power/noise/etc.

Just out of curiosity, what kind of ppd are you getting with the 1090T@3.8?

Right now I'm getting an estimated 8600 PPD on a 6701, a little over 8 minutes per frame, so I'm pleased. Just for reference, my 965 at stock was at about 14:30 TPF, so quite a large difference.
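To put those frame times in perspective, here's a rough throughput comparison derived only from the TPF numbers quoted above (assuming the usual 100 frames per work unit; illustrative figures, not measurements):

```python
# WUs/day = minutes in a day / (TPF * frames per WU).
MINUTES_PER_DAY = 24 * 60
FRAMES_PER_WU = 100

def wus_per_day(tpf_minutes: float) -> float:
    """Work units completed per day at a given time-per-frame."""
    return MINUTES_PER_DAY / (tpf_minutes * FRAMES_PER_WU)

tpf_1090t = 8.0   # 1090T @ 3.8 GHz, ~8 min per frame
tpf_965 = 14.5    # 965 at stock, ~14:30 per frame

speedup = wus_per_day(tpf_1090t) / wus_per_day(tpf_965)
print(f"1090T turns in WUs ~{speedup:.2f}x faster than the 965")
```

The ratio reduces to 14.5/8, about 1.8x; actual PPD scales even harder once quick-return bonuses reward the faster turn-in.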


*And welcome CupCak3 to the race* :thumbsup:
 
My 1055T was a bitch to get stable. I ended up at 1.425V for CPU, 1.65V RAM, and 1.25V chipset. A far cry from my Intels, which were a cinch to overclock and run at stock voltages for the most part.
 
I think smp is still better on linux.:hmm:

hfm_mark.png


ubuntu_smp is Q6600@3GHz
mark_smp is Q6600@3GHz
bugzilla_smp is Q9550@3.8GHz
ratpack_smp is 1090T@3.8GHz
 
Here are the stats as of December 21, 2010. 18:25 UTC:

FAH-Race-dec21aa.jpg


Welcome to the race, CupCak3!

Hmmm, the teams are producing very nice results in spite of quite a few crunchers running in the PrimeGrid Race. I have not looked at the averages yet, but I suspect that the difference before, during, and after the PG race won't be that big. For now, I'll not add any handicap yet; if necessary, I'll do that after the PG race is over and 3-4 days have passed.
 
I think smp is still better on linux.:hmm:

hfm_mark.png


ubuntu_smp is Q6600@3GHz
mark_smp is Q6600@3GHz
bugzilla_smp is Q9550@3.8GHz
ratpack_smp is 1090T@3.8GHz

Does look convincing. I just noticed that I have another 6701, and it is estimating about 11k PPD. I think the difference between the last 6701 I had and this one is the gaming: I'm assuming I'm losing out on bonus points that would come with a quicker turn-in time, due to the tremendous slowdown in folding while gaming.
 
I can't get any GPU units on multiple machines (but not all of them, so it's not the network)

So bad PPD today.
 
Do I have something wrong with my setup? My CPU client yesterday only got 1400 points, and it's an i7-860 @ 3.4! I was expecting much higher than this. Right now I have the -smp -advmethods flags on. The only other program running 90% of the time was a single GPU client. I'm running these on Win7 64-bit.

Thanks!
 
In the setup...

  • did you choose the "big" option when asked about the size of the WUs?
  • how much memory did you assign?
  • did you set CPU usage to 100%?

Just a few ideas ... 🙂
 
I checked all of those and ended up installing the HFM app.

Right now it looks like that first full day may have just been some sort of fluke (hopefully). With the current WU I'm set to get ~7k PPD.
 
I can't get any GPU units on multiple machines (but not all of them, so it's not the network)
Mark, I quoted you as a starting point for my question. I am assuming you are running the GPU3 client?
Have others running the GPU3 client had times when they could not get WUs?

I am running the GPU2 client and haven't had to wait on getting a WU.
I wonder if others running GPU2 never have to wait?

Does the GPU3 give that much better PPD?
If it doesn't, maybe you could switch to GPU2 and never wait?

Just wondering 🙂

EDIT: Thanks for the stats and doing such a great job being the Conductor of the race, Peter 🙂
 
GLeeM, my understanding is that the GPU3 is "required" for the Fermi cards. I haven't had an issue getting work units (as far as I know). Other problems...... but not that. 😀

Had a network switch get temperamental............... 😡

Folding can be worse than herding cats sometimes............
 
GLeeM, my understanding is that the GPU3 is "required" for the Fermi cards. I haven't had an issue getting work units (as far as I know). Other problems...... but not that. 😀

Had a network switch get temperamental............... 😡

Folding can be worse than herding cats sometimes............

I know! I just lost power AN HOUR before I'm leaving for the airport. The network went down, so the gateway server was being finicky on the UPS, and all of my BIGADV and GPU rigs, which aren't on UPSes because they draw too much power, went down. WHY?

(and I have 2 cats, so I know how evil they can be when you need to herd them or collect them to go somewhere... and then they pee in my car)
 
/snipped a bit/
Folding can be worse than herding cats sometimes............

I agree. And I have (just now) seven cats at home ...

If all your comps are the same and healthy, then it may be "set and forget". If you have a mix of different hardware, drivers, and software, you do well to check the comps once or twice a day ...

I have a comp with a temperamental NIC ... it works perfectly for 23 hours, then quits for approx 2.5 hours, and then it works OK again. Since the GPU produces results more often than once every 2.5 hours, the sending of the results locks up the NIC and I have to restart the computer ... LoL.

This does not matter in BOINC because my cache is approx 5 days and BOINC sends the results as soon as the card is up again. And BOINC does not lock up the NIC the way F@H does.

/Petrus quits whining/
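A possible band-aid for a NIC like that is a small watchdog. This is a hypothetical sketch: the interface name, gateway address, and failure threshold are all made up, and it assumes a Linux box with `ping` and `ip` available (bouncing the interface needs root):

```python
# Watchdog pieces for a NIC that periodically locks up: ping the
# gateway, and only bounce the interface after several consecutive
# failures, so one lost ping doesn't trigger a restart.
import subprocess
import time

GATEWAY = "192.168.1.1"  # assumption: your router's address
IFACE = "eth0"           # assumption: the flaky NIC

def link_up(host: str = GATEWAY) -> bool:
    """Single ping with a 2 s timeout; True if the host answered."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            capture_output=True)
    return result.returncode == 0

def should_bounce(consecutive_failures: int, threshold: int = 3) -> bool:
    """Restart only after the link has been down for several checks."""
    return consecutive_failures >= threshold

def bounce_nic() -> None:
    """Take the interface down and back up."""
    subprocess.run(["ip", "link", "set", IFACE, "down"])
    time.sleep(5)
    subprocess.run(["ip", "link", "set", IFACE, "up"])
```

Run these from a loop (say, a check once a minute, resetting the failure counter whenever `link_up()` succeeds) or from cron; it beats rebooting the whole box every time the NIC wedges.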
 
Here are the stats as of December 22, 2010, 18:07 UTC:

FAH-Race-dec22aa.jpg


The stats show a partial recovery after the PrimeGrid race. Most PG-racers have reactivated their GPUs and their PPD is rising. It will take some time before the numbers stabilize though...

The Yetis have been crunching particularly well: they have cut the Snowmen's lead by 293K points in the past three days. The difference between the teams is now less than 2% and less than 500K points; this is not much, considering that the total production during the race has been 22 884 314 points ...

Well crunched, TeAm! 🙂
 
Mark, I quoted you as a starting point for my question. I am assuming you are running the GPU3 client?
Have others running the GPU3 client had times when they could not get WUs?

I am running the GPU2 client and haven't had to wait on getting a WU.
I wonder if others running GPU2 never have to wait?

Does the GPU3 give that much better PPD?
If it doesn't, maybe you could switch to GPU2 and never wait?

Just wondering 🙂

EDIT: Thanks for the stats and doing such a great job being the Conductor of the race, Peter 🙂
The GPU3 client is required for Fermi cards, and I have 8 Fermi cards. I also have 4 other cards; 2 are on GPU2 and 2 are on GPU3, with no real difference in PPD on the same cards.

BUT, my Fermi cards do 11,500 (460 @ 850) and 13,500 (470 @ 700), vs. my best GPU2 card, a 260 @ 655, doing less than 7000.

You can see the numbers for yourself below. All the cards below 10k PPD are 9800 GTX+ (or variants) or 260 GTX (the 260s are the 7k ones); not much difference in any of them:
fahmon_280k.JPG
 
Well, I figured out my problem; it is the 6701 WUs. I can't WAIT until I'm eligible for bonuses... too bad it won't be for another 2-3 days. 🙁
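For context on why bonus eligibility matters so much: bonus-eligible WUs are scored with a quick-return multiplier of roughly the form base x sqrt(k x deadline / elapsed), so finishing faster pays superlinearly. A hedged sketch of that shape, with made-up values for k, the base points, and the times (real values are per-project):

```python
# Quick-return bonus (QRB) shape: the multiplier grows as the
# square root of (deadline / elapsed time), never dropping below 1.
import math

def qrb_points(base: float, k: float,
               deadline_days: float, elapsed_days: float) -> float:
    multiplier = max(1.0, math.sqrt(k * deadline_days / elapsed_days))
    return base * multiplier

# Illustrative numbers only: the same WU turned in twice as fast
# earns sqrt(2), about 1.41x, the points.
fast = qrb_points(600, 2.0, 4.0, 1.0)
slow = qrb_points(600, 2.0, 4.0, 2.0)
print(f"fast: {fast:.0f}, slow: {slow:.0f}")
```

This is why a WU that stalls or crawls (like those 6701s while gaming) hurts well beyond its lost base points.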
 
Hi all. I received the following error with my new geforce 210:

[02:07:13] Folding@Home GPU Core
[02:07:13] Version 2.15 (Tue Nov 16 08:44:57 PST 2010)
[02:07:13]
[02:07:13] Compiler : Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.42 for 80x86
[02:07:13] Build host: amoeba
[02:07:13] Board Type: NVIDIA/CUDA
[02:07:13] Core : x=15
[02:07:13] Window's signal control handler registered.
[02:07:13] Preparing to commence simulation
[02:07:13] - Looking at optimizations...
[02:07:13] DeleteFrameFiles: successfully deleted file=work/wudata_03.ckp
[02:07:13] - Created dyn
[02:07:13] - Files status OK
[02:07:13] sizeof(CORE_PACKET_HDR) = 512 file=<>
[02:07:13] - Expanded 42854 -> 167707 (decompressed 391.3 percent)
[02:07:13] Called DecompressByteArray: compressed_data_size=42854 data_size=167707, decompressed_data_size=167707 diff=0
[02:07:13] - Digital signature verified
[02:07:13]
[02:07:13] Project: 11177 (Run 4, Clone 39, Gen 17)
[02:07:13]
[02:07:13] Assembly optimizations on if available.
[02:07:13] Entering M.D.
[02:07:15] + Working...
[02:07:15] Tpr hash work/wudata_03.tpr: 3245242131 1057701753 1003681263 2588649385 2193182994
[02:07:15] Working on ALZHEIMER'S DISEASE AMYLOID
[02:07:15] Client config found, loading data.
[02:07:16] Starting GUI Server
[02:07:16] Finished fah_main
[02:07:16]
[02:07:16] Successful run
[02:07:16] DynamicWrapper: Finished Work Unit: sleep=10000
[02:07:26] Reserved 0 bytes for xtc file; Cosm status=0
[02:07:26] Reserved 0 0 786430464 bytes for arc file=<work/wudata_03.trr> Cosm status=0
[02:07:26] Allocated 0 bytes for edr file
[02:07:26] Error: could not open bedfile, but going on anyway
[02:07:26] - Checksum of file (work/wudata_03.edr) read from disk doesn't match
[02:07:26] edrfile file hash check failed.
[02:07:26]
[02:07:26] Folding@home Core Shutdown: FILE_IO_ERROR


Does anyone with experience know what the problem is? I tried the older installers and console versions for the GPU, but judging by the error messages from the other versions, I guess I have to use the newest graphical interface.
 
Sorry, I wish I could help.
I would erase the work directory and start a new WU ... but that may not be the best solution. 🙁
OTOH: I seem to remember that there were problems with this kind of WU. I don't have the opportunity to search right now (I am at work), but check it out in the Folding forums!
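In case it helps, here's one way to script the "erase the work directory" suggestion. All paths are illustrative assumptions, and the file names are the queue/state files as I recall them from the classic client, so double-check yours; stop the client first, and expect to lose the in-progress unit:

```python
# Delete the client's work folder and queue files so a fresh WU
# is downloaded on the next start.
import shutil
from pathlib import Path

def reset_work(client_dir: Path) -> None:
    """Remove the work/ directory and queue files under client_dir."""
    work = client_dir / "work"
    if work.is_dir():
        shutil.rmtree(work)      # in-progress WU data
    for name in ("queue.dat", "unitinfo.txt"):  # queue/state files
        stale = client_dir / name
        if stale.exists():
            stale.unlink()

# Hypothetical install location; point this at wherever your GPU
# client actually lives before running it.
# reset_work(Path(r"C:\FAH\gpu"))
```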
 