
Folding conundrum regarding P2613

FaaR

Golden Member
I have two main rigs, one Dell XPS 720 based around a C2Quad Extreme 2.66GHz, and one home-built Ci7 2.66GHz system.

Both rigs are currently overclocked: the C2Q runs at 3.2GHz with 800MHz DDR2 (because one of my 1066 sticks went on the fritz and I had to slot in 800MHz memory to replace it), and the i7 at ~3.4GHz with 1600MHz DDR3 (running at 1300, due to the RAM becoming monstrously hot with all 6 sockets populated).

Both rigs run four separate CPU folding clients each, plus two NV G80 GPU clients on the C2Q box and two ATI 4890 clients on the i7, but the GPU clients aren't the issue here. Btw, I can't be arsed faffing with the SMP client; it's terribly user-unfriendly to install and configure, not that setting up multiple single clients is much better... :/

Anyway... The C2Q system runs four copies of project 2613, and FahMon reports an ETA of 4-9 days (!) to complete them all, giving truly dismal PPD figures of around 70-175... The i7 runs a bunch of different projects, one of which is a 2613 that is projected to complete in 1 1/2 days, with a PPD of about 570 for that particular project. That's WITH me gaming and doing stuff on that rig at the same time; the C2Q system is pretty much only folding these days.

So...WTH?!

Is the 2613 project so demanding on the CPU that running 4 of them totally thrashes the CPU caches and bottlenecks the FSB and RAM, or what's goin' on here?! I folded on both of these PCs before summer (and then took a break due to the gigantic heat output), and the C2Q box ran just fine. It was a little slower than the i7, but nothing worth writing home about, certainly not this staggering time difference.

I did check out P2613 on the FaH site: it's simulating some 308,000 atoms and is by far the largest currently running project from what I can tell, but this is extreme IMO... Task Manager doesn't show any weirdness either. 4x FahCore_78.exe consume between 17 and 26% CPU each, a FahCore_11 and a FahCore_14 consume around 0-6% (these should be the NV GPU clients), and then the usual list of background junk, ALL at 0%.
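For what it's worth, FahMon's PPD figure is basically just the WU's credit divided by the projected days to finish, so 4-9 day ETAs map straight onto that 70-175 PPD range. A quick sketch of the arithmetic (the ~700-point credit for P2613 is my assumption for illustration; check the project summary page for the real value):

```python
def ppd(credit_points: float, eta_days: float) -> float:
    """Points per day if the WU finishes in eta_days (FahMon-style estimate)."""
    return credit_points / eta_days

CREDIT = 700.0  # assumed credit for P2613 -- illustrative only
for eta in (9.0, 4.0, 1.5):
    print(f"ETA {eta:>4} days -> {ppd(CREDIT, eta):6.1f} PPD")
```

With those numbers a 9-day ETA lands at ~78 PPD and a 4-day ETA at 175 PPD, right in the reported range; the point is that the dismal PPD is just the long ETA restated, not a separate mystery.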
 
Yes, the 2613 does better with more cache.

I would guess that with four of them they would not do as well as one of them plus three other types of WUs.
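A back-of-the-envelope on why cache matters here: with ~308,000 atoms, even a modest guess at per-atom simulation state blows well past a 4MB L2, so four copies contending for the same caches (and the shared FSB) will hurt far more than one copy alongside smaller WUs. The 100 bytes/atom figure below is purely illustrative, not FahCore's actual memory layout:

```python
# Rough working-set estimate for one P2613 client.
ATOMS = 308_000
BYTES_PER_ATOM = 100          # assumed: positions, velocities, forces, types
L2_BYTES = 4 * 1024 * 1024    # 4 MB L2 on the older C2Q revisions

working_set_mb = ATOMS * BYTES_PER_ATOM / 2**20
print(f"~{working_set_mb:.0f} MB working set per client "
      f"vs {L2_BYTES / 2**20:.0f} MB of L2")
```

Even if the real per-atom footprint is a fraction of that guess, one client alone overflows the cache, and four of them evict each other's data constantly.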
 
Thank you for your reply. I was fearing I was going nuts here or something. 🙂

I'll have two WUs on the C2Q rig done in about 3 days of folding time. Hopefully I won't get another batch of 2613s then - ugh! - so that should speed up the two remaining WUs also (one of which still says over 8 days ETA, despite being 25% complete already...)

If cache's a factor on this particular project then I'm somewhat out of luck I suppose because unfortunately I have the oldest revision of the C2Q processor, with 4MB L2 per chip. I did buy two new Corsair EPP-compatible 1066 sticks yesterday but haven't bothered to pop them in yet. I don't expect it to make much of a difference though, it's mostly for my own peace of mind. I hate half-measures in my PCs, like mixing different types of memory and so on. 🙂

With such big WUs processing so slowly I feel like I'm back in the 1990s, running SETI@Home on my now almost completely junked AMD K6 rig, heh.
 
Originally posted by: FaaR
Anyway... The C2Q system runs four copies of project 2613, and FahMon reports an ETA of 4-9 days (!) to complete them all, giving truly dismal PPD figures of around 70-175...

Looking at this again ... it is not right!

Is there something else using CPU cycles? Check Task Manager to see that each client is getting 25%.

That C2Q has to do better than a P4 at 2.3GHz.
 
Each client right now varies between, say, 22-26%, due to the two GPU clients also wanting a bit of CPU time.

It's a bit more even today than it was when I started the thread; the slowest client only got 17% then - that's the client that was estimated to take ~9 days. Now the projects are between 30-40% complete and folding at ~177 PPD for the fastest and ~123 PPD for the slowest, according to FahMon.

It's an old version of FahMon now though, downloaded some time in May this year I think; maybe its calculations are a bit off or something. 😛

I did bump RAM speed back up to 1066MHz. Had to take out my new sticks of RAM though because the machine refuses to POST anymore with 4 EPP-compatible DIMMs installed, so I'm down to 2GB. It doesn't appear to hold me back though, there's still plenty of free RAM left even with 6 clients going (over a gigabyte right now actually).

Seems my mobo or chipset got slightly busted that one time I experimented and set the DRAM command rate in the BIOS to 1T instead of 2T. That was what killed that DIMM, and now it seems the board will only work with 2 sticks at 1066... I ran Memtest86+ with 2 DIMMs in either pair of sockets: worked fine, no errors. It just won't work with 4 tho. 🙁
 
I don't know, it just seems like the C2Q would do better.

Have you considered running "Notfred's VM Appliance"? It is a virtual machine that runs the Linux SMP client on two cores and gives very good PPD. It seemed easier to set up than the Windows SMP client. Its result upload size is quite a bit bigger than Win SMP's though, so if you have an upload limit you might have to watch it.
 
Thank you Gleem for taking time to try to solve this riddle.

Unfortunately, the entire thread is now moot, since goddamned Windows Vista decided to crash when going into hibernation overnight, and upon bootup F@H decided to simply toss away all the work already done. So I deleted all the work folders and the queue.dat, and now I have a bunch of different projects running again instead, giving ~75-100% higher PPD than the previous (cursed) batch of WUs...

I don't understand why it even bothers to do checkpoint saves along the way if it isn't going to resume from the last bloody checkpoint! That's the better part of a WEEK (real time, as my PCs don't run 24/7) of lost folding down the drain, just like that.
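The frustration is fair: the standard way to make checkpoints survive a crash is to write them atomically (temp file, then rename) and only fall back to a fresh start when nothing readable is left on disk. A generic sketch of that pattern (this is NOT how FahCore actually implements it, just the common technique):

```python
import json, os, tempfile

def save_checkpoint(path: str, state: dict) -> None:
    # Write to a temp file first, then rename over the target: a crash
    # mid-write leaves the previous checkpoint intact, never a corrupt file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)

def load_checkpoint(path: str, fresh_state: dict) -> dict:
    try:
        with open(path) as f:
            return json.load(f)   # resume from the last good checkpoint
    except (OSError, ValueError):
        return fresh_state        # restart only if nothing usable exists
```

Done this way, a hibernate crash would at worst cost the work since the last checkpoint, not the whole WU.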

And it's not even the first time it's done this to me after one of my PCs hangs or crashes. The F@H client really is a piece of crap; makes me wonder why I even bother. 🙁
 