OK, Who Has the Biggest OGR Stub?

RC-
By default, the client will flush to Dnet's keyservers if the proxy is down. This is a way around that.

[networking]
autofindkeyserver=no
keyserver=jator.2y.net
nofallback=true

Add the 'nofallback=true' line to your ini file; that way the client will just switch over to RC5 when you run out of OGR work instead of falling back to the main keyservers. I also use 'frequent-threshold-checks=3' to keep my work flushed and my in-buffers full. If you are on dial-up and use passive dial-up detection, this works the same way.

[buffers]
checkpoint-filename=cp
frequent-threshold-checks=3
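
(Put together, those two fragments sit in the ini like this; jator.2y.net is just the proxy this thread uses, so keep whatever keyserver and checkpoint filename you already have:)

[networking]
autofindkeyserver=no
keyserver=jator.2y.net
nofallback=true

[buffers]
checkpoint-filename=cp
frequent-threshold-checks=3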

Very nice OGR herd BTW. Don't be surprised if there is an OGR-25 stampede in the next few days though. Mika's proxy did not flush all day, and half the team uses it either directly or through the proxies below it. Also, many machines are finishing the OGR-24 work that has been in the buffers and are switching to OGR-25. That is the case here, as over half of my machines just went to '25 today.

viz


 
viztech:

My ini file -

[misc]
project-priority=OGR,DES=0,CSC=0,RC5=0

[networking]
autofindkeyserver=no
keyserver=jator.2y.net:2064

[logging]
log-file-limit=1000
log-file=ta_ogr.log
log-file-type=fifo

[rc5]
randomprefix=192

[buffers]
checkpoint-filename=checkpoint.dat
frequent-threshold-checks=2
threshold-check-interval=0:30

I will compare my current ini file with what you have recommended.
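
Lining the two up, the real deltas look to be just these ('-' is what I have now, '+' is what RC- recommends; the differing checkpoint filenames are cosmetic):

[networking]
+nofallback=true

[buffers]
-frequent-threshold-checks=2
+frequent-threshold-checks=3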

Yes, I know there are a lot of very big herds within TA. The only reason I placed fifth is that OGR-25 has only recently started. Thought I might as well take advantage of the situation. 🙂

I would need at least 25 1 GHz Athlons to compete with some of the people in RC5 TA. I have access to several Sun machines at work, but unfortunately most of the Sun boxes do not perform very well with RC5 or OGR.
 
Hehe, you did get in on the ground floor of OGR-25 though, didn't you!

Aren't those Sun boxes good at Seti? I think one or more of the TA Seti people run it over the weekend only so as not to interfere with regular work.

viz
 
[Aug 06 17:41:38 UTC] Completed OGR stub 25/1-10-4-13-6-18 (465,724,253,933 nodes)
0.00:11:17.60 - [8,710,086.56 nodes/sec]
[Aug 06 17:41:38 UTC] Loaded OGR stub 25/1-10-4-13-6-20
[Aug 06 17:41:38 UTC] Summary: 2 OGR packets (18.06 Gnodes)
0.00:35:27.38 - [8.49 Mnodes/s]
[Aug 06 17:41:38 UTC] 23 OGR packets (23 work units) remain in buff-in.ogr
[Aug 06 17:41:38 UTC] 1 OGR packet (1 work unit) is in buff-out.ogr
[Aug 06 17:42:04 UTC] The perproxy says: "Jator's Proxy"
[Aug 06 17:42:04 UTC] Sent 1 OGR packet (1 work unit) to server
 
They just keep getting longer and longer...
25/1-7-20-16-17-4 "134,981,339,965"
25/1-7-20-16-17-5 "168,891,707,361"
 
Well, here's the update on my monster 4-stub...

After letting it run unmolested for the entire weekend, it has completed a whopping 4.5% of the stub!! 😉

So far this thing is averaging 1% a day ... Anybody think I might end up with a big one?

🙂

JHutch
 
Yeah, I'd say it's gonna be pretty big.
If you've been running four and a half days on it so far, that makes about 1.5 Teranodes 🙂
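
Rough math, assuming a sustained rate somewhere around 3.9 Mnodes/s (comparable to the rates being posted in this thread):

4.5 days x 86,400 s/day = 388,800 s
388,800 s x ~3.9 Mnodes/s ≈ 1.5 Tnodes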
 
Here's my biggest so far:
[Aug 07 11:10:15 UTC] Completed OGR stub 25/1-3-14-7-8-19 (407,143,318,559 ... 0.17:25:36.96 - [2,304,219.31 nodes/sec]
 
Well, I gave up on the 2 super stubs that were in progress. OGR-25 4-mark stubs are something else! They were working on Athlon 600s that are on 50-60 hours a week. One was 2.1% complete, and the other 3.1%. I deleted them and downloaded 6-mark stubs to work on.
My P3-500 is about 20% done with the 5-mark stub, so I will follow through on that one.

My largest to date: 25/1-2-11-16-9-24,280809324513,1,1,8010
This is a far cry from a couple posted so far, but what the hay.

viz
 
My 6 point ones have been around the 200-300+ Gnode size lately. What were they thinking with the 4/5-point ones??? I still remember that after one hour's work, 4.1% was still 4.1%. It would have taken weeks with that PII 333.
 
Two of my remote machines were stuck with some of the 5 point OGR-25 stubs. One of them took around 1.4 Tnodes and the other one... well, take a look. 🙂

[Aug 08 06:23:51 UTC] Completed OGR stub 25/1-5-8-34-21 (1,510,171,234,453 nodes)
4.20:59:56.35 - [3,585,432.79 nodes/sec]

Needless to say, I was able to get out to the site and "cleanse" the buff-in's so they're working on nice NORMAL stubs now. 😉
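
For anyone else needing to do the same cleanse, a rough sketch (stop the client first; these are the stock buffer filenames from the logs above, and the fetch/flush switches are in 'dnetc -help'):

dnetc -flush        <- send anything finished in buff-out.ogr up to the keyserver
del buff-in.ogr     <- toss the unstarted monster stubs (rm on unix; zap the checkpoint file too)
dnetc -fetch        <- pull down a fresh load of normal-sized stubs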

-Brian
 
Well, here's the one-week monster stub update! 😉

My OGR-25 4-stub, after working for one full week on a P3/600, is at a whopping 6.0%!!!!

That should put this stub at over 2 Tnodes currently and climbing (every three days of work is about 1 Tnode on this machine).
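
Back-of-the-envelope, those numbers put a size on the whole thing:

7 days x (1 Tnode / 3 days) ≈ 2.3 Tnodes done so far
2.3 Tnodes / 0.060 complete ≈ 39 Tnodes total, or roughly 16 weeks of crunching on this P3/600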

On a side note, I found a remote machine working on a 5-stub. It has been working on it since August 3, so I'm gonna let it finish. It could be a monster, too!

Later,
JHutch
 
I didn't see that thread until you pointed it out. It is disappointing, to say the least. But, as I said in that thread, I'll keep this one machine crunching on my monster stub just for gee-whiz factor. The work is valid, they just don't have the means of having the work show up in the stats...

JHutch
 
Are you sure the work is valid? If they can't get it into the stats I'd imagine it might also have a hard time working with their keymaster.

Edit: NM, I found your post in the other thread.
 
5 Stub
[Aug 13 10:05:29 UTC] Completed OGR stub 25/1-5-9-29-16 (2,820,335,138,127 nodes)
9.16:49:26.73 - [3,364,885.54 nodes/sec]
 
Well, I gave up on the 4-stub monster ... After 2 full weeks, it was still well under 10%. I figured at the rate it was going, I'd see the end of it sometime next year ... 😉 So, I pulled it into a backup file, flushed any remaining big stubs from the buffer and started running normal-sized stubs ...

JHutch
 
I don't blame you for dumping that stub, JHutch. I was wondering what had happened to you and this monster, so now I know.

Thanx for the report.
viz
 