
OK, Who Has the Biggest OGR Stub?

Viztech

Platinum Member

Thunder said that he had one over 111 Gnodes.
My dualium 100 is working on a set of twins that promise to break 110 Gnodes each. I'll know more in a couple of days.

viz
 
24/5-14-12-4-2 (36,115,281,078 nodes)
24/10-5-9-8-21 (38,329,237,790 nodes)
24/10-6-7-8-20 (38,462,092,407 nodes)
24/10-6-7-12-14 (38,987,836,413 nodes)
24/10-5-9-8-20 (39,819,345,713 nodes)
24/5-10-3-9-24 (46,927,961,917 nodes)
24/5-10-3-9-23 (47,379,603,697 nodes)
24/10-6-7-12-15 (45,028,274,598 nodes)
24/10-6-7-5-3 (45,567,364,733 nodes)
24/10-6-7-8-19 (46,726,496,463 nodes)
24/10-6-7-12-9 (47,468,306,746 nodes)
24/5-10-3-9-21 (51,820,891,286 nodes)
24/10-6-7-5-9 (58,481,405,007 nodes)
24/5-10-3-9-20 (59,330,166,890 nodes)
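For anyone who wandered in without context: an OGR stub fixes the first few marks of a candidate Golomb ruler, and the node counts above are how big the remaining search tree turned out to be. A minimal sketch of the Golomb property (all pairwise differences distinct), assuming the stub digits are the ruler's first differences — that reading is my interpretation, not something from the client docs:

```python
from itertools import combinations

def is_golomb(marks):
    """True if all pairwise differences between marks are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

def stub_to_marks(stub_diffs):
    """Turn a stub like 24/5-14-12-4-2 (first differences) into mark positions."""
    marks, pos = [0], 0
    for d in stub_diffs:
        pos += d
        marks.append(pos)
    return marks

print(is_golomb(stub_to_marks([5, 14, 12, 4, 2])))   # the 24/5-14-12-4-2 stub
print(is_golomb([0, 1, 2]))                           # 1 appears twice as a difference
```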
 
Back in February, when OGR first started (when stubs started with four numbers instead of five), the first stub I had was 880 Gnodes.

The biggest I've had this time was around 40 or 50 Gnodes, though.
 
I had a P3 450 munching on a 102 Gnode stub for about 10 hours. I've noticed quite a few in the 50-60 Gnode range also.
 
I don't know about big (I think the largest I've had is in the 80Gnodes range), but I've had some really SMALL stubs ...

The smallest I could find was 121,982,689 nodes and it took my computer all of 30 seconds to finish it... 🙂

I actually had a whole string of short ones ... I went through my 25 block buffer in about 10 minutes ...

🙂

JHutch
 
Yeah, they really vary in size. Based on my initial ones, I downloaded 500 thinking it'd finish on my old Celery in less than a week. After some more reasonable averaging, it seems that it might take 3 weeks to finish.

So, it means I have to scrap probably 425 of the stubs and just download 25 stubs per day so I don't waste time cracking stuff that has already been recycled.
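The "reasonable averaging" here is just nodes cracked per day divided by average stub size. A quick sketch of that buffer-sizing arithmetic (the 2.5 Mnodes/s and 8.5 Gnode figures are illustrative guesses, not numbers from the client):

```python
def stubs_per_day(rate_mnodes_per_sec, avg_stub_gnodes):
    """How many average-size stubs a machine clears per day at a given rate."""
    nodes_per_day = rate_mnodes_per_sec * 1e6 * 86_400   # seconds in a day
    return nodes_per_day / (avg_stub_gnodes * 1e9)

# An old Celeron doing ~2.5 Mnodes/s against ~8.5 Gnode stubs:
print(f"{stubs_per_day(2.5, 8.5):.1f} stubs/day")   # roughly a 25-stub daily buffer
```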
 
Please post your biggest OGR stub and what it was. You can just cut and paste from the log the way I did. Thanks.
 
I got all small stubs.



<< [Jul 27 01:42:04 UTC] Loaded OGR stub 24/6-2-22-26-11 (44.10% done)
[Jul 27 01:42:04 UTC] Summary: 16 OGR packets (140.16 Gnodes)
0.14:59:12.66 - [2.59 Mnodes/s]
>>



Average 8-9 Gnodes per stub
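If you'd rather dig your biggest ones out of the log than eyeball it, here's a small sketch. The regex assumes the "Completed OGR stub ... (N nodes)" line format shown in the posts here; feed it whatever your client's log file contains:

```python
import re

# Matches lines like:
#   [Jul 27 09:54:04 UTC] Completed OGR stub 24/3-7-4-13-5 (92,681,714,226 nodes)
STUB_RE = re.compile(r"Completed OGR stub (\S+) \(([\d,]+) nodes\)")

def biggest_stubs(log_text, top=10):
    """Return the top-N (node_count, stub) pairs found in the log, biggest first."""
    found = []
    for m in STUB_RE.finditer(log_text):
        stub, count = m.group(1), int(m.group(2).replace(",", ""))
        found.append((count, stub))
    return sorted(found, reverse=True)[:top]

sample = "[Jul 27 09:54:04 UTC] Completed OGR stub 24/3-7-4-13-5 (92,681,714,226 nodes)\n"
print(biggest_stubs(sample))
```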
 
Mine are the biggest?

24/2-1-12-7-10 (112,100,361,411 nodes)
24/2-1-12-7-11 (106,911,171,732 nodes)
24/2-13-3-8-17 (86,217,770,949 nodes)
24/4-1-9-8-15 (80,133,172,918 nodes)
 
It looks like you're the winner so far, biohazard.
Those twins I was talking about earlier from my dualium were only about 60G. The crunchometer doesn't track the progress accurately on OGR, so that had me fooled.

viz
 
I didn't turn on the log, but on my dual machine one of my CPUs just finished a stub with 121.5 Gnodes. It took almost 19 hrs to finish. :Q
 
Here's my 10 biggest:

117,588,996,579
109,443,660,462
103,369,339,917
102,464,516,100
102,019,989,646
101,909,970,846
98,030,336,677
97,679,586,869
95,272,135,482
95,080,812,499

 
[Jul 27 09:54:04 UTC] Completed OGR stub 24/3-7-4-13-5 (92,681,714,226 nodes)
0.00:43:22.79 - [2,486,155.69 nodes/sec]

that is my biggest so far.
 
chris@kyarsgaard.org,24/2-6-9-7-13,117588996579,1,1,8010
chris@kyarsgaard.org,24/2-6-9-7-12,109443660462,1,1,8010
chris@kyarsgaard.org,24/2-6-9-11-13,103369339917,1,1,8010
chris@kyarsgaard.org,24/2-6-9-7-11,102464516100,1,1,8010
chris@kyarsgaard.org,24/1-6-9-13-4,102019989646,1,1,8010
chris@kyarsgaard.org,24/4-2-12-13-7,101909970846,1,1,8010
chris@kyarsgaard.org,24/3-9-4-11-14,98030336677,1,1,8010
chris@kyarsgaard.org,24/2-3-10-4-21,97679586869,1,1,8010
hutch@modex.com,24/2-7-3-13-15,95272135482,1,1,8010
chris@kyarsgaard.org,24/2-11-5-7-15,95080812499,1,1,8010
 
I finally found a couple worth mentioning.
24/1-6-12-14-9,84231325932,1,1,8010
24/4-2-16-1-14,83806951243,1,1,8010

These actually were done by 2 of my fastest machines. For the longest time, my fast machines were getting the little ones.

viz

 
Well, I just found my fastest machine working on one of the new OGR25 stubs with a 4-node start (the experimental ones they sent out a few days ago). Since it has already been working on it for about 12 hours, I'm gonna let it finish to see just how big it is ... only at 2.5% so far!!! 🙂

So far, I figure it should be at least 150-175 Gnodes (since this machine cracks at 4Mnodes/sec when nothing else is happening). Hopefully it will be finished when I get back to work in the morning.

I'll let you guys know how big this guy gets.

JHutch

FYI, the stub is 25/1-24-12-29


 
RC5, you could be right ...

I let it run overnight and it finished a whole whopping 0.2% more of the stub!! 🙂

Now it has become a curiosity thing. I have to finish this stub, just to see how long it really is!

Rough guess puts it at 345.6 Gnodes (if it ends RIGHT NOW). Since it is showing just over 2% done, that tells me this thing could easily reach the Tnode range before it's finished ...

JHutch
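JHutch's rough guess is just rate × elapsed time, then divided by the fraction complete to extrapolate the whole stub. A sketch of that arithmetic (the 4 Mnodes/s, 24 hours, and ~2% figures are pulled from the posts above, not measured):

```python
def estimate_total_gnodes(rate_mnodes_per_sec, elapsed_hours, fraction_done):
    """Extrapolate a stub's total size: Gnodes done so far / fraction complete."""
    done_gnodes = rate_mnodes_per_sec * elapsed_hours * 3600 / 1000  # Mnodes -> Gnodes
    return done_gnodes / fraction_done

# ~4 Mnodes/s for 24 hours is 345.6 Gnodes done; at ~2% complete that
# puts the whole stub well into the Tnode range.
print(estimate_total_gnodes(4.0, 24, 0.02))   # roughly 17,000+ Gnodes, i.e. ~17 Tnodes
```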
 
Heck, it'll probably be a lot bigger than that. That node you're working on could easily be a teranode or more. Like I said earlier, I had an OGR-24 stub with the 4-node start that was 880 Gnodes. And, theoretically (I think) an OGR-25 stub with the same 4-node start would be 25 times as big.

I'm really curious as to how big that thing'll be 🙂
 
I hope you have checkpoints enabled, just in case...
You don't wanna lose that much work should the system fail 😉
 
This one is on a dual Ppro 200

[Aug 03 00:43:42 UTC] Loaded OGR stub 25/1-5-41-18 (2.10% done)
[Aug 03 00:43:44 UTC] Loaded OGR stub 25/1-5-45-13 (2.10% done)
[Aug 03 00:43:45 UTC] Summary: 109 OGR packets (1.90 Tnodes)
9.01:50:21.01 - [2.43 Mnodes/s]


After 12 hrs

[Aug 03 13:03:23 UTC] Automatic processor detection found 2 processors.
[Aug 03 13:03:24 UTC] Loading crunchers with work...
[Aug 03 13:03:25 UTC] Loaded OGR stub 25/1-5-41-18 (2.10% done)
[Aug 03 13:03:28 UTC] Loaded OGR stub 25/1-5-45-13 (2.10% done)
[Aug 03 13:03:28 UTC] Summary: 109 OGR packets (1.90 Tnodes)
9.14:10:03.76 - [2.30 Mnodes/s]
[Aug 03 13:03:29 UTC] 91 OGR packets (91 work units) remain in buff-in.ogr
[Aug 03 13:03:30 UTC] 0 OGR packets (0 work units) are in buff-out.ogr
[Aug 03 13:03:31 UTC] 2 crunchers ('a' and 'b') have been started.

This is a Celly @450

[Aug 02 15:38:39 UTC] Loaded OGR stub 25/1-5-40-2 (7.10% done)
[Aug 02 15:38:39 UTC] Summary: 109 OGR packets (1.37 Tnodes)
5.02:24:41.81 - [3.12 Mnodes/s]

[Aug 03 09:31:13 UTC] Loaded OGR stub 25/1-5-40-2 (7.10% done)
[Aug 03 09:31:13 UTC] Summary: 109 OGR packets (1.37 Tnodes)
5.02:24:41.81 - [3.12 Mnodes/s]

After 22 hrs


[Aug 03 13:03:42 UTC] Automatic processor detection found 1 processor.
[Aug 03 13:03:42 UTC] Loaded OGR stub 25/1-5-40-2 (7.20% done)
[Aug 03 13:03:42 UTC] Summary: 109 OGR packets (1.37 Tnodes)
5.02:24:41.81 - [3.12 Mnodes/s]
[Aug 03 13:03:42 UTC] 91 OGR packets (91 work units) remain in buff-in.ogr
[Aug 03 13:03:42 UTC] 0 OGR packets (0 work units) are in buff-out.ogr
[Aug 03 13:03:42 UTC] 1 cruncher has been started.
...R

Lost this when a Win 95 machine locked up (if you can believe that? 😀)


[Aug 02 11:58:36 UTC] Loaded OGR stub 25/1-5-40-4
[Aug 02 11:58:36 UTC] Summary: 116 OGR packets (1.85 Tnodes)



 