
Formula Boinc Sprints 2018

I expressed concern in the weekly stats that our 4th-place victory could be due to AnandTech having 50+ 'permanent' SETI users helping us do so well in this Sprint. But it appears Francophone, the Knights, and USA, who were runners-up behind us this Sprint, have a strong SETI presence in the Marathon, much superior to our TeAm's in fact. I am hoping this means they only did as well as they did because their teams also have members who only run SETI.
 
@yodap, @HK-Steve, @bill1024, congratulations to you for making it past some SETI@Home heavyweights and winning yourselves a medal!
So, XS are competing now too; I don't recall them doing so during last season.

Thanks, this was a strange one. We were first after the first 2 updates, then the bottom fell out and we dropped to 8th.
Congrats on 4th overall, that's huge. Just think of all the gold you could get in our league.😉

I think [H] will be too much for us, but don't tell coleslaw that.😛 They were able to recruit a miner, but that had nothing to do with the outcome. They are strong without him.

Alright, on to the next one.
 
atp1916 is picking things up quickly and is finding out all my warnings are coming true when it comes to a rig designed for mining vs. one designed for DC. He is now seeking out some better CPUs so that he can feed the cards better. Then he will address whether they are bottlenecked by the riser cables and slower PCIe slots. We too have some users that are SETI-only, but they weren't the meat of our production this time. We have some really solid players who are pretty active this year and can really put up solid points. The return of Phoenicis has really helped, due to both his ability to bring high-end hardware and his being really versed in the DC projects. There's fastgeek and his army, along with skillz's smaller army and recruiting ability. ChristianVirtual has given us some great tools for reference. There are too many people to really give a shout-out to, but we definitely have a pretty good team this year. I just wish we had some of the folks that got bumped to League 1 back for some good competition. XtremeSystems may give us a run in some of the sprints, but I honestly don't know if they are truly prepared for the long haul.
 
Then he will address whether they are bottlenecked by the riser cables and slower PCIe slots.
I got curious and wanted to check what amount of PCIe bandwidth SETI@Home uses... but it's Maintenance Tuesday right now. 🙄

May depend a lot on the particular SETI GPU application version and OS though. Curiously, I am seeing a lot less PCIe traffic with Folding@Home's fahcore_21 on Linux now than I remember having seen on Windows (with Pascal cards in v3 x16...x8 slots). I wonder if the Windows application + driver stack loads and unloads the payload all the time, while the Linux stack doesn't.

Edit:
I rebooted the very same PC into Windows and re-ran Folding@Home. I used "nvidia-smi dmon -s cput -d 5" on Linux and on Windows, and indeed the RX/TX values are ~30x/~10x higher on Windows than on Linux.
 
Keep in mind an x8 slot should be sufficient for pretty much all projects. It is when you get into using x1 slots that you will probably see the bottleneck, especially with high-end GPUs running multiple work units each. Since some projects like a full CPU thread per work unit, I'm sure there will be some latency issues, as there are only so many data lanes overall.

Edit: he is running 8 GPUs per rig
 
I didn't save yesterday's measurements into a log, but IIRC the bus usage of F@H/Windows was somewhat above the equivalent of PCIe v3 x4. That of F@H/Linux was generally below the equivalent of PCIe v1 x1, though there may have been rare short peaks above this. Cards measured were 1080Ti. Time permitting, I'll check on SETI@Home later.
 
Here are measurements of SETI@Home's PCIe bus utilization, taken from 3x GTX 1080Ti in PCIe v3 x16/ x16/ x8 slots, obtained with nvidia-smi with 1 second reporting period, running for over an hour on Windows and on Linux, respectively.

The numbers are: average / 90%-quantile / peak.
RX = PCIe reception, TX = PCIe transmission

Windows 7 Pro 64bit, Nvidia driver 387.92, SETI@home v8 8.22 opencl_nvidia_SoG
RX: 920 / 1,400 / 1,800 MB/s
TX: 160 / 220 / 520 MB/s

Linux Mint 64bit, Nvidia driver 384.111, setiathome_v8 8.01 cuda90
RX: 340 / 520 / 1,400 MB/s
TX: 80 / 110 / 2,100 MB/s
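The average / 90%-quantile / peak summary above can be reproduced from the raw 1-second samples with a few lines of Python. A minimal sketch; the RX values below are made-up illustrative samples, not the actual log:

```python
# Summarize per-second bandwidth samples as average / 90%-quantile / peak,
# matching the "avg / 90% / peak" format used in the measurements above.

def summarize(samples):
    """Return (average, 90%-quantile, peak) of a list of MB/s samples."""
    s = sorted(samples)
    avg = sum(s) / len(s)
    p90 = s[int(0.9 * (len(s) - 1))]  # nearest-rank style 90th percentile
    return avg, p90, s[-1]

# Hypothetical RX samples in MB/s (stand-ins for nvidia-smi output):
rx = [920, 1100, 1400, 800, 1800, 700, 950, 1300, 600, 1200]
avg, p90, peak = summarize(rx)
print(f"RX: {avg:.0f} / {p90} / {peak} MB/s")  # -> RX: 1077 / 1400 / 1800 MB/s
```

Feeding it an hour's worth of samples per card reproduces the three-number summaries above.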

AFAIR, typical cryptocoin mining rigs use PCIe v2 x1 risers, which have a 500 MB/s data rate per direction. That would be inadequate for SETI@Home according to my numbers, on Windows as well as on Linux, at least for big GPUs.
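The 500 MB/s figure for a v2 x1 riser follows directly from the link's transfer rate and encoding overhead (8b/10b for PCIe v1/v2, 128b/130b for v3). A quick back-of-envelope check:

```python
# PCIe per-direction bandwidth per lane, from the published specs:
# (transfer rate in GT/s, encoding efficiency)
GEN = {
    1: (2.5, 8 / 10),    # v1: 2.5 GT/s, 8b/10b
    2: (5.0, 8 / 10),    # v2: 5 GT/s, 8b/10b
    3: (8.0, 128 / 130), # v3: 8 GT/s, 128b/130b
}

def mb_per_s(gen, lanes):
    """Theoretical MB/s per direction for a given PCIe generation and lane count."""
    gt, eff = GEN[gen]
    return gt * 1e9 * eff / 8 / 1e6 * lanes  # GT/s -> bits/s -> MB/s

print(mb_per_s(2, 1))   # -> 500.0  (a v2 x1 mining riser)
print(mb_per_s(3, 16))  # -> ~15754 (a v3 x16 slot)
```

So a v2 x1 riser sits well below even the Linux RX average measured above, let alone the Windows numbers.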

I posted measurements with Folding@Home in the other thread.
 
As usual, amazing work Stefan! Just another prime example of how you cheat in these Sprints, PrimeGrid, and any other DC task you set your incredible, analytic mind to. 😀

Edit: And thanks for helping me learn more of my native language, lol.

From Wiki:
In statistics and the theory of probability, quantiles are cut points dividing the range of a probability distribution into contiguous intervals with equal probabilities, or dividing the observations in a sample in the same way. There is one less quantile than the number of groups created.
 
Thanks. ... The 90 % quantile or 0.9 quantile is also known as the 0.9 fractile or 90th percentile. As the definition you quoted says, it means in this context that 90 % of all observed values were below it, and 10 % of all observed values were above it. I included this percentile here because my (unproven) thought was:
  • The bus should provide a lot more than just the average bandwidth, because the application requires bus bandwidth in bursts.
  • On the other hand, if the bus is sized such that it can support even the rare and brief peaks of bus utilization, then it is over-engineered.
  • But if the bus is sized such that it can carry the application's needs for, say, 90 % of the time, application performance is probably almost as good as if the bus was sufficient all the time.
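The 0.9 quantile in the quoted definition's terms: cutting the observations into 10 equally probable groups produces 9 cut points, and the last one is the 90th percentile. Python's standard library illustrates this directly:

```python
# Deciles cut the data into 10 equally probable groups via 9 cut points;
# the last cut point is the 90th percentile.
import statistics

samples = list(range(1, 101))  # 1..100, a stand-in for bandwidth samples
deciles = statistics.quantiles(samples, n=10)
p90 = deciles[-1]
print(len(deciles), p90)  # 9 cut points; p90 lands near 90
```

With real per-second bandwidth samples instead of the toy sequence, this yields exactly the 90%-quantile column reported in the measurements.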
 
Thanks StefanR5R, I remember an [H] member posted some numbers on this for F@H before, but I did not want to dig through the forums to find the specific post again. And I don't think he went so far as getting x1 numbers. I've been curious for a while how much "efficiency" is lost using them. However, I don't have any top-of-the-line cards to play with to get the numbers myself.
 
Less than 8 hrs to go (4 April 2018, 21:00 UTC) until the 2nd FB Sprint project announcement.
Get your browser and rig/farm ready.
 
Just as my Primegrid output is ramping up, oh well...
Assuming this is a CPU sprint, I plan to leave a couple of computers with AVX on PrimeGrid. But it sure is hard to plan when you don't know what the project will be.
 
Long tasks, easy bunkering, seem to be plenty of tasks as well. We will move up in the Marathon on this one as well (most likely). No Wingman, so instant scoring.
 
It seems my desktop Athlon has throttled quite a bit, even though my CPU and VRM temps look very cool. And by the look of it, my Athlon has a similar duration per task to my Core2 Duo laptop, which worries me.
My theory is that due to the nature of CMT, some part of the core is taking a big performance hit. So, I need some validation from anybody here who also uses a Bulldozer CPU or one of its derivatives.
 
It seems my desktop Athlon has throttled quite a bit, even though my CPU and VRM temps look very cool. And by the look of it, my Athlon has a similar duration per task to my Core2 Duo laptop, which worries me.
My theory is that due to the nature of CMT, some part of the core is taking a big performance hit. So, I need some validation from anybody here who also uses a Bulldozer CPU or one of its derivatives.
You might find this thread interesting: https://boinc.vgtu.lt/vtuathome/forum_thread.php?id=68
 
It would be interesting if there were conversation in there. Unfortunately, even Vadimas didn't get an answer to his question. Kinda sad, really.
The last time I saw my PC, task completion estimation was about 24 hours.

edit: I found a proper comparison.
https://boinc.vgtu.lt/vtuathome/hosts_user.php?userid=3035
Most of his completed tasks on FX systems were close to the 20-hour mark, so mine probably looks normal.
 
My Ivy Bridge-E at 4.2 GHz, HT on, is now at 30...70 % completion of its first twelve tasks, and the estimated run times are 17...34 hours. When it downloaded the tasks, it initially estimated 8.5 hours run time. I am switching it down to 50 % CPU usage now. Funny: HWiNFO64 consistently shows 120 W CPU package power when boinc+VGTU is set at 100 % CPU usage, but 130 W when at 50 %. This hints that HyperThreading may be detrimental to overall VGTU throughput on Ivy Bridge-E.

Furthermore, my Haswell laptop crashed after merely 2 hours of running VGTU. It had done Cosmology@Home for weeks without incident before that. With VGTU, the laptop's BIOS is driving the CPU close to 90 °C, and that's really not good for exotic user requirements like not crashing every few hours. I'll look into curing this with ThrottleStop.
 
We're #1 at the first update! 🙂

(rank, team, Sprint points, credit)
1 TeAm AnandTech 25 132,184
2 Gridcoin 18 79,923
3 Overclock.net 15 43,086
4 SETI.USA 12 27,880
5 UK BOINC Team 10 24,863
6 Planet 3DNow! 8 13,085
7 L'Alliance Francophone 6 7,930
8 SETI.Germany 4 336

I'm putting one Intel machine back on PrimeGrid.
 
Well the French popped up above us, so I switched back from PrimeGrid. This is a really good project for bunkering, so I suppose we should expect surprises.
 
My Ivy Bridge-E at 4.2 GHz, HT on [...]
Funny: HWiNFO64 consistently shows 120 W CPU package power when boinc+VGTU is set at 100 % CPU usage, but 130 W when at 50 %.
Now it's back down to 115 W (50 % of hardware threads used). This application leaves me puzzled.
 