Endgame124
Senior member
- Feb 11, 2008
Do you want numbers for an R250X, an A10-7870K, or the integrated GPU from the 7870K? I can grab those before I decommission the PC.
but...but...but... the electricity bill would make up the difference in less than a year.

That is true. I could buy 18 S9150's for what one VII Pro costs, though!
What I saw in my Boinc when I first set up Milky Way was:

Thanks for providing an averaged number and rig specs.
Your GPU (GT 710) times for the 227.5-credit WUs were 3171, 3174, 3257, 3278, and 3293 s, an average of 3234.6 s.
Did you leave a free CPU core for GPU crunching?
For your Ryzen 9 5950X 16-Core: what clock speed does it tend to run at?
Hmm, looking at the first 5 pages of valid CPU results, they are all 30-34 credit WUs; it looks like CPUs don't get the 227.5-credit WUs, darn.
I'll have to add different benchmarking requirements for CPUs then. Looking at your CPU, the times vary from about 1400-1600 s with credit ranging from about 30-35, and strangely the times don't seem to scale with credit. Because of that I think I'll up the number of WUs averaged to 10.
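For anyone who wants to reproduce the averaging, here's a rough Python sketch of the method described above (the function name and the 10-WU sample size are just my choices, not anything official from the thread):

```python
# Rough sketch of the benchmarking method discussed above: average the run
# times of the last N valid WUs, and also report seconds per credit, since the
# CPU Separation credits vary (~30-35) while the GPU WUs here are all 227.5.
def summarize_wus(wus, n=10):
    """wus: list of (run_time_seconds, credit) tuples, newest first."""
    sample = wus[:n]
    avg_time = sum(t for t, _ in sample) / len(sample)
    sec_per_credit = sum(t for t, _ in sample) / sum(c for _, c in sample)
    return avg_time, sec_per_credit

# The GT 710 numbers quoted above (all 227.5-credit WUs):
gpu_wus = [(3171, 227.5), (3174, 227.5), (3257, 227.5), (3278, 227.5), (3293, 227.5)]
print(summarize_wus(gpu_wus, n=5))  # -> (3234.6, ~14.2 s per credit)
```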
Where did you get 2 min 55 s? Oh wait, I see... No, I don't get it: how can the run time be less than the CPU time? Looking at an LHC cruncher's times, run time is always more than CPU time, as expected and as it should be.
Anyone know what's going on here?
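One possible explanation (my assumption, though the thread below does confirm there are 16-CPU N-body tasks in the mix): BOINC reports CPU time summed across all of a task's threads, so a multithreaded WU can easily show more CPU time than wall-clock run time. A back-of-the-envelope sketch with made-up numbers:

```python
# Hypothetical numbers only: for a multithreaded N-body WU, the reported CPU
# time is the total across threads, while run time is wall clock.
threads = 16           # the "16 CPU" N-body tasks mentioned later in the thread
run_time_s = 600       # made-up wall-clock run time
avg_thread_load = 0.8  # made-up average utilisation per thread
cpu_time_s = run_time_s * threads * avg_thread_load
print(run_time_s, cpu_time_s)  # 600 s of run time vs 7680 s of CPU time
```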
I clicked through some of the invalid tasks, and the tasks were validated by other wingmen, so I'd be inclined to think there's something going on with your machine, whether PBO or otherwise. With that said, I've never run Milkyway CPU tasks, so I'm not sure if there's anything weird with CPU tasks not validating against GPU tasks or anything like that.

I'm slowly racking up a small number of invalid WUs on Separation (0.04%), while I have 0 invalid for N-body. Is this normal and possibly an issue with my wingman, or am I possibly kicking out a small number of errors with my PBO Curve Optimizer settings?
These Separation tasks don't hit the CPU as hard as the N-body tasks. That means that the host clock increases more because I'm not reaching the thermal or power limits. However, because I have a negative voltage offset, I don't have quite enough voltage for one (or more) of the cores to run as fast as PBO is trying to push it. As far as I can tell, the way to address this issue is to set a per-core limit instead of a CPU-wide limit, but with 16 cores, some get used more frequently than others, and it's hard to say which is having an issue. Looks like I'll be able to spend weeks tweaking this to find the exact sweet spot of the CPU. Not sure if I'm thrilled or dreading that.

I clicked through some of the invalid tasks, and the tasks were validated by other wingmen, so I'd be inclined to think there's something going on with your machine, whether PBO or otherwise. With that said, I've never run Milkyway CPU tasks, so I'm not sure if there's anything weird with CPU tasks not validating against GPU tasks or anything like that.
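On the "hard to say which core is having an issue" point above: one quick way to at least see which cores the tasks land on is to log per-core utilisation for a while. A rough sketch, assuming Python with the psutil package (note a 5950X exposes 32 logical CPUs, two per physical core):

```python
# Log per-core utilisation for about a minute so you can see which logical CPUs
# the Separation tasks are actually loading (assumes the psutil package).
import psutil

samples = []
for _ in range(60):
    # one sample per second; each sample is a list of per-CPU busy percentages
    samples.append(psutil.cpu_percent(interval=1, percpu=True))

per_cpu_avg = [sum(vals) / len(vals) for vals in zip(*samples)]
for idx, load in enumerate(per_cpu_avg):
    print(f"cpu {idx:2d}: {load:5.1f}%")
```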
Hmm, very confusing, that's not how it was a couple of years ago or so; IIRC there were only ever single-thread tasks, which would explain the odd WU times. Maybe it's just not going to be possible to grab benchmarks? Unless the single-task times are consistent?

What I saw in my Boinc when I first set up Milky Way was:
- A number of 16-CPU tasks
- A larger number of GPU tasks
- A number of what appear to be single-CPU tasks(?)
After initially setting up Milky Way, it would process one 16-CPU task, 15 single-CPU tasks, and 1 GPU task. Now it seems to be alternating between running 2x 16-CPU tasks and a bunch of single-CPU tasks. I haven't been watching the host very closely because it's on my workbench and it isn't terribly convenient to just take a peek at.
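Just to sanity-check that task mix against the core count, here's the arithmetic, under my assumption that BOINC is allowed to use all 32 threads of the 5950X and reserves one thread to feed the GPU task:

```python
# Hypothetical accounting of the mix described above on a 16-core/32-thread 5950X.
logical_cpus = 32
mt_task_threads = 16   # one 16-CPU N-body task
single_cpu_tasks = 15  # fifteen single-CPU Separation tasks
gpu_support = 1        # one thread assumed to be feeding the GPU task
used = mt_task_threads + single_cpu_tasks + gpu_support
print(used, used == logical_cpus)  # 32 True -> every logical CPU is busy
```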
It's the Separation tasks that were used for benchmarking in the past. @Endgame124, to get accurate benchmarks on the CPU Separation tasks, I would suggest opening up your milkyway account preferences and unchecking the N-body simulation mt app and also the GPU Separation app, so you only get the CPU Separation tasks. Running tasks from all 3 apps at the same time just adds too many variables.

Hmm, very confusing, that's not how it was a couple of years ago or so; IIRC there were only ever single-thread tasks, which would explain the odd WU times. Maybe it's just not going to be possible to grab benchmarks? Unless the single-task times are consistent?
Or just adjust a global setting to bring all cores within the envelope? Reduce the negative offset?
That said, if MW is the only project being affected, is it worth it? Countering that, though, it's possible this issue might affect a future project, or a current one with a newer application version.
OK, the Separation tasks are the ones throwing errors. I reduced the amount of negative offset on my Curve Optimizer - if I still seem to be generating errors, I'll remove the offset entirely. If that doesn't resolve the issue, perhaps the RAM is not entirely stable at 3733 (though that is unlikely, given I've had no errors with other projects).

It's the Separation tasks that were used for benchmarking in the past. @Endgame124, to get accurate benchmarks on the CPU Separation tasks, I would suggest opening up your milkyway account preferences and unchecking the N-body simulation mt app and also the GPU Separation app, so you only get the CPU Separation tasks. Running tasks from all 3 apps at the same time just adds too many variables.
Even EPYC and Xeon CPUs use boost, and I don't think you'll find either manufacturer calling a boost or turbo state overclocking. Setting Eco Mode reduces the PPT, but a number of people have reported success undervolting with Eco Mode enabled; the effect of the undervolt is then to raise clocks rather than to save power. I haven't checked what the minimum PPT setting is on the 5950X.

This is a factory-overclocked SKU, as has become standard for many desktop CPU SKUs during the last several years. I am puzzled that you take an overclocked CPU, under-volt it, and then use it for sustained scientific computation.
Are you sure "Eco Mode" is working? In the bios of 2 of my MB, enabling eco mode does nothing. I have to set the PPT manually by plugging in desired number.It seems that I've stopped getting errors on results with Eco Mode enabled and all core offset set to -3.
Enabling Eco Mode changes the PPT and a few other settings on my Asus Crosshair VIII Hero (Wi-Fi) with beta BIOS 3102. After enabling it, Ryzen Master showed drops in PPT, TDC, and EDC; clock speed drops to around 3.6 GHz on average and temps drop to around 42 C. On my UPS, I also see power drop to 146 W (from 306 W with PBO enabled and set to Motherboard).

Are you sure "Eco Mode" is working? In the BIOS of 2 of my motherboards, enabling Eco Mode does nothing. I have to set the PPT manually by plugging in the desired number.
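For what it's worth, here's the quick arithmetic on those two wall-power readings (UPS numbers, so approximate; the $/kWh figure is just an assumed example rate, not from the thread):

```python
# Rough savings estimate from the UPS readings above: PBO (motherboard limits)
# vs Eco Mode on the same host, assuming 24/7 crunching.
pbo_watts, eco_watts = 306, 146
saved_watts = pbo_watts - eco_watts            # 160 W less at the wall
kwh_per_month = saved_watts * 24 * 30 / 1000   # ~115 kWh per month
cost_saved = kwh_per_month * 0.12              # assumed $0.12/kWh example rate
print(saved_watts, round(kwh_per_month), round(cost_saved, 2))
```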
| Run time (sec) | CPU time (sec) | Credit |
| --- | --- | --- |
| 3,401.03 | 3,374.72 | 227.51 |
| 3,407.06 | 3,382.25 | 227.53 |
| 3,409.95 | 3,380.67 | 227.53 |
| 3,401.83 | 3,376.89 | 227.52 |
| 3,401.36 | 3,374.78 | 227.51 |
| 3,397.35 | 3,376.05 | 227.52 |
| 3,398.28 | 3,376.27 | 227.53 |
| 3,399.59 | 3,379.09 | 227.53 |
| 3,399.46 | 3,377.42 | 227.53 |
| 3,400.46 | 3,378.69 | 227.53 |
| 3,401.47 | 3,382.05 | 227.53 |
| 3,396.47 | 3,376.84 | 227.53 |
| 3,395.51 | 3,374.06 | 227.51 |
| 3,397.48 | 3,377.73 | 227.53 |
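A quick sketch that averages the run-time column above, just to show where the "about 3400 s" figure quoted below comes from:

```python
# Average of the 14 CPU Separation run times in the table above.
run_times = [3401.03, 3407.06, 3409.95, 3401.83, 3401.36, 3397.35, 3398.28,
             3399.59, 3399.46, 3400.46, 3401.47, 3396.47, 3395.51, 3397.48]
print(round(sum(run_times) / len(run_times), 1))  # -> 3400.5 s
```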
Win 10 Home

So about 3400 s for your Ryzen 9 5950X @ ~3.6 GHz then; what OS is that, out of interest?
Yes, PPT is 65 W. Full system draw is 146 W according to the UPS; idle draw is 68 W. Temps are anywhere between 39 C and 43 C depending on ambient, as I haven't really played with fan profiles yet.

The CPU clock speed is ~3.6 GHz at full load in Eco Mode? That's impressive. Is the PPT at 65 watts?
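A small cross-check of those numbers, with the caveat that the GPU, fans, and PSU losses are all lumped into the wall reading, so this is only the loaded-minus-idle delta rather than a direct measurement of the CPU package:

```python
# Difference between loaded and idle wall draw from the UPS readings above.
full_load_watts, idle_watts = 146, 68
crunch_delta = full_load_watts - idle_watts
print(crunch_delta)  # 78 W more at the wall when crunching
```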
Do you have any AMD material that calls out "factory OC'd" boost clocks as unstable and unsuitable for serious compute? Also, isn't distributed computing based on the idea of using idle consumer hardware and getting some benefit out of it? It would be a pretty serious change in direction to say that only server-class hardware is desirable for these projects.