MilkyWay@H - Benchmark thread Winter 2016 on (different WU) - GPU & CPU times wanted

Discussion in 'Distributed Computing' started by Assimilator1, Dec 29, 2016.

  1. TennesseeTony

    TennesseeTony Elite Member

    Joined:
    Aug 2, 2003
    Messages:
    1,746
    Likes Received:
    286
    Finally some info on the card. :) It's a fresh Windows 8.1 install with the FirePro driver only. Man, it has a LOT of air moving through it, but it's still toasty and warm.

    EDIT: FAHBench won't run, missing some .dll, says to reinstall, but that hasn't helped.

    [image: screenshot of the card's info]
     
    #101 TennesseeTony, Mar 31, 2017
    Last edited: Mar 31, 2017
  2. Orange Kid

    Orange Kid Elite Member

    Joined:
    Oct 9, 1999
    Messages:
    3,082
    Likes Received:
    132
    Added a link to this thread in the Project List thread under MilkyWay@Home
     
  3. crashtech

    crashtech Diamond Member

    Joined:
    Jan 4, 2013
    Messages:
    6,463
    Likes Received:
    199
    I had a question about cpu_usage that probably doesn't deserve its own thread. I run Milkyway@Home GPU app only, and have an app_config that looks more or less like this on most of my M@H machines:
    Code:
    <app_config>
        <app>
            <name>milkyway</name>
            <max_concurrent>0</max_concurrent>
            <gpu_versions>
                <gpu_usage>.33</gpu_usage>
                <cpu_usage>.33</cpu_usage>
            </gpu_versions>
        </app>
    </app_config>
    The thing is, there doesn't seem to be any change in behavior when altering the cpu_usage parameter from very small fractions up to 1. On my PCs, the process will take however much CPU it wants, when it wants, regardless of what app_config says. gpu_usage, on the other hand, seems to work up to a point, from 1 task to 4. Am I doing something wrong? What is cpu_usage supposed to do?
     
  4. iwajabitw

    iwajabitw Senior member

    Joined:
    Aug 19, 2014
    Messages:
    503
    Likes Received:
    81
    I see the same thing on my AMD cards, down to .05 for the CPU in MW. NVIDIA, on the other hand, I give a full CPU in app_config, because I've seen those run times get affected when CPU tasks are also running.
     
  5. StefanR5R

    StefanR5R Senior member

    Joined:
    Dec 10, 2016
    Messages:
    310
    Likes Received:
    134
    My understanding is that gpu_usage and cpu_usage are not parameters passed to the application, but an advisory to the scheduler. The scheduler obviously needs to be told beforehand how many resources a queued task will take. Only then (together with project priority, deadlines, etc.) can it decide which tasks to start, keep waiting, or suspend.

    Now, gpu_usage and cpu_usage can be totally off from what the real usage will be. You already take advantage of that by setting gpu_usage to .33 to launch three tasks on the same GPU, even though a single task would take maybe .8 or whatever of the GPU if launched alone. (That is, you oversubscribe the GPU but gain better utilization, up to a point.)

    Same with cpu_usage: It's make-believe towards the scheduler. With cpu_usage = .33 for example, the scheduler is made to believe that three feeder processes will be adequately served by one logical CPU.

    If you set cpu_usage high, the scheduler might refrain from launching as many tasks as you intended, because it figures the CPUs would be oversubscribed by another GPU job. Conversely, if you set cpu_usage low, the scheduler will more likely see room for another GPU job and possibly oversubscribe the CPUs that way. (Just like a low gpu_usage causes the GPUs to be oversubscribed.)

    As iwajabitw remarked, oversubscribing the CPUs can be OK on AMD cards, but may quickly be detrimental with NVIDIA cards.

    Again, gpu_usage and cpu_usage do not influence the application directly. They influence the BOINC client's scheduler in its decisions, and thereby have (only) an indirect impact on the application due to GPUs/CPUs being over- or undersubscribed.

    That's at least what I understood from my experimentation so far. The documentation that I found was a bit sparse.
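
    To make the arithmetic concrete (a rough sketch of the bookkeeping, not the client's exact algorithm), take crashtech's config with three tasks per GPU:

    Code:
    <!-- What the scheduler budgets for 3 concurrent Milkyway GPU tasks:
           gpu_usage .33  ->  3 x .33 ~= 1.0 GPU   (three tasks fit on one card)
           cpu_usage .33  ->  3 x .33 ~= 1.0 CPU   (about one logical CPU reserved)
           cpu_usage 1    ->  3 x 1    = 3.0 CPUs  (scheduler may hold GPU work back)
           cpu_usage .05  ->  3 x .05  = 0.15 CPU  (scheduler sees almost no CPU cost)
         None of these numbers change how much CPU the tasks actually consume. -->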
     
    crashtech likes this.
  6. crashtech

    crashtech Diamond Member

    Joined:
    Jan 4, 2013
    Messages:
    6,463
    Likes Received:
    199
    @StefanR5R , that makes sense, thanks. So for my little dual-core machines that only run n identical GPU tasks on a single GPU, cpu_usage would not really seem to matter at all. It is only useful as a resource-allocation tool to help with multiple projects, and in that role it requires intimate knowledge of the behavior of the specific CPU(s)/GPU(s) combination to know what value to set. Does that sound right?
     
  7. StefanR5R

    StefanR5R Senior member

    Joined:
    Dec 10, 2016
    Messages:
    310
    Likes Received:
    134
    I think it is important in both cases: only applications of a single kind on the box, or various different applications and projects active together. Though in the former case, it is far easier to predict how the scheduler will react to a particular cpu_usage setting. In the latter case, additional factors come into play, e.g. the importance that the scheduler assigns to each project.
     
  8. TennesseeTony

    TennesseeTony Elite Member

    Joined:
    Aug 2, 2003
    Messages:
    1,746
    Likes Received:
    286
    Since the change in MW tasks (each 'zipped' downloaded task is actually 5 tasks in a row), each GPU task seems to use up to an entire CPU thread/core for several seconds toward the end of each of the 5 sub-tasks (every 20% on the progress bar). So each bundled task will use 100% of a CPU thread at 5 different points along its progress.

    I personally use the .05 CPU setting because, as Stefan says, it tells BOINC to go ahead and start another GPU task. This is more important when running CPU tasks, because I've seen BOINC NOT start a GPU task because it didn't have a free CPU core/thread available. CPU tasks can wait, BOINC! Gimme the GPU points, BOINC!
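
    Roughly what that looks like in the file (a sketch, not necessarily my exact app_config):

    Code:
    <app_config>
        <app>
            <name>milkyway</name>
            <gpu_versions>
                <gpu_usage>.33</gpu_usage>
                <!-- tell the scheduler each GPU task needs almost no CPU,
                     so it never holds a GPU task back waiting for a free core -->
                <cpu_usage>.05</cpu_usage>
            </gpu_versions>
        </app>
    </app_config>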

    In your case, single card, I think you have the best settings in app_config, as long as nothing else needs the CPU. Try to get the tasks to stagger a bit so they don't all hit the CPU at the same time: let them run for 15 seconds or so, then suspend one of the three tasks (a new one will start in its place).

    Worst case scenario, run two tasks. Not too big of a ppd hit, and still MUCH better output than single task.
     
  9. crashtech

    crashtech Diamond Member

    Joined:
    Jan 4, 2013
    Messages:
    6,463
    Likes Received:
    199
    Such good info, guys. I'm a little slow at getting my head around this stuff and do appreciate the hand-holding.

    Do you mean by using a max_concurrent value of 2, or something else?
     
  10. TennesseeTony

    TennesseeTony Elite Member

    Joined:
    Aug 2, 2003
    Messages:
    1,746
    Likes Received:
    286
    <gpu_usage>.33</gpu_usage>

    Change that to .5

    1 card / 2 tasks = .5
    1 card / 3 tasks = .33 (repeating)
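
    So for two tasks per card, the gpu_versions block would look something like this (a sketch; the rest of the file stays the same):

    Code:
    <gpu_versions>
        <!-- .5 of a GPU per task -> 2 tasks per card -->
        <gpu_usage>.5</gpu_usage>
        <!-- tiny CPU reservation, per the .05 setting mentioned above -->
        <cpu_usage>.05</cpu_usage>
    </gpu_versions>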

    EDIT:

    Ha ha! beat ya by half a second iwajabitw!

    Personally, I deleted the max_concurrent line.
     
  11. iwajabitw

    iwajabitw Senior member

    Joined:
    Aug 19, 2014
    Messages:
    503
    Likes Received:
    81
    A gpu_usage setting of .5 is 2 tasks per card.

    Edit: Same time Tony!
     
    TennesseeTony likes this.
  12. crashtech

    crashtech Diamond Member

    Joined:
    Jan 4, 2013
    Messages:
    6,463
    Likes Received:
    199
    Yeah, that much I have figured out, in fact my example shows the config for three tasks. I wasn't sure if something else was meant.
     
  13. iwajabitw

    iwajabitw Senior member

    Joined:
    Aug 19, 2014
    Messages:
    503
    Likes Received:
    81
    My dual 280x's are currently only doing two tasks; GPU temps were in the high 90s °C.

    Edit:
    Single 280x is CPU & GPU set to .331 with Core2 Duo
    Dual 280x's are .5 on both.
    Dual GTX 980's are at CPU 1 and GPU .5 on Einstein.
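
    For reference, the dual 280x setup translates to something like this (a sketch; the milkyway app name is borrowed from crashtech's example above):

    Code:
    <app_config>
        <app>
            <name>milkyway</name>
            <gpu_versions>
                <!-- .5 on both: 2 tasks per card, half a CPU budgeted per task -->
                <gpu_usage>.5</gpu_usage>
                <cpu_usage>.5</cpu_usage>
            </gpu_versions>
        </app>
    </app_config>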