My understanding is that gpu_usage and cpu_usage are not parameters passed to the application, but an advisory to the scheduler. The scheduler needs to be told beforehand how many resources a queued task will take; only then (together with project priority, deadlines, etc.) can it decide which tasks to start, keep waiting, or suspend.
Now, gpu_usage and cpu_usage can be completely off from what the real usage will be. You already exploit that by setting gpu_usage to .33 to launch three tasks on the same GPU, even though a single task running alone might take .8 or so of the GPU. (That is, you oversubscribe the GPU but gain better utilization, up to a point.)
Same with cpu_usage: it's make-believe as far as the scheduler is concerned. With cpu_usage = .33, for example, the scheduler is led to believe that three feeder processes will be adequately served by one logical CPU.
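For reference, here is a minimal sketch of how that is typically declared in an app_config.xml in the project's directory; the app name "my_gpu_app" is just a placeholder, use whatever short name your project actually reports:

<app_config>
  <app>
    <!-- "my_gpu_app" is a placeholder; substitute the project's real app name -->
    <name>my_gpu_app</name>
    <gpu_versions>
      <!-- scheduler budgets 1/3 of a GPU per task, so three tasks share one GPU -->
      <gpu_usage>0.33</gpu_usage>
      <!-- scheduler budgets 1/3 of a logical CPU per GPU task -->
      <cpu_usage>0.33</cpu_usage>
    </gpu_versions>
  </app>
</app_config>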
If you set cpu_usage high, the scheduler might refrain from launching as many GPU tasks as you intended, because it figures the CPUs would be oversubscribed by another GPU job. Conversely, if you set cpu_usage low, the scheduler will more likely see room for another GPU job and possibly oversubscribe the CPUs that way. (Just like a low gpu_usage causes the GPUs to be oversubscribed.)
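To put rough numbers on that (my own illustration, not from any documentation): suppose BOINC may use 8 logical CPUs and 6 CPU tasks are already running, so 2 CPUs remain in the scheduler's budget. With cpu_usage = 1.0 it will account for at most 2 GPU tasks; with cpu_usage = 0.33 it considers 6 GPU tasks to fit (6 × 0.33 ≈ 2), regardless of what the feeder processes actually consume.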
As iwajabitw remarked, oversubscribing the CPUs can be OK on AMD cards, but may quickly be detrimental with NVIDIA cards.
Again, gpu_usage and cpu_usage do not influence the application directly. They influence the BOINC client's scheduler in its decisions, and thereby have (only) an indirect impact on the application through the GPUs/CPUs being over- or undersubscribed.
That's at least what I understood from my experimentation so far. The documentation that I found was a bit sparse.