I agree that it is very worthwhile to search this thread at the Collatz forum, or other forums like OCN's, for configurations for GPUs similar to your own. Even if you don't end up with the precise optimum set of parameters for your GPU, chances are good that you will still get much better results than with the defaults. And if you have the time, you can easily test how the parameters affect your particular card. I'm distilling some information about the various parameters from the Collatz forum now; I'll post some actual results from my own GPUs later.

Configuration file locations (on Windows; replace the first path components accordingly on other OSs):

C:\ProgramData\BOINC\projects\boinc.thesonntags.com_collatz\<app_name>.config

During project initialization on your client, an empty <app_name>.config file is created for each application version that matches your GPUs. You can enter parameters into these files to deviate from the default values; they are picked up as soon as a Collatz GPU task starts.

Configuration file format

Plain text, one "parameter=value" pair per line. Unrecognized parameter names are silently ignored (you can use this to comment out parameters during testing); missing parameters fall back to their default values.

Example (suitable for a GTX 1080):

kernels_per_reduction=48
threads=9
lut_size=17
sieve_size=30
cache_sieve=1

Parameters

cache_sieve
default: ?
range: 0 or 1 (?)
definition: ?

kernels_per_reduction
default: 32
range: 1...64
definition: "the number of kernels that will be run before doing a reduction. Too high a number may cause a video driver crash or poor video response. Too low a number will slow down processing. Suggested values are between 8 and 48 depending upon the speed of the GPU."
comment: "affects GPU usage and video lag the most from what I [sosiris] tested."

lut_size
default: 10
range: 2...31
definition: "the size (in power of 2) of the lookup table.
Chances are that any value over 20 will cause the GPU driver to crash and processing to hang. The default results in 2^10 or 1024 items. Each item uses 8 bytes, so 10 results in 2^10 * 8 bytes, or 8192 bytes. Larger is better so long as it fits in the GPU's L1/L2 cache. Once it exceeds the cache size, it will actually take longer to complete a WU, since it has to read from slower global memory rather than high-speed cached memory."
comment: "I [sosiris] chose 16 (65536 items) for the lookup table because it fits into the L2$ (512 KB) in GCN devices. IMHO it could be 20 for NV GPUs, just like previous apps, because NV GPUs have better caching."

reduce_cpu
default: 0
range: 0 or 1
definition: "The default is 0, which will do the total steps summation and high steps comparison on the GPU. Setting it to 1 will result in more CPU utilization but may make the video more responsive. I have yet to find a reason to do the reduction on the CPU other than for testing the output of new versions."
comment: "I [sosiris] chose to do the reduction on the CPU because AMD OpenCL apps take up a CPU core no matter what you do (a.k.a. 'busy waiting') and because I want better video response."

sieve_size
default: ?
range: 15...32
definition: "controls both the size of the sieve used (2^15 through 2^32) and the items per kernel, as they are directly associated with the sieve size. A sieve size of 26 uses approx. 1 million items per kernel. Each value higher roughly doubles the amount; each value lower roughly halves it. Too high a value will crash the video driver."

sleep
default: 1
range: ?
definition: "the number of milliseconds to sleep while waiting for a kernel to complete. A higher value may result in less CPU utilization and improve video response, but it may also lengthen the processing time."

threads
default: 6
range: 6...11
definition: "the 2^N size of the local size (a.k.a. work group size, or threads).
Too high a value results in more threads, but that means more registers being used. If too many registers are used, it will use slower non-register memory. The goal is to use as many as possible, but not so many that processing slows down. AMD GPUs tend to work best with a value of 6 or 7 even though they can support values of up to 10 or 11. nVidia GPUs seem to work equally well with higher and lower values."
comment: "I [sosiris] didn't see much difference in the profiler once items per work-group exceeded the wavefront size (64) of my HD7850."

verbose
default: 0
range: 0 or 1
definition: "1 will result in more detail in the output."

Definitions are taken from Slicker's post from June 2015 (last modified September 2015); comments are taken from sosiris' post from June 2015.
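To make the file format concrete, here is a minimal sketch of how a loader for this format behaves according to the rules above (one "parameter=value" pair per line, unrecognized names silently ignored, missing names falling back to defaults). This is illustrative Python, not the app's actual code; parse_config and the DEFAULTS table are my own names, and sieve_size/cache_sieve are left out because their defaults are marked "?" above.

```python
# Illustrative sketch only -- not the Collatz app's actual loader.
# Models the stated rules: one "parameter=value" pair per line,
# unrecognized names are silently ignored, missing names use defaults.

DEFAULTS = {
    "kernels_per_reduction": 32,
    "lut_size": 10,
    "reduce_cpu": 0,
    "sleep": 1,
    "threads": 6,
    "verbose": 0,
    # sieve_size and cache_sieve omitted: their defaults are unknown ("?")
}

def parse_config(text):
    """Return DEFAULTS overridden by any recognized lines in `text`."""
    params = dict(DEFAULTS)
    for line in text.splitlines():
        name, sep, value = line.partition("=")
        if sep and name.strip() in params:  # unknown names are ignored
            params[name.strip()] = int(value.strip())
    return params

# Prefixing a name ("xkernels_...") is the comment-out trick from above:
example = "xkernels_per_reduction=48\nthreads=9\nlut_size=17\n"
cfg = parse_config(example)
# kernels_per_reduction stays at its default of 32; threads and lut_size
# are overridden to 9 and 17.
```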
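And the lut_size arithmetic from Slicker's definition, worked out as a quick sketch (the helper names are mine; the 512 KB figure is sosiris' GCN L2 example, so check your own GPU's cache sizes):

```python
# Sketch of the lut_size sizing rule quoted above: the lookup table has
# 2**lut_size entries of 8 bytes each, and should fit in the GPU cache.

def lut_bytes(lut_size):
    return (1 << lut_size) * 8

def fits_cache(lut_size, cache_bytes):
    return lut_bytes(lut_size) <= cache_bytes

print(lut_bytes(10))                               # default: 1024 items -> 8192 bytes
print(lut_bytes(16), fits_cache(16, 512 * 1024))   # sosiris' GCN pick: 524288 True
print(fits_cache(20, 512 * 1024))                  # 20 overflows a 512 KB L2: False
```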