Not at all.
GPGPU performance depends entirely on GPU compute utilization, and utilization depends on how the code maps onto the underlying architecture.
In games, compute shaders are used to run data-parallel programs (kernels) on the GPU. When the GPU executes a kernel, the dispatch is broken down into work groups, and each work group is made up of individual work items.
The hardware then executes those work groups as wavefronts (GCN) or warps (Kepler/Maxwell).
The programmer decides the size of the work group; how that work group gets split into those smaller segments is up to the hardware.
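To make that hierarchy concrete, here's a minimal CUDA sketch (the kernel name and sizes are made up for illustration). CUDA's terms map roughly onto the ones above: a thread is a work item, a block is a work group, and the hardware carves each block into 32-wide warps, the same way GCN carves work groups into 64-wide wavefronts.

```
#include <cuda_runtime.h>

// The kernel is the data-parallel program; each thread is one work item.
__global__ void scale(float* data, float factor, int n)
{
    // blockIdx/blockDim/threadIdx locate this work item inside its
    // work group (block) and inside the overall dispatch (grid).
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    float* d = nullptr;
    cudaMalloc((void**)&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));

    // The programmer picks the work-group (block) size; the hardware
    // decides how each block is carved into warps/wavefronts.
    const int blockSize = 64;
    const int gridSize  = (n + blockSize - 1) / blockSize;
    scale<<<gridSize, blockSize>>>(d, 2.0f, n);

    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}
```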
If a program is optimized for GCN, the work groups will be sized in multiples of 64 (matching a wavefront).
If a program is optimized for Kepler/Maxwell, the work groups will be sized in multiples of 32 (matching a warp).
Prior to the arrival of the GCN-based consoles, developers would size their work groups in increments of 32. Whenever that size wasn't also a multiple of 64, half of a wavefront's lanes sat masked off on GCN, so the compute units were never fully utilized.
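To put numbers on that, here's a small illustrative snippet (not vendor code, just the rounding arithmetic): the hardware rounds a work group up to whole wavefronts or warps and masks off the leftover lanes.

```
#include <cstdio>

// Illustrative only: what fraction of a SIMD unit a given work-group
// size fills, for a 64-wide GCN wavefront vs a 32-wide warp.
static double simd_fill(int group_size, int simd_width)
{
    // The group is rounded up to whole wavefronts/warps; the extra
    // lanes are masked off and do no useful work.
    int slots = ((group_size + simd_width - 1) / simd_width) * simd_width;
    return 100.0 * group_size / slots;
}

int main()
{
    printf("32-wide group on GCN (64-wide wavefront): %.0f%%\n", simd_fill(32, 64)); // 50%
    printf("64-wide group on GCN (64-wide wavefront): %.0f%%\n", simd_fill(64, 64)); // 100%
    printf("64-wide group on a 32-wide warp machine:  %.0f%%\n", simd_fill(64, 32)); // 100%, two full warps
    return 0;
}
```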
Your Octane renderer is probably a relic of that past and no longer relevant. Games are now arriving with GCN-centric optimizations.
Under those scenarios, Kepler ends up heavily underutilized. That comes down to the way the CUDA cores in the SMX were organized (192 CUDA cores per SMX). NVIDIA took notice and reduced the number of CUDA cores per SM to 128 for Maxwell's SMM, segmenting those 128 CUDA cores into four groups of 32, each mapping directly to a warp.
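And the rough arithmetic behind the SMX vs SMM point (the core and scheduler counts are the public Kepler/Maxwell figures; the single- vs dual-issue framing is a simplification of the real schedulers):

```
#include <cstdio>

int main()
{
    const int warp = 32;

    // Kepler SMX: 192 CUDA cores, 4 warp schedulers. Issuing one
    // instruction per scheduler per cycle feeds 4 * 32 = 128 cores;
    // the remaining 64 only get used when a scheduler can dual-issue
    // a second independent instruction from the same warp.
    printf("Kepler SMX:  %d of 192 cores fed without dual-issue\n", 4 * warp);

    // Maxwell SMM: 128 cores in four 32-wide partitions, one scheduler
    // each, so every warp maps one-to-one onto a partition.
    printf("Maxwell SMM: %d cores = 4 partitions x %d\n", 4 * warp, warp);
    return 0;
}
```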
So yes, how an application is written and optimized largely determines performance.