IMHO the very fact that any games are still CPU-bound at all is a testament to just how badly optimised PC games are in that regard. Apart from worst-case scenarios like Far Cry, I can't tell the difference between my R5 3600 and 8700K @ 5GHz either. Then again, I choose to game with IQ settings maxed or close to it, so the GPU (5700XT) is the main bottleneck in almost all cases.
I'm not gonna bury my head in the sand, though, and pretend that if I upgrade to Big Navi or a GeForce 3000 series card with +50% performance over current GPUs, it would be the same scenario.
AMD has done a fine job of improving gaming performance with each Zen iteration so far, but if Ryzen 4000 actually overtakes my 8700K for gaming, I'd upgrade in a heartbeat.
Truthfully, I'm not holding my breath on that: if GN's latest CPU-bound gaming tests of the 3600XT vs the 10600K are anything to go by, AMD is likely more than one more iteration away from dethroning Intel at gaming.
The problem is that by the time AMD actually has Skylake-beating gaming performance, you'd think Intel would be done milking their dead 14nm cow...
Yes, I'm aware that in 'blind test' scenarios like you described, the difference might be negligible in most cases, at least with current-gen GPUs. But that's a rather simplistic stance to take IMO, as the balance between being CPU-bound and GPU-bound can easily shift depending on the settings you use (see the rough sketch below), and as faster GPUs come out in the coming years, we'll ideally have 'faster-than-Skylake' CPUs as well to keep pace.
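To illustrate what I mean, here's a toy model, not a benchmark: assuming CPU and GPU work per frame overlap fully, frame time is set by whichever side is slower, so a faster CPU only shows up once the GPU stops being the limit. All the millisecond figures below are made up for the sake of the example.

```python
# Toy bottleneck model: frame time is bounded by the slower of the two
# pipelines. The per-frame costs are invented numbers, not measurements.
def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

# Maxed IQ settings on a current-gen GPU: GPU bound either way.
print(fps(cpu_ms=10.0, gpu_ms=14.0))  # ~71 FPS
print(fps(cpu_ms=7.0,  gpu_ms=14.0))  # still ~71 FPS -- CPU upgrade is invisible

# Same two CPUs with a ~50% faster GPU (14 ms -> ~9.3 ms): bottleneck flips.
print(fps(cpu_ms=10.0, gpu_ms=9.3))   # ~100 FPS, now CPU bound
print(fps(cpu_ms=7.0,  gpu_ms=9.3))   # ~108 FPS -- the faster CPU finally matters
```

Obviously real games don't decompose that cleanly, but it's why the outcome of a 'blind test' depends entirely on where your settings and hardware put you relative to that crossover point.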
You'd hope that a mostly standardised feature set, with only a handful of well-optimised common game engines (Unreal/Unity/CryEngine), would have erased this problem by now.
It seems like those engines need some kind of LTS version that freezes the feature set and concentrates on nothing but optimisation. That's a strategy I believe MS could also benefit from following with Windows 10, for that matter.