What type of picture are you talking about? The benchmarks show, irrespective of how large or small the gains are (some games are fully GPU bound even at 1080p), that the bulk of the gaming performance jump from the 10600K to the 10900K comes from cache size, not core count.
Your data from the video shows the following when comparing 8 cores vs. 10 cores and 16 MB vs. 20 MB of L3 (a quick sketch of the arithmetic follows the list) -
140 fps range - no cache difference, no core difference
130 fps range - no cache difference, <0.01% core difference
160 fps range - <0.05% cache difference, no core difference
290 fps range - <0.05% cache difference, 2.5% core difference
160 fps range - 2.5% cache difference, 2.5% core difference
500 fps range (esports title, this is the one you timestamped) - 5.5% cache difference, 3% core difference
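To make it concrete, here's how I'm reading those per-range deltas: hold core count fixed to isolate the L3 effect, then hold L3 fixed to isolate the core effect. A minimal Python sketch with made-up fps numbers (not from the video), just to show the arithmetic:

```python
def pct_diff(a: float, b: float) -> float:
    """Percent difference of b relative to a."""
    return (b - a) / a * 100

# (cores, L3 in MB) -> average fps; illustrative numbers only
fps = {
    (8, 16): 500.0,   # fewer cores, smaller L3
    (8, 20): 527.5,   # same cores, bigger L3 -> the "cache difference"
    (10, 20): 543.3,  # same L3, more cores -> the "core difference"
}

cache_delta = pct_diff(fps[(8, 16)], fps[(8, 20)])   # ~5.5%
core_delta = pct_diff(fps[(8, 20)], fps[(10, 20)])   # ~3.0%
print(f"cache: {cache_delta:.1f}%, cores: {core_delta:.1f}%")
```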
Cache and memory subsystems can become a significant factor if you are trying to drive extremely high fps (essentially only a factor in competitive esports titles), where every little bit starts to matter. You've driven the frame times so low at that point that any tiny bit of reduced latency anywhere in the chain starts to factor in.
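To put rough numbers on that (mine, not from the video): at 500 fps the frame-time budget is only 2 ms, so a fixed latency saving that's invisible at 60 fps becomes a measurable percentage. A minimal sketch, assuming a hypothetical flat 0.1 ms shaved off every frame:

```python
def fps_gain(base_fps: float, saved_ms: float) -> float:
    """Percent fps gain from shaving saved_ms of latency off every frame."""
    frame_ms = 1000.0 / base_fps             # frame-time budget at base_fps
    new_fps = 1000.0 / (frame_ms - saved_ms)
    return (new_fps / base_fps - 1) * 100

# The same 0.1 ms saved per frame barely registers at 60 fps
# but is a real bump at 500 fps:
for fps in (60, 144, 500):
    print(f"{fps:>3} fps: +{fps_gain(fps, 0.1):.2f}%")
```

That's why the deltas only really show up in the esports-range results.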
CML (and RKL, etc.) have memory subsystems whose latency is already respectable enough even for high refresh rate gaming. Whether ADL's is the same, however, is another matter.
I suppose you might counter by pointing out that the 6 core results show slightly greater differences. Here, however, I'd question the methodology and whether those gaps can be attributed simply to cache. Why? Because the lower core count configurations of the higher cache CPUs are also dropping quite a bit once you shift down to 6 cores. That suggests to me that something else is in play here.