VRAM is a bottleneck already - 2-4 years down the road that core performance will be basically the same from the perspective of new cards.
VRAM is indeed a bottleneck today (at least in one game, i.e. Tomb Raider), but shading performance is already a bottleneck in even more games. There are countless games out there where the 6GB 1060 will dip just as low as the 3GB 1060 did in Tomb Raider (45-50 FPS in scene 2) when running at max settings.
The RX 470 is faster in scenes 2 and 3. Take a second look.
Technically that doesn't invalidate what I said (that the 1060 was 9.5% faster on average across scenes 1 and 2), but it's obvious that expressing the difference as an average across both scenes is somewhat misleading.
The exact performance numbers are as follows - each figure is the running average shown at the end of the scene in question (it's important to note that the FPS counter is apparently not perfectly synced with the displayed video, seeing as the FPS doesn't drop until 4 seconds into the second scene):
Scene 1 (lasts from 5:01 to 5:33): 1060 ends with an average FPS of 85.8, RX 470 ends with an average FPS of 71.2
Scene 2 (lasts from 5:33 to 5:53): 1060 ends with an average FPS of 72.5, RX 470 ends with an average FPS of 67.0
Scene 3 (lasts from 5:53 to 6:20): 1060 ends with an average FPS of 63.8, RX 470 ends with an average FPS of 62.2
So the exact per-scene performance difference would be as follows (when accounting for how long each scene ran - a short sketch of the calculation follows the list):
Scene 1: 1060 3GB 20.5% faster than the RX 470
Scene 2: RX 470 is 17.7% faster than the 1060 3GB
Scene 3: RX 470 is 12.6% faster than the 1060 3GB
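For anyone who wants to check the arithmetic, here is a minimal sketch of how the per-scene figures can be backed out of the numbers above, assuming the overlay shows a cumulative average from the start of the benchmark (which is the only reading under which the figures add up):

```python
# Back out per-scene average FPS from the running averages shown at the end of
# each scene. Timestamps and FPS values are the ones read off the video above.

durations = [32, 20, 27]  # seconds: 5:01-5:33, 5:33-5:53, 5:53-6:20

# Running (cumulative) average FPS displayed at the end of each scene
cumulative = {
    "GTX 1060 3GB": [85.8, 72.5, 63.8],
    "RX 470 4GB":   [71.2, 67.0, 62.2],
}

def per_scene_averages(cum_avgs, durations):
    """Convert end-of-scene running averages into each scene's own average."""
    averages = []
    elapsed = 0
    frames_so_far = 0.0
    for cum, dur in zip(cum_avgs, durations):
        elapsed += dur
        total_frames = cum * elapsed          # frames rendered since the start
        scene_frames = total_frames - frames_so_far
        averages.append(scene_frames / dur)   # this scene's own average FPS
        frames_so_far = total_frames
    return averages

scene_1060 = per_scene_averages(cumulative["GTX 1060 3GB"], durations)
scene_470 = per_scene_averages(cumulative["RX 470 4GB"], durations)

for i, (a, b) in enumerate(zip(scene_1060, scene_470), start=1):
    faster = "1060 3GB" if a > b else "RX 470"
    diff = (max(a, b) / min(a, b) - 1) * 100
    print(f"Scene {i}: 1060 {a:.1f} FPS vs 470 {b:.1f} FPS -> {faster} {diff:.1f}% faster")
```

Running this reproduces the percentages above and also gives per-scene averages of 51.2 FPS (1060 3GB) vs 60.3 FPS (RX 470) for scene 2, which are the figures referenced further down.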
May as well play on console...
I would imagine that consoles probably run Rise of the Tomb Raider at something resembling medium settings.
This is the difference from medium to high:
Example 1
Example 2
Example 3
Example 4
Example 5
I would dare say that the jump from medium to high is vastly bigger than the jump from high to very high.
And of course the biggest difference between console and PC still remains the FPS, with the 1060 3GB getting an average of 60-75 FPS (depending upon texture settings) vs. 30 FPS on consoles.
It is obvious what is preferable - whichever one keeps the minimums above 60 fps. I am not advocating that someone should always get the 470, but in this one example the 1060 "ran up the score" in scenes where every card got well over 60 fps (e.g. the 3GB 1060 did like 100 fps and the 470 did 80 fps), while in intense scenes the 3GB card was dropping into the low 40s while the 470 stayed closer to 60 fps (and the 6GB 1060 stayed locked at 60 fps). In this situation I would take the 470 all day, as it's the actual best playable experience for someone with a 60 Hz screen (which is most people).
That is what I keep saying: the averages don't tell the story of the 3GB 1060. Averages hide dips to sub-60 fps playback behind the times when the card goes way above 60 fps (which is useless for gameplay). We need more reviews that show us minimums to tell the whole story.
Neither the 1060 3GB nor the RX 470 manages to keep minimums above 60 when running with very high textures (even the 6GB 1060 drops below 60 FPS in scene 3), so averages don't tell the whole story for either the 3GB 1060 or the 4GB RX 470.
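To illustrate the general point about averages, here is a minimal sketch with completely made-up FPS traces (not measured data) showing how a card with high peaks and deep dips can win the average while spending far more time under 60 FPS:

```python
# Made-up illustration (not measured data) of why an average can look fine
# while the actual experience dips below 60 FPS: the "peaky" trace wins on
# average purely because of frames nobody benefits from on a 60 Hz screen.

peaky_card = [100, 95, 90, 45, 42, 48, 95, 100, 44, 46]   # high peaks, deep dips
steady_card = [70, 68, 65, 60, 58, 62, 66, 69, 59, 61]    # much flatter

for name, trace in [("peaky", peaky_card), ("steady", steady_card)]:
    avg = sum(trace) / len(trace)
    below_60 = sum(1 for fps in trace if fps < 60) / len(trace) * 100
    print(f"{name}: average {avg:.1f} FPS, {below_60:.0f}% of samples under 60 FPS")
```

In this hypothetical, the "peaky" trace averages 70.5 FPS against 63.8 FPS for the "steady" one, yet spends 50% of its samples under 60 FPS versus 20% - which is exactly why minimums (or frame-time percentiles) are needed alongside averages.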
Also, the 1060 3GB didn't really run in the low 40s in the second scene, seeing as its average for that scene was 51.2 FPS (vs. 60.3 for the RX 470). The 6GB 1060 managed an average of 71.5 FPS in scene 2.
And yes, obviously if scene 2 were representative of the game as a whole then the RX 470 would be the better GPU in this game, but honestly I don't know how representative the 3 scenes really are. I have only played the previous Tomb Raider, and based on that I would say that scene 3 looks the most representative, but I'm just guessing.
The GTX 460 768MB failed miserably, plus you forget the GTX 1060 3GB is a cut down card too.
It's true that the 1060 is also cut down, but it is cut down in a very different way from the 460. The 460 768MB cut several areas (ROPs, memory bandwidth and VRAM) but left the processing power alone, which means that if the 460 1GB was a balanced design, then the 460 768MB was by definition an unbalanced design.
The 1060 3GB cuts down both the processing side (shaders) and the non-processing side (VRAM), and as such it is possible (at least in theory) that the design remains balanced. Of course, given that the shaders were cut by only 10% while the VRAM was cut by 50%, it seems unlikely that the 3GB version is as balanced as the 6GB version, but I hope you can still see why this is a different situation from the 460 768MB.
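To make the comparison concrete, here is a minimal sketch of the two cut-down patterns using the commonly cited specs (treat the exact numbers as approximate):

```python
# Rough sketch of the two different "cut down" patterns, using commonly cited
# specs for each pair of cards (approximate, for illustration only).

cards = {
    # GTX 460 1GB -> GTX 460 768MB: shaders untouched, back end cut
    "GTX 460 768MB vs 1GB": {
        "shaders": (336, 336),          # CUDA cores unchanged
        "ROPs": (24, 32),
        "memory bus (bit)": (192, 256),
        "VRAM (MB)": (768, 1024),
    },
    # GTX 1060 6GB -> GTX 1060 3GB: shaders and VRAM both cut
    "GTX 1060 3GB vs 6GB": {
        "shaders": (1152, 1280),        # CUDA cores cut ~10%
        "ROPs": (48, 48),
        "memory bus (bit)": (192, 192),
        "VRAM (MB)": (3072, 6144),      # VRAM cut 50%
    },
}

for pair, specs in cards.items():
    print(pair)
    for spec, (cut, full) in specs.items():
        reduction = (1 - cut / full) * 100
        print(f"  {spec}: {cut} vs {full} ({reduction:.0f}% cut)")
```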
Btw, you wouldn't happen to have any links to the GTX 460 768MB failing down the line? I've been trying to find some, but have come up empty (guess my google-fu isn't strong enough). I would be interested to see how far the 768MB version fell behind the 1GB version relative to launch.
And the sad thing is that in the UK people are thinking the GTX 1060 3GB and GTX 1060 6GB will have exactly the same gameplay experience, and some of it is down to people on forums saying the cards are no different. People trying to say the 3GB GTX 1060 is a different situation to all those cards I listed are lying - it's exactly the same situation, and they need to not make false promises to people for a minuscule saving.
Note that when I said that those cards you listed were not comparable, I wasn't talking about the 3GB 1060 vs. the 6GB 1060; I was talking about the 3GB 1060 vs. the 4GB RX 470. I agree that the examples you listed are quite comparable to the 3GB 1060 vs. the 6GB 1060 situation.
The reason why I'm focusing on the 3GB 1060 vs. the 4GB RX 470 is of course because those are the cards actually competing with each other.
This is really weird, since Steam says the most common cards are 1GB, 2GB and 4GB ones. The GTX 970 is the most common 4GB card, so I am not sure why devs would target 3GB - it's either 2GB or 4GB.
To be honest, if I had to guess I would suspect that developers are first and foremost focused on what's available on consoles these days (Rise of the Tomb Raider was first released on the Xbox One).
For £30 to £40 it seems really weird not to spend the extra (the same goes for the RX 480 8GB). You need to remember the RX 480 4GB and GTX 1060 3GB are like £190 to £200 here and the higher VRAM versions are £230 onwards, and £200 is kind of the starting point for enthusiast pricing IIRC according to JPR. So over two to three years, it really is not much.
I have not met anybody in 15 years who wanted to spend £200 on a graphics card but didn't have £30 to £40 more for a longer-lived card.
£200 is like the cost of a Core i5 6600 or a Core i5 6600K.
This is not the £100 market, which is far more price conscious IMHO.
It may seem weird not to spend the extra cash to you, but I think you are seriously underestimating how price sensitive this segment is.
I don't disagree that consumers would be better off buying a 6GB 1060 or an 8GB RX 480, but consumers often don't do what's best for them, especially not when it comes to things like tech.
Why buy a £190 to £200 RX 470 4GB, RX 480 4GB or GTX 1060 3GB when an RX 480 8GB or GTX 1060 6GB costs from £230 onwards?
If people want to turn down settings so quickly, they can get a sub-£150 card.
Note that a sub-£150 card would be the 4GB RX 460, which is vastly slower than both the 3GB 1060 and the 4GB 470 (by roughly a factor of 2), so that is a slightly silly suggestion.
I don't know what you are looking at, but the RX 470 4GB is faster in two out of three scenes of the benchmark.
Scene 1: GTX 1060 3GB is faster.
Scene 2: RX 470 4GB is faster.
Scene 3: RX 470 4GB is faster.
And the only reason the GTX 1060 3GB gets a higher average fps at the end of the benchmark run is because at the end of the first scene the GTX 1060 3GB gets a huge average fps of 89 vs 73 on the RX 470 (that mountain drops fps like there is no tomorrow). In the next two, more demanding scenes, where fps do not get a lot higher than 60 for the duration of the scene, the RX 470 is always faster.
My previous numbers were based on an average across scenes 1 and 3, and since the 1060 wins by more in scene 1 than the RX 470 wins by in scene 3, the average ends up favouring the 1060.
I agree that this was a somewhat poor way to look at the issue, so I calculated the exact difference for each scene individually (see above).