But but but HBM is like cache and 4 GB isn't limiting at 4K! Basically AMD will have to pull the same kind of driver tricks NVIDIA did with the 970 to manage the limited VRAM. Except the 970 is a mid-tier card and this is AMD's best.
It can be like 'cache' all it wants, but capacity is capacity.
After watching The TechReport podcast, which was quite interesting:
https://www.youtube.com/watch?v=28CECF_Cieo
I'm really curious to see how the Fury X's huge shader array can be used to its advantage. They discussed it around the 37-minute mark.
The big sticking point was the divergent technologies: NVIDIA specializing in geometry output, and AMD specializing in compute and shader throughput. The former is best for right now, and the latter is allegedly better for tomorrow.
VR is a huge upcoming market, and a lot of VR companies are utilizing LiquidVR and its compute-centric approach to problems. This might become THE card for VR.
Reviews doing apples-to-apples comparisons with the exact same settings possibly aren't playing to the strengths of each card. MSAA vs. SMAA, for example.
Now it does get fuzzy as to how you'd directly compare the cards, but I thought it was an interesting point regarding the Fury X's disappointing showings.
Perhaps. But I thought VR involved drawing two slightly different scenes (or one scene two ways), one for each eye. Lots of CPU overhead and stress on the front end.
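To illustrate that point: a toy sketch of why naive stereo rendering stresses the CPU front end — the same scene gets submitted once per eye, so CPU-side draw-call work roughly doubles. (The function names here are illustrative, not from any real VR API; LiquidVR and similar SDKs exist partly to cut this duplication down.)

```python
def submit_draw_calls(objects, view):
    # Stand-in for the CPU-side cost of building a command list:
    # one entry of work per object, per view.
    return [f"draw {obj} from {view}" for obj in objects]

def render_stereo(objects):
    commands = []
    for eye in ("left", "right"):
        # Each eye sees the same scene from a slightly offset viewpoint,
        # but the submission work is repeated in full.
        commands += submit_draw_calls(objects, f"{eye}-eye view")
    return commands

frame = render_stereo(["terrain", "player", "skybox"])
print(len(frame))  # twice the mono draw-call count: 6
```

Twice the front-end work per frame, at 90 Hz instead of 60, is why VR can end up CPU-bound even on GPUs with shader power to spare.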
This looks like another Pitcairn/Tahiti blunder. Just compare the 270X vs. the 280: the 280 has 24% more raw GFLOPS (3.35 vs. 2.7 TFLOPS) yet is only ~15% faster.
http://translate.google.com/transla...se.de/2015-01/nvidia-geforce-gtx-960-im-test/
The 280X has 4.096 TFLOPS (+52%) yet, according to the last TPU benchmark, is only 24% faster.