- I am really curious to see how these reviews pan out. Where is this supposed 25-30% performance increase coming from?
Maybe it's just sustained boost clocks, maybe AMD got some of their secret sauce working... We'll find out soon enough.
Likely from doubling the memory bandwidth. There was some analysis of memory bandwidth a while back, and I think it found that doubling it can generally bring ~25% performance improvement on its own for GPU-bound tasks, with some variance, and it's possible that particularly memory-bound situations would benefit even more.

16GB of 1TB/s memory would be awesome for very high quality texture assets. There was speculation that games might look to that as a way of pushing quality forward: pair huge textures with more advanced bump mapping to make models look more complex while using less geometry, and therefore offer higher quality on lower-end GPUs. I actually think that may have been AMD's aim, and why they let their geometry throughput languish, figuring HBM would push bandwidth (and possibly memory capacity) to where that would be possible. I believe that's what AMD's tessellation stuff was about early on too: a way of taking bump mapping to the next level that, paired with adequate textures, wouldn't need GPUs to scale as much in geometry. And since higher quality textures were already being called for, it might be a way of making things look especially good. Rage, I think, was kinda designed around accomplishing something like that as well, integrating the textures so they'd help remove edges and other things that disrupt a coherent image. I think it would make culling more effective too, since textures would hide a lot.
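To see where a ~25% number like that could come from, here's a toy roofline-style model of a frame that's partly memory-bound. All the numbers (frame compute time, bytes moved, bandwidths) are made up for illustration; real GPUs overlap compute and memory traffic in much messier ways.

```python
# Toy model: frame time is bounded by whichever is slower,
# ALU work or memory traffic. Numbers are hypothetical.

def frame_time_ms(compute_ms, bytes_moved_gb, bandwidth_gbs):
    """Frame time under a simple max(compute, memory) roofline."""
    memory_ms = bytes_moved_gb / bandwidth_gbs * 1000.0
    return max(compute_ms, memory_ms)

# Hypothetical frame: 8 ms of pure compute, 5 GB of traffic per frame.
base = frame_time_ms(8.0, 5.0, 500.0)      # ~500 GB/s card
doubled = frame_time_ms(8.0, 5.0, 1000.0)  # ~1 TB/s card

print(f"baseline: {base:.1f} ms, doubled BW: {doubled:.1f} ms")
print(f"speedup: {base / doubled:.2f}x")
```

With these particular made-up numbers the memory-bound frame goes from 10 ms to compute-bound at 8 ms, i.e. a 25% speedup; a more memory-bound workload would gain more, a compute-bound one less.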
Given AMD's history, I won't be surprised if people are able to drop voltage and get power down to 250W, or even closer to 200W, with little impact on performance (maybe dropping clock speeds a bit). It might even improve performance over the course of a gaming session, by letting the card hold higher clocks for longer. I'll be curious to see whether people can overclock to close to 2GHz (at or near stock voltage) under watercooling.
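A back-of-the-envelope for why undervolting pays off so well: dynamic power scales roughly as C·V²·f, so voltage drops hit power quadratically while clocks only scale it linearly. The voltages, clocks, and wattage below are hypothetical placeholders, not actual specs, and leakage (static) power is ignored.

```python
# Rough dynamic power scaling: P ~ C * V^2 * f.
# All figures below are hypothetical; static/leakage power is ignored.

def scaled_power(p_base, v_base, f_base, v_new, f_new):
    """Estimate dynamic power after a voltage/frequency change."""
    return p_base * (v_new / v_base) ** 2 * (f_new / f_base)

# Say the card draws ~300 W at 1.20 V / 1800 MHz stock.
# Undervolt to 1.05 V and shave clocks slightly to 1750 MHz:
est = scaled_power(300.0, 1.20, 1800.0, 1.05, 1750.0)
print(f"estimated draw: {est:.0f} W")
```

Under those assumed numbers a ~12% voltage drop plus a ~3% clock cut lands the estimate in the low 220s, which is the kind of result that would put the 200-250W range in reach.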
I absolutely don't expect it, but if people are able to drop voltage and overclock, this might turn out to be a pretty decent gamer card after all. Still not something I'd buy, but it would make it more appealing for those who would, and it should enable the card to hit 4K at solid framerates.
Ugh, too bad they're gimping it with HDMI 2.0. This card would be a lot more appealing with HDMI 2.1: at least it'd offer something for people buying HDMI 2.1 displays this year, and it would help with 4K and FreeSync/VRR-capable setups.
Also, why are they sticking with 3 DP connectors? I could see one DP and one HDMI, but they could've had 4 miniDP, or better yet USB-C (DisplayPort capable) ports, instead of 3 DP. And miniDP-to-DP doesn't need anything crazy for adapters, does it? So just include at least one adapter. I absolutely hate this particular layout (my reference RX 480 has it, and I've had to get multiple cables/adapters because I didn't have any DP monitors; I did have a miniDP-to-HDMI cable from when I had a Surface Book, though. Mine was made a bit worse since I'm currently stuck using two monitors over DVI after my HDMI one burnt a cap or something). I'd even prefer 2 DP and 2 HDMI.
I hope I'm not gonna find it impossible to get a video card with 2 HDMI 2.1 connectors anytime soon (that plus a couple of USB-C 3.1/Thunderbolt 3 connectors, and whatever beyond that, would be optimal for me). I wish they'd break the I/O ports out onto a separate board, so you could adjust where they are, and it'd free up the slots for venting heat from the GPU. Some cards used to do that (like the half-height ones), and it would allow more flexibility to get exactly the ports you need: swap out the port selection, or keep the ones you already have. It would help with transitions to new port specs (HDMI/DP, etc.) as well. And with USB-C Thunderbolt looking like it'll become a popular one, and needing more power, giving it its own PCIe slot and separate power would benefit GPUs, since their boards wouldn't be tasked with that extra draw.
I'm hoping AMD integrates some manner of display controller into the I/O part of Ryzen, so you could use motherboard display outputs with or without an actual GPU (you'd get video decode/encode and GUI handling, you just wouldn't be able to game). It would also mean those ports wouldn't go to waste.