Like I'm gonna bother replying to all of that. No thanks. Didn't even read it all.
Here is a summary:
1. You think you are smarter than 1000+ AMD engineers who were apparently stupid enough to spend millions of dollars and waste 1.5 years of their lives adding HBM1 memory because the marketing team thought it would be a brilliant idea. Obviously NV's engineers are 10X smarter, since they wouldn't waste 1.5 years of their lives and millions of dollars on a pure marketing exercise.
2. You compare NV's choice of GDDR5 while ignoring everything about how GCN works, and that AMD needs HBM for APU designs too. In fact, you ignore the most important comparison -- AMD moving from its own 512-bit controller to HBM -- and instead try to downplay HBM1's advantages by pointing to NV's approach of getting sufficient memory bandwidth out of Maxwell's 384-bit controller. It never occurs to you that AMD can't just go and pick up NV's L2 cache efficiencies and NV's 384-bit controller and shove them into a GCN chip. All of your comments therefore miss the most important question: how would HBM1 benefit AMD specifically, given that their older 512-bit controller likely could never have worked efficiently with 7GHz GDDR5 in the first place? You just assume that pairing 7-8GHz GDDR5 with AMD's 512-bit bus is a done deal. It never occurred to you that it took NV two years and a full new architecture (Fermi to Kepler) to solve its GDDR5 clock speed issues. It's entirely possible that, instead of spending millions of dollars to redesign a new 512-bit controller that could even run 7-8GHz, AMD decided this was a total waste of money since GDDR5 is a dead end for their flagship cards. So they might as well invest in the newest tech!
3. You ignore that even at 448GB/sec of memory bandwidth, GDDR5 over a 512-bit bus would already need to be clocked at 7GHz, which isn't far off AMD's quoted 85W usage at 8GHz (see the quick math after this list). That means, by all accounts, HBM1 would be miles more efficient once you look at the full package: memory controller + memory power usage combined. By focusing primarily on the power usage of the GDDR5 chips alone, you keep ignoring this critical point. And that slide was discussing the 4GB option, not 8GB, which makes GDDR5 over 512-bit an even worse decision as you increase the VRAM amount. As you try to downplay the power consumption differences, you ignore that HBM likely becomes even more efficient the larger the VRAM pool gets. It's no wonder that to get a faster-clocked GM200, NV will drop Titan X's 12GB of GDDR5 to 6GB. It only makes sense, because the extra GDDR5 wastes power.
4. You ignore that a less complex PCB and a simpler memory controller on the GPU are added benefits of going HBM1. If AMD's 390X manages to tie or beat Titan X with a die much smaller than 600mm2, what will your response be -- that AMD would have been better off waiting for HBM2 and using GDDR5? ^_^
5. You ignore that HBM1 and HBM2 are very similar, which means choosing HBM1 earlier versus HBM2 later is a question of when to invest cash flows as part of your business strategy, not of whether one approach is better than the other. You can't comprehend that both approaches can be correct depending on the GPU architecture. Since you constantly want to compare NV to AMD without understanding how the designs behind GCN and Maxwell differ, you cannot understand why AMD's engineers decided to use HBM earlier than NV. If they did, it's because it made sense for the 390X's GCN design and didn't make sense for Maxwell.
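To put rough numbers on points 2 and 3, here's a back-of-envelope sketch using the standard peak-bandwidth formula (bus width / 8 x effective data rate). The Hawaii, Titan X, and HBM1 figures are the well-known published specs, nothing more:

```python
# Back-of-envelope GPU memory bandwidth math for points 2 and 3.
# Peak bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (Gbps)

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

def required_data_rate_gbps(target_gbs: float, bus_width_bits: int) -> float:
    """Effective data rate (Gbps) needed to hit a target bandwidth on a given bus."""
    return target_gbs * 8 / bus_width_bits

# AMD's existing 512-bit GDDR5 controller (Hawaii-style, 5 Gbps effective)
print(bandwidth_gbs(512, 5))              # 320.0 GB/s
# Titan X: 384-bit bus at 7 Gbps effective
print(bandwidth_gbs(384, 7))              # 336.0 GB/s
# Data rate needed to reach 448 GB/s on a 512-bit GDDR5 bus:
print(required_data_rate_gbps(448, 512))  # 7.0 Gbps -- the clocks AMD's old controller was never shown to handle
# HBM1: four 1024-bit stacks at 1 Gbps effective
print(bandwidth_gbs(4 * 1024, 1))         # 512.0 GB/s
```

The takeaway: a 512-bit GDDR5 bus only reaches HBM1-class bandwidth at the very 7-8GHz clocks in question, while HBM1 gets to 512GB/s at a fraction of the memory clock by going wide instead of fast.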
Yes, because RS was using those measurements to extrapolate an imaginary clock/performance advantage over and above Titan X from its thermal envelope.
And I wasn't talking about dual-chip cards with CLCs.
My point about reference designs had nothing to do with the performance of the 390X over the Titan X. All things being equal, I prefer after-market or warrantied AIO CLCs and the after-market VRM/MOSFET components of the Asus Strix, MSI Lightning/Gaming, etc. I even said in my post that for that reason alone, GM200 6GB (as well as the 390X) has the potential to trounce the Titan X as the better product. Blowers have already shown themselves to be at the end of the line above 250W, because past that power usage they fall apart: they cannot manage good temperatures and quiet noise levels simultaneously. Good thing NV also realizes this and will give us after-market GM200 6GB products!