You don't seem to read any of the posts. Try again.
AMD's slide says the GDDR5 uses 35W, and with the controller it's 85W.
I did read your posts. I am telling you that if you take GDDR5 and start clocking it high, power usage skyrockets over 384-bit and 512-bit buses. Measuring GDDR5's power usage without considering the memory controller is worthless since these two components work together. The whole point of the slide comparing GDDR5's efficiency with the memory controller included is exactly that AMD's 384-bit and 512-bit controllers are both larger in die size and more power-hungry than the HBM1 implementation. Therefore, in AMD's case, their slide is not the marketing fluff you seem to imply it is.
Also, you make it sound like HBM2 is some completely different tech from HBM1. AMD decided to spend the R&D and implement it earlier, which means that for HBM2 they'll have far less work to do than NV. This is just a different approach to adopting new tech: AMD decided to spend the $ now and spend less later, and NV did the opposite. What this means is that NV will carry more risk with Pascal (new node + new architecture + HBM2), while AMD will only need to deal with a new node + new architecture. HBM2 and HBM1 are going to be nearly the same thing, just with one faster, so whatever AMD learned with HBM1 will transfer directly to HBM2, making that task far easier for them next gen. And since AMD also designs APUs, maybe they didn't even want to wait for HBM2 and needed to start the work earlier than NV, because far more products in AMD's lineup would benefit from HBM memory than in NV's portfolio.
I say the 290X GDDR5 is closer to 20W, and the total power consumption with the controller is much lower than 85W because:
No one is disputing that 512-bit GDDR5 @ 5GHz uses less power than the same controller at 8GHz. Your claim that the R9 290X's GDDR5 memory only uses 20W of power is meaningless because that GDDR5 cannot operate without its 512-bit memory controller, so the controller's power has to be counted too. Therefore, the only comparison that matters for engineers here is bandwidth/watt, which is a function of both the memory controller's power usage and the memory type used. Trying to isolate GDDR5's power usage from the memory controller is basically irrelevant since, in the context of AMD's R9 390X design, their choice was either to keep their power-hungry 512-bit memory controller and clock GDDR5 to 7-8GHz, or go HBM. Not sure how this isn't clear to you. Whatever power usage NV's memory controllers have over 256-bit or 384-bit buses is completely irrelevant to AMD's R9 390X design.
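To put some rough numbers on bandwidth/watt: here is a minimal sketch using the figures quoted in this thread (~85W for 512-bit GDDR5 + controller at 512GB/sec, and roughly 50W less for HBM1 at the same bandwidth per the slide). These are the slide's numbers as cited above, not independent measurements.

```python
# Rough bandwidth-per-watt comparison of the whole memory subsystem
# (memory chips + controller/PHY). Power figures are the ones quoted in
# this thread, treated here as assumptions rather than measured data.

def bandwidth_per_watt(bandwidth_gb_s: float, subsystem_watts: float) -> float:
    return bandwidth_gb_s / subsystem_watts

gddr5 = bandwidth_per_watt(512, 85)  # 512-bit GDDR5 @ 8GHz effective + controller
hbm1 = bandwidth_per_watt(512, 35)   # 4 HBM1 stacks + PHY (85W minus the ~50W delta)

print(f"GDDR5 512-bit: {gddr5:.1f} GB/s per watt")
print(f"HBM1 4-stack:  {hbm1:.1f} GB/s per watt")
```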
Nvidia can do fine with the Titan X/980 Ti's GDDR5 at 336GB/s, and I don't think it's costing them much power envelope compared to HBM. Which I guess was one of the reasons why they waited on HBM until 2016.
This isn't about NV's architecture but AMD's. NV doesn't sell APUs; AMD does. AMD needs to adopt HBM for other applications, not just GPUs. For that reason, it's a lot more complex than NV waiting for HBM2 vs. AMD moving to HBM1 earlier. Also, your claim that NV skipped HBM because it hardly improves power usage doesn't have to be true. It could be that NV didn't need to invest in HBM1 because Maxwell's perf/watt was already good enough. With AMD, it's totally different: their architecture isn't as efficient in perf/watt, so they chose HBM1 as a way to improve perf/watt, because they can't spend 3-4 years redesigning GCN to be 2X more power efficient. Did you ever think of that?
Don't get me wrong, HBM is great for bandwidth, but for power consumption it's sort of like DDR4 over DDR3. It's there, but it won't make a significant change. AMD's slide exaggerates the power consumption beyond what it really is, to market HBM better and to hype their GPUs. AMD expects us to believe that their upcoming GPUs need 570GB/s, which is why they clocked the GDDR5 at 2000MHz in the example.
This point contradicts your entire viewpoint. If HBM didn't result in massive improvements in bandwidth/watt and didn't reduce the complexity of the videocard/ASIC, why in the world would AMD even use HBM? You think their engineers just decided on some random Friday morning that they would invest in HBM1 with SK Hynix for 1.5 years and waste tens of millions of dollars because HBM1 marketing sounds cooler than GDDR5? You cannot be serious!
Also, your calculations are way off.
512-bit @ 7GHz = 448GB/sec, and AMD's slide at 8GHz already shows the 512-bit controller + GDDR5 at that speed using 50W more power than 512GB/sec of HBM1. However, you missed what happens if we go from 4GB of GDDR5 over a 512-bit bus to 8GB: that 50W gap grows even larger. So even if AMD didn't need 8GHz modules on a 512-bit bus and used 7GHz modules to give the R9 390X just 448GB/sec of bandwidth, using 8GB of GDDR5 vs. 8GB of HBM1 would still have meant roughly that 50W of extra power usage anyway.
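To make those bandwidth figures explicit, here is the standard arithmetic as a quick sketch; the data rates are the effective rates discussed above, not confirmed R9 390X specs.

```python
# Peak GDDR5 bandwidth = (bus width in bits / 8) * effective data rate in Gbps.
def gddr5_bandwidth(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps  # GB/s

print(gddr5_bandwidth(512, 7))  # 448.0 GB/s -- 512-bit @ 7GHz effective
print(gddr5_bandwidth(512, 8))  # 512.0 GB/s -- 512-bit @ 8GHz effective, the slide's case
```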
Even NV gains some power headroom by offering GM200 with 6GB (instead of the Titan X's 12GB), which means there is obviously a penalty for hanging more GDDR5 off a 384-bit or 512-bit controller. You haven't even considered this in your discussion, since we are talking about a 390X 8GB part, not just a 4GB part. So your point is moot.
Another thing is that the HBM presented in the slide only has 200GB/s, which means the power consumption will be higher than shown once you have more bandwidth and more stacks.
The slide shows 512-bit @ 8GHz vs. 4x1024-bit HBM @ 1GHz = 512GB/sec for both scenarios, not 200GB/sec. That means at 512GB/sec, an R9 390X 4GB with a conventional 512-bit GDDR5 controller would have used 50W more power than a 4GB HBM1 version of the same videocard. How are you not understanding that slide? It couldn't be clearer. Double the memory, and the power penalty grows even further.
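For reference, the same arithmetic applied to the slide's HBM side (stack count and per-pin rate as described above; a sketch, not an official spec sheet):

```python
# Each HBM1 stack exposes a 1024-bit interface; at ~1 Gbps effective per pin,
# four stacks match the 512-bit GDDR5 @ 8GHz case in aggregate bandwidth.
def hbm_bandwidth(stacks: int, bus_width_bits: int = 1024, data_rate_gbps: float = 1.0) -> float:
    return stacks * bus_width_bits / 8 * data_rate_gbps  # GB/s

print(hbm_bandwidth(4))  # 512.0 GB/s
```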
Also, it's amazing that you think AMD is using HBM1 mostly as a marketing move just because NV got away with not using HBM, ignoring that NV's and AMD's architectures are different and ignoring the possibility that AMD's R9 390X might be as fast as or faster than the Titan X at a much smaller die size.
Your analysis also ignores how many factors are involved in the move to HBM: reduced PCB complexity, reduced memory controller complexity --> reduced GPU die size --> experience gained for APUs by adopting HBM1 earlier, etc.
Just because NV decided to wait for HBM2 doesn't mean HBM1 is mostly a marketing exercise with a small power consumption reduction and few other benefits. It's remarkable that you think you're smarter than thousands of engineers at AMD who get paid six figures and know the GCN architecture better than everyone on this forum combined.
This continual theme of downplaying whatever new technology or advantage AMD embraces has been around on these forums for a long time. If you are going to argue why it's not that great, at least bring a stronger argument.