StereoPixel
The only HBM1 on sale at the moment is 1GB per module at 1 GHz.
But AMD may use a custom "Dual-Link" version of HBM1 with 2GB or 4GB per module at 1250 MHz.
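A quick back-of-the-envelope in Python (my own sketch; I'm assuming the "1 GHz" and "1250 MHz" figures are effective per-pin data rates, and the Dual-Link numbers are just the rumor from this post, not a confirmed spec):

```python
# Per-stack HBM1 back-of-the-envelope (sketch; Dual-Link values are rumor, not spec).

def stack_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

# Standard HBM1: 1024-bit interface per stack at an effective 1 Gbps per pin.
print(stack_bandwidth_gbs(1024, 1.0))   # 128.0 GB/s per 1GB stack

# Rumored "Dual-Link" HBM1: same interface at an effective 1.25 Gbps per pin.
print(stack_bandwidth_gbs(1024, 1.25))  # 160.0 GB/s per 2GB/4GB stack
```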
The only HBM1 on sale at the moment is 1GB per module at 1 GHz.
I think that's per module!
No, you don't know what you're talking about.
HBM1 supports 1GB per stack, or 2Gb (bits, not bytes) per DRAM die. You can stack at most four of these dies to make one stack.
[HBM1 slide image]
We have already seen the leak on GFXBench: it's 4096-bit. So how can that come with 8GB of HBM1? Two controllers, each accessing its own 4 × 1GB stacks. One controller per GPU die, two dies working as one.
Not 8 stacks giving an 8192-bit bus, and not 4 stacks with 2GB each, because that isn't supported by HBM1, which we know the 390X will have.
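Written out as a quick sketch (the 2Gb-per-die and 4-die limits are from the HBM1 spec; the dual-controller layout is this post's speculation):

```python
# HBM1 capacity and bus-width arithmetic from the post above (sketch).
GBIT_PER_DIE = 2        # HBM1 DRAM die: 2Gb (bits)
MAX_DIES_PER_STACK = 4  # at most 4 dies per stack
BITS_PER_STACK = 1024   # 1024-bit interface per stack

GB_PER_STACK = GBIT_PER_DIE * MAX_DIES_PER_STACK / 8  # = 1.0 GB

def fiji_config(controllers: int, stacks_per_controller: int = 4) -> dict:
    """Bus width seen by each GPU die, and total capacity across dies."""
    return {
        "bus_bits_per_die": stacks_per_controller * BITS_PER_STACK,
        "total_capacity_gb": controllers * stacks_per_controller * GB_PER_STACK,
        "peak_gbs_per_die": stacks_per_controller * BITS_PER_STACK * 1.0 / 8,  # at 1 Gbps/pin
    }

print(fiji_config(controllers=1))  # 4096-bit, 4.0 GB, 512 GB/s -- matches the GFXBench leak
print(fiji_config(controllers=2))  # still 4096-bit per die, 8.0 GB total
```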
So you are saying it has to be dual GPU for 8GB?
Did you even look at the slide I posted?
Hmm, would GPUs connected via an interposer need to use AFR/SFR-style techniques like ones connected via PCIe (meaning possible scaling issues), or is it more like all the cores being on one die?
2. Again, the two Tonga cores would not be connected through Crossfire. AMD should fire all their engineers if they couldn't connect two cores on the same package with an internal connection that goes directly from one core to the other. Whether it's through TSVs or through the L2 cache by routing crossbars I don't know, but if they can do it with CPU cores and an IGP, they can do it with a dual GPU as well. This should have zero performance hit.
3. HBM1 has a limit of 4GB. HBM2 won't be ready until 2016. Still, AMD's slide says "up to 8GB" for the 390 WCE. Dual controllers, like in the dual-core picture, anyone...?
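On the AFR/SFR question above, a toy sketch (not real driver code) of the distinction: classic Crossfire-style AFR hands whole frames to alternating GPUs, which is where the scaling and pacing issues come from, whereas a fast enough die-to-die link could present both dies to the scheduler as one device:

```python
# Toy illustration of AFR vs. a unified dual-die GPU -- not real driver code.

NUM_GPUS = 2

def afr_gpu_for_frame(frame: int) -> int:
    """Classic AFR: whole frames alternate between GPUs, so per-frame
    latency is that of a single GPU and pacing issues can creep in."""
    return frame % NUM_GPUS

def unified_split(draw_calls: list) -> dict:
    """Hypothetical unified mode: work is split per draw call across both
    dies, as if all shader engines were on one die."""
    return {gpu: draw_calls[gpu::NUM_GPUS] for gpu in range(NUM_GPUS)}

for frame in range(4):
    print(f"AFR: frame {frame} -> GPU {afr_gpu_for_frame(frame)}")
print(unified_split(["sky", "terrain", "characters", "ui"]))
```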
http://www.fudzilla.com/news/graphics/37566-two-amd-fiji-cards-coming-in-june
Fiji XT: faster than the GTX 980, slower than the Titan X.
Fiji VR: some dual card...
According to Fudzilla, Fiji is a dual-GPU card anyway. Now there are two cards? Fuad is covering all the bases.
Yeah, that guy is freaking hilarious. He is utterly clueless, and the site is trash clickbait.
As much as I'd like to believe this, two huge dies with big TDPs on one card? One thing is 2 × 2816 shaders on one card (the 295X2), but 2 × 4096 shaders seems impossible. Two cut-down Fijis with 3500 shaders each may be theoretically possible, but to me it seems like a stretch, not just TDP-wise but also in terms of the power delivery available and fitting two enormous dies on one card.
It's like Nvidia putting two Titan Xs on one card. I think they used 2 × GK104 for a reason, and I think 2 × GM204 is the line here as well.
If AMD does make dual-die work like Intel did with CPUs, then we could see huge performance boosts akin to multi-core CPUs around the Q6600 era. Even if AMD gets only a year or so ahead of Nvidia on this type of technology, it could be what they need to get back into the market. It would open a new path of development exclusive to AMD for some time, instead of chasing Nvidia the traditional way.
I'd be surprised to see it happen, but would be excited to see how it shakes up the market.
I thought about that kind of performance boost and advancement, but I don't think it translates here like it did with CPUs. GPUs are already essentially a multi-threaded architecture broken into many tiny cores, whereas multi-die CPUs really marked the advent of huge leaps in multi-core, multi-threaded x86 performance. I don't think multi-die GPUs translate to the same performance advancements. I could be wrong, however.
