
Dual Fiji Card May Finally Be Here Soon

What makes you think next-gen there won't be one huge GPU that has a 250W TDP?

Put two of those on one card and it'll still be power hungry. Take the 980 Ti, for example: custom models are drawing ~250W. Put 2x GM200 on a card with a good cooler so it won't throttle, and you have one power hungry card.

But that's not the point. Top-end GPUs are power hungry. It's all about the performance, because the people who go for these setups need that performance to drive 4K maxed.
 
I mean, ya, you summed up why this card needs to come out then. Great for you guys who want 2 cards.

I want 1 card, 2 GPUs, WC AIO, single slot, and AMD continues to deliver and refine, so good for them. Arctic Islands is where I'll buy in, although I'm tempted to buy the dual Fiji card when it's on sale for less and I start playing games from 2010-2013.
 
AMD's current dual-GPU card, the 295X2, was released at $1500. Because of the catastrophic failure that was the Titan Z, AMD owned the dual-GPU market this generation. Despite that, less than a year later it was selling for $600 after a $30 rebate:

http://forums.anandtech.com/showthread.php?t=2422332&highlight=295x2

And as you can see, no one cared. 60% off a card that is still the fastest single card in some games today, and no one cared.

That has a lot to do with the perception of AMD's entire R9 200 series. Even if R9 295X2 cost $299 when GTX980 was $550, it still wouldn't have outsold the 980.

Generally speaking, if you are going to say that the R9 295X2 was a card no one cared about, you might as well draw the line and say that all dual-chip cards, starting from NV's GTX 295 and AMD's HD 3870 X2, are a waste of time and money. As you said, even when the R9 295X2 was $600, it was hardly popular. I am of the view that dual-chip cards hardly ever made sense. The HD 5970 was VRAM crippled and had no tessellation performance worth talking about. The GTX 590 had a horribly underpowered/cheap VRM system that resulted in those cards blowing up. The GTX 690 was VRAM gimped, making it DOA long-term, and Kepler got driver-crippled over time. The HD 6990 was unbearably loud. The HD 7990 had CF frame-pacing issues and cost more than buying 2x HD 7970 GHz cards stand-alone. The Titan Z was a thermal throttling mess and cost 2X more than the R9 295X2, which was faster! The only card in recent times that was actually any good was the R9 295X2, but its $1500 price still made it a bad value.

The other issue with the Fiji X2 is that for 4K it likely won't support HDMI 2.0 and 4GB of VRAM makes it a poor buy long-term. Also, the timing doesn't work since we are near the end of 2015 and 2016 brings 16nm HBM2 cards.

I don't care what the rumors will be for the next generation cards, if a 980Ti can be had for $260 next March, the internet will explode.

Current rumors have Pascal possibly launching in April 2016 at the earliest. So how in the world would a 980Ti cost $260 by March 2016?

It doesn't matter how little you think this cost AMD and will cost them in support; they wasted money developing it and should focus their efforts on mainstream products.

It could be much more complex than that:

1) The foundation of a dual-chip 596mm2 HBM1 design on one board will give AMD the experience they need for their next-generation HBM2 dual-chip cards in 2016-2018. There will for sure be some learning here.

2) AMD has the WSA with GloFo, which means they have to buy a minimum number of wafers or they face major penalties. Not only is Fiji the largest chip they make, it likely has the highest profit margins of any card they make. The types of customers who pay >$600 for GPUs tend to be price inelastic. That would allow AMD to sell more Fiji dies in pairs. You say that it's a waste of $, which could be true, but since AMD has kept selling these cards from the HD 3870 X2 through the R9 295X2, they must be making some money for AMD to keep using this strategy?

3) AMD wants to move away from the "budget" brand image in graphics. To do that, a halo flagship card such as the Fiji X2 will strengthen their brand image. Even if the R9 295X2 didn't sell many units, there is no doubt that it provided a positive brand image for AMD: 1st-ever reference GPU with an AIO CLC, the most powerful card of last generation, cool, quiet. IMO, the R9 295X2 marks the turning point in AMD's brand image, even though the market tends to be 2-3 generations behind in perception, much like the perception of "poor quality of American cars" still persists for many consumers.

4) With miniITX cases gaining in popularity, even though I still think dual-chip flagship cards are a niche, they are starting to make more sense in 2015 than they did in, say, 2010-2013.

Before, we didn't even have 700W small form-factor PSUs. Who knows how quickly the miniITX market will grow over the next 5 years, but if you don't take risks, you miss the growth.

5) Playing devil's advocate: if AMD is wasting $ making dual-chip flagship cards, why has NV been making those cards as well, going back to the GTX 295? Since we don't have sufficient data to prove your point correct, I am going to say that overall those cards must make sense, which is why both AMD and NV keep making them almost every generation.

Look, I agree with you that dual-chip flagship cards don't make much sense, but I am still open to the idea that a small market exists for ultra-enthusiast gamers who buy these things. Similar to how I think all Titan cards are a giant waste of $, the market keeps defying logic, which means that for some customers, knowing they have the best of the best is all that matters, within a "reasonable" $1000-1500 price range. If the Fiji X2 is a $1000-1500 card, it'll find its buyers. I mean, think about it: in March of 2015 we had the $1000 Titan X, but if AMD launches a $1000-1500 Fiji X2, it's already better than the Titan X and not even a year has passed. That's already progress as far as comparing the maximum performance on a single card goes -- the same is true for dual GM200.

@RussianSensation
I think vram usage has peaked, at least for awhile. Games are now developed with console specs as the driver and these consoles will remain the target for several more years at least. For the PC, we're seeing the optional 4K textures and 4GB GPUs handle SoM, Ryse, Evolve just fine, even at 4K resolutions.

The only setting that pushes vram beyond 4GB at this point is 4K + 4x MSAA or supersampling 4K+, at which point the GPU grunt has run out long before vram.
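As a rough sanity check on that claim, the raw render-target footprint of 4K + MSAA can be estimated from pixel count, bytes per pixel, and sample count. This is a sketch with assumed formats (RGBA8 color plus a 4-byte depth/stencil buffer); real VRAM usage is dominated by textures, geometry and driver overhead on top of this:

```python
# Approximate render-target memory at a given resolution and MSAA level.
# Assumptions (mine, for illustration): one 4-byte/pixel color buffer and
# one 4-byte/pixel depth/stencil buffer, both multiplied by the sample count.
def render_target_mib(width, height, msaa_samples=1,
                      bytes_per_pixel=4, num_targets=2):
    """Approximate MiB used by MSAA'd color + depth render targets."""
    return width * height * bytes_per_pixel * msaa_samples * num_targets / 2**20

print(render_target_mib(3840, 2160, 1))  # ~63 MiB: 4K with no MSAA
print(render_target_mib(3840, 2160, 4))  # ~253 MiB: 4x MSAA quadruples it
```

The MSAA'd targets scale linearly with sample count, which is why 4K + 4x MSAA or supersampling is the setting that finally pushes total usage past 4GB once textures and everything else sit on top.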

To an extent I agree with you. It seems neither the Fury X nor the 980 Ti has the GPU power required to fully benefit from more than 4GB of VRAM. Over the next 2 years the amount of VRAM will explode as we move to 8-16GB HBM2 cards, so really, in the context of the Fury X and 980 Ti, I presume people who buy them are almost always upgrading often (i.e., the cutting-edge top 5% of PC gamers).

There's a place for dual-GPU cards, as long as it's cheaper than 2 separate GPUs, i.e., $1200 or less for a Fury X2. If it's priced above 2 separate cards, the use case shrinks to really niche.

Ya, that's the part where I agree with you: a Fiji X2 priced at $1100-1300 makes sense, but if they charge a premium of $1500+, it starts to make less and less sense. Pricing is what made the R9 295X2/Titan Z less than stellar.

Still, I am not against a Fiji X2 but I think AMD is prioritizing the extreme high-end while ignoring:

1) No R9 370X outside of China
2) No R9 380X
3) No new cards below $150 now that R7 265/260X/270/270X are EOL
4) No binned 3584 Fury for laptops in the sub-150W space
5) Very few AIBs are even selling the standard Fury non-X -- just XFX, Asus and Sapphire. Why?
6) AMD stopped bundling games with their cards after Dirt Rally, while some of NV's AIBs continue to use this strategy.

I just think AMD is prioritizing the wrong products, but then again I don't know what their WSA stipulates, which might require them to buy a lot of wafers, and then they are basically stuck.

I mean, ya, you summed up why this card needs to come out then. Great for you guys who want 2 cards.

I want 1 card, 2 GPUs, WC AIO, single slot, and AMD continues to deliver and refine, so good for them. Arctic Islands is where I'll buy in, although I'm tempted to buy the dual Fiji card when it's on sale for less and I start playing games from 2010-2013.

What exactly is the point of that? What kind of modern case, and what PC customer willing to buy a $1000+ dual-chip flagship card, will have issues with a card taking up 2 slots?

The entire single-slot thing mattered 10 years ago, when we had sound cards in PCI slots that blocked airflow to the GPU. With an AIO CLC, the dual slot makes no difference, since you can put a 2nd dual-slot GPU or a sound card into the 3rd PCIe slot right below.

I am legitimately interested in how a single-slot AIO CLC card is a benefit. With AIO CLCs, you can squeeze 3-4 cards in as tight as possible, since GPU airflow doesn't matter anymore. All that matters is finding space to fit the AIO CLCs.

 
It may not have cost them a lot of money comparatively speaking, but what might it have cost them in time? AMD has not been a very punctual company with releases in recent years. The engineers stuck on this waste of time might have helped get their other products out earlier.


My guess is this isn't a giant engineering task. Take two off-the-shelf parts and make a PCB and cooler for them. I think AMD's biggest issue with this part is that it will be a lot of GPU horsepower for 4GB of VRAM. Something like this might be paired with 4K displays, multiple displays, etc.
 
The 295X2 was properly cooled. 2x 290Xs, even custom ones, would throttle unless you had exceptional airflow through your case.
 
If I was AMD, I would make the dual Fiji a 300W TDP air-cooled card.

Dual Nano @ 350W and an AIO. Cool, quiet and fast. Move the power slider to the right to remove power limiting and gain an additional 10-15% performance. Easier than O/C'ing, and the AIO cooler will keep everything cool and quiet.
 
It's too late... End of November/Early December?
This is what makes me think Arctic Islands will be coming out significantly after Pascal (like months afterwards I mean).
 

Or you're just overly optimistic on the release date of Pascal.
 

I don't believe we will see a single-GPU card that even comes close to this performance (Fiji X2) in the next 12 months or more.
 
So you don't expect the Arctic Islands GPUs to outdo the Fiji X2? Because that's even more disappointing.
 

HBM2 was last slated for volume production in Q3 2016. You can guess how long it takes to stack it, get the TSVs right, and get it ready for market.

Anyone confident that HBM2 production is going to be spot-on-time?

High-end Arctic Islands and Pascal should be designed for HBM2; they were touted awhile ago, with JH showing a mockup to the world. With core designs and memory controllers unique to HBM2, these high-end SKUs are tied in fate with HBM2, for good or bad.

Now smaller die AI or Pascal may indeed come earlier with GDDR5X, but they aren't replacements for current high-end.
 
I don't expect big 14/16nm GPUs from either AMD or NV in the next 12-18 months.


Honestly, I don't think you even need HBM1 bandwidth capacity right now. The Fury GPUs were never even close to stressed. GDDR5X will provide a decent boost.

2016 will likely be "little Pascal"/"little Arctic Islands", i.e., GP104 and the AMD equivalent with GDDR5X, and GDDR5 for the low end. Considering two node shrinks and a new arch, it will still be an out-of-the-ordinary boost: closer to 50-60% over last gen rather than the "normal" 25-30% we've been seeing lately.

HBM2 just isn't ready for prime time. Plus, it gives them a convenient excuse to ship something new in 2017, which would be empty if they went full fat with HBM2 in 2016.

It may be possible we'll see GP100 with HBM2 for enterprise and/or a new Titan card for a massive amount of cash in 2016, but the price premium would then be massive, certainly far more than the $999 we've normally seen. I wouldn't bet against a $1499 price tag, because some people would still buy that.
 

AMD doesn't need GDDR5X; they have a much better product with HBM 1.0.
 

HBM1 will lose performance-wise to GDDR5X. Not to mention the cost issue with HBM, including static cost. If you can't add a premium for premium cost, it's just another flop.
 

That's assuming it can run at the higher advertised clocks, and on a big bus.

HBM1, as we've all read, is just a stepping-stone tech to ease the HBM2 transition for everyone involved. So it's pointless to expect it to stick around for next-gen.
 

384bit bus and GDDR5X is plenty to beat 4096bit HBM1. 672GB/sec vs 512GB/sec for example.

HBM needs to get its static cost down drastically. Otherwise it can end up losing even to HMC in the long run.
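The 672GB/sec vs 512GB/sec comparison above follows directly from bus width times per-pin data rate. A quick sketch, assuming ~14 Gbps for GDDR5X (its upper paper spec) and 1 Gbps effective (500 MHz DDR) for HBM1 as shipped on Fiji:

```python
# Peak theoretical bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps).
# The data rates are assumptions for illustration: GDDR5X's paper spec tops
# out around 14 Gbps, while Fiji's HBM1 runs at an effective 1 Gbps per pin.
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Return peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gbs(384, 14.0))   # 672.0 GB/s for 384-bit GDDR5X
print(peak_bandwidth_gbs(4096, 1.0))   # 512.0 GB/s for 4096-bit HBM1 (Fiji)
```

The trade-off is visible in the numbers: GDDR5X gets there with a narrow bus and very high per-pin speed, HBM with an enormously wide bus at low clocks.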
 
HBM1 will lose performance-wise to GDDR5X. Not to mention the cost issue with HBM, including static cost. If you can't add a premium for premium cost, it's just another flop.

Bandwidth is not the only thing that makes HBM the better product.

Lower power consumption,
Lower heat,
Smaller package,

And using a 384-bit controller so that GDDR5X can reach the same bandwidth as HBM 1.0 will increase the GPU die size, increase power consumption and heat, etc.

HBM also has the lead in the mobile segment; you don't want a 384-bit memory bus and GDDR5X for the laptop market.
 
RussianSensation:

Part of your above quote:
"The other issue with the Fiji X2 is that for 4K it likely won't support HDMI 2.0 and 4GB of VRAM makes it a poor buy long-term. Also, the timing doesn't work since we are near the end of 2015 and 2016 brings 16nm HBM2 cards."
really hit home with me.

I always applaud either side for pushing the envelope so Kudos to AMD for their work in releasing this soon.
However, I sure hope the HDMI 2.0 issue is addressed, since a prospective purchaser of this card is surely thinking of a 4K monitor. I thought I read that the total memory is 8GB?
 
384bit bus and GDDR5X is plenty to beat 4096bit HBM1. 672GB/sec vs 512GB/sec for example.

HBM needs to get its static cost down drastically. Otherwise it can end up losing even to HMC in the long run.

The thing is, GDDR5X is unproven, and the memory controller required to run it at such high speeds/bandwidth is completely unproven. So it's merely paper specs.

HBM1 has proven itself capable of delivering on its claimed paper spec. It's a stepping-stone tech to HBM2. That already gives HBM2 more credibility to meet its paper spec than GDDR5X, and certainly HMC, which we shouldn't even bother to speculate about at this point.

Intel's proprietary RAM tech didn't end too well last time...

Another thing relating to the cost of HBM2 vs GDDR5X needs to be factored in: the memory controller for GDDR5 occupies a large portion of the GPU die, as well as its TDP budget. HBM2 frees that up, so one could go with a smaller die to save on cost and offset the HBM2 module/stacking cost, or use that die space for more performance.

This factor is why I strongly believe next-gen high-end is HBM2-only (they will certainly need it for Teslas and FirePros to outcompete Intel's Phi); as such, its launch will coincide with how well HBM2 shapes up.
 