Dual Fiji Card May Finally Be Here Soon

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
That PCB size though.

How long until cooling jokes start in this thread?
 

thesmokingman

Platinum Member
May 6, 2010
2,302
231
106
That PCB size though.

How long until cooling jokes start in this thread?

That PCB is smaller than most single-GPU PCBs.


I wonder if the rumors of a dual GM200 card in the works have AMD bumping up the original release schedule.


Considering they use that card in that micro box, they've already had one made. Nvm, the pic is of an actual card. I'm thinking it's the other way around: that Nvidia has been pushed to produce a dual-GPU card.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
Not sure why AMD is wasting their time with a card like this. For a company spiraling as quickly as they are into financial oblivion, you would think they would focus what little R&D money they have on producing more appealing mainstream products that can actually generate profitable revenue. If you're Intel or Nvidia, pulling in record profits every quarter, you can afford halo products like these to keep the engineers entertained. AMD doesn't have the money or the time to be wasting on such exorbitantly expensive niche products.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
8,207
9,545
136
Not sure why AMD is wasting their time with a card like this. For a company spiraling as quickly as they are into financial oblivion, you would think they would focus what little R&D money they have on producing more appealing mainstream products that can actually generate profitable revenue. If you're Intel or Nvidia, pulling in record profits every quarter, you can afford halo products like these to keep the engineers entertained. AMD doesn't have the money or the time to be wasting on such exorbitantly expensive niche products.

I think the major issue with the Fiji parts is, ironically, the HBM memory, which was supposed to be their killer design feature. It's too expensive at the moment to work down into $300-and-under parts, but doesn't really offer any tangible benefit for the high-end stuff.

AMD's only option at this point is to work it into halo parts with the hope of some kind of margin.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
Not sure why AMD is wasting their time with a card like this. For a company spiraling as quickly as they are into financial oblivion, you would think they would focus what little R&D money they have on producing more appealing mainstream products that can actually generate profitable revenue. If you're Intel or Nvidia, pulling in record profits every quarter, you can afford halo products like these to keep the engineers entertained. AMD doesn't have the money or the time to be wasting on such exorbitantly expensive niche products.

You might be overestimating the cost of doing this. It's not a new GPU package. AMD's R&D budget is not small; it's currently less than the competition's, but not small.
 

NTMBK

Lifer
Nov 14, 2011
10,434
5,778
136
Not sure why AMD is wasting their time with a card like this. For a company spiraling as quickly as they are into financial oblivion, you would think they would focus what little R&D money they have on producing more appealing mainstream products that can actually generate profitable revenue. If you're Intel or Nvidia, pulling in record profits every quarter, you can afford halo products like these to keep the engineers entertained. AMD doesn't have the money or the time to be wasting on such exorbitantly expensive niche products.

They've already designed and manufactured the GPU, the interposer and the HBM (or rather Hynix has). Putting two of them on one card won't take much R&D, by comparison.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
They've already designed and manufactured the GPU, the interposer and the HBM (or rather Hynix has). Putting two of them on one card won't take much R&D, by comparison.

I think the better way of saying this is that it doesn't take up enough R&D that choosing not to make it would have made a significant difference to other projects. Scrapping plans for this early likely would not have affected anything in a notable way.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
They've already designed and manufactured the GPU, the interposer and the HBM (or rather Hynix has). Putting two of them on one card won't take much R&D, by comparison.

AMD's current dual GPU card, 295X2, was released at $1500. Because of the catastrophic failure that was the Titan Z, AMD owned the dual GPU market this generation. Despite that, less than a year later it was selling for $600 after $30 rebate:

http://forums.anandtech.com/showthread.php?t=2422332&highlight=295x2

And as you can see, no one cared. 60% off a card that is still the fastest single card in some games today, and no one cared.

I don't care what the rumors will be for the next generation cards, if a 980Ti can be had for $260 next March, the internet will explode.

It doesn't matter how little you think this cost AMD and will cost them in support; they wasted money developing it and should focus their efforts on mainstream products.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
But basically all of Fiji is about advertising, isn't it? Hence the tiny volumes and very targeted niche stuff like the Nano. A dual-GPU design would very comfortably fit that pattern.

No point critiquing this when you have to suspect it was more or less the only option they had left. They can't really do anything more worthwhile than the 3xx stuff in the mainstream until they get the die shrink.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
I think the better way of saying this is that it doesn't take up enough R&D that choosing not to make it would have made a significant difference to other projects. Scrapping plans for this early likely would not have affected anything in a notable way.

It may not have cost them a lot of money, comparatively speaking, but what might it have cost them in time? AMD has not been a very punctual company with releases in recent years. The engineers stuck on this waste of time might instead have helped get their other products out earlier.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
AMD's current dual GPU card, 295X2, was released at $1500. Because of the catastrophic failure that was the Titan Z, AMD owned the dual GPU market this generation. Despite that, less than a year later it was selling for $600 after $30 rebate:

http://forums.anandtech.com/showthread.php?t=2422332&highlight=295x2

And as you can see, no one cared. 60% off a card that is still the fastest single card in some games today, and no one cared.

I don't care what the rumors will be for the next generation cards, if a 980Ti can be had for $260 next March, the internet will explode.

It doesn't matter how little you think this cost AMD and will cost them in support; they wasted money developing it and should focus their efforts on mainstream products.

It may not have cost them a lot of money, comparatively speaking, but what might it have cost them in time? AMD has not been a very punctual company with releases in recent years. The engineers stuck on this waste of time might instead have helped get their other products out earlier.

So, you realistically believe that the 295X2 ultimately either caused the 300 series to be delayed or prevented them from making a new GCN 1.2 chip in place of Hawaii or Pitcairn?
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
So, you realistically believe that the 295X2 ultimately either caused the 300 series to be delayed or prevented them from making a new GCN 1.2 chip in place of Hawaii or Pitcairn?

I wasn't implying any of that. I was showing what a failure the 295X2 was despite having no competition and then questioning why AMD would do it again expecting a different result this time.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
But basically all of Fiji is about advertising, isn't it? Hence the tiny volumes and very targeted niche stuff like the Nano. A dual-GPU design would very comfortably fit that pattern.

No point critiquing this when you have to suspect it was more or less the only option they had left. They can't really do anything more worthwhile than the 3xx stuff in the mainstream until they get the die shrink.

Exactly. They have already manufactured Fiji and put it inside the Fury X and Nano, while the dies that don't yield fully are going into the Fury, some of which can even unlock to the full version or partially unlock.

Since the Fury X is a premium product and already has a niche customer base, they might as well try to make a profit by selling 2 of those chips. Perhaps they can surprise us and launch it at $1199. One gripe with AMD's and NV's dual-chip cards has been that they cost more than buying 2 separate flagships; this was true for a lot of the time with the HD 7990 and then the R9 295X2. If Fiji X2 is $1499, then it will once again be a very niche product, since if you can fit 2 flagship cards in the case, you might as well get Fury X CF or 980Ti SLI.

But the biggest issue I have right now with PC gaming is that most PC games are either unoptimized turds OR weakling console ports with crappy, non-demanding graphics.

Looking at TPU's review of Nano CF at 1440P, here's what happens for those of us with 60Hz 1440P monitors:

1. Alien Isolation = 190 fps
2. AC Unity = no CF scaling at the time of testing
3. Batman AO = almost 200 fps
4. BF3 = 178 fps
5. BF4 = 109 fps
6. Bioshock Infinite = 209 fps
7. COD: AW = 141 fps
8. Civ: BE = 130 fps
9. Crysis 3 = 65 fps
10. Dead Rising 3 = no CF scaling at the time of testing
11. DAI = 79 fps (single Fury X got 45)
12. FC4 = no CF scaling at the time of testing
13. GTA V = 89 fps (but take a closer look, Fury X is at 54.7, 980Ti is at 62)
14. Metro LL = 102 fps
15. Project CARS = 65 fps (but a single Fury X is at 61.8)
16. Ryse Son of Rome = 118 fps
17. Shadow of Mordor = 140 fps
18. The Witcher 3 = 74.9 fps (but 980Ti is at 55.7)
19. Tomb Raider = 84 fps (a single Fury X is at 52.7)
20. Watch Dogs = 96 fps
21. Wolfenstein = no CF scaling but a single Fury X is over 60
22. WOW = negative CF scaling but a single Fury X is at 126 fps

In conclusion, unless one is an SSAA/VSR junkie (must have it in every game), buying a dual Fury X card for a 60Hz 1440P monitor only provided a real-world benefit in 2 of 22 games. You could add 3 more games with GTA V, The Witcher 3 and Tomb Raider, but let's assess those: in all three titles it would be better to just buy a single after-market GTX 980Ti, overclock it to 1.4-1.5GHz, and maybe turn down a shadow setting or two, given how close a 980Ti OC would come to the 60 fps mark. That means for most gamers a single after-market 980Ti, or even a Fury X, is more than enough for 1440P. Should they desire more performance in 2016-2017, they can just sell that card and get a 16nm HBM2 part. But I guess the customer who buys a Fury X2 just wants the fastest card possible in the smallest form factor. Fair enough.
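The rule of thumb behind this tally (on a 60Hz panel, a second GPU only provides a real-world benefit when the single card falls short of 60 fps and CF actually raises the frame rate) can be sketched as follows. This is just an illustration of the reasoning, not TPU's methodology; the fps pairs are the single-card/CF figures quoted in the list above:

```python
# Sketch of the "real-world benefit on a 60Hz monitor" test used above.
# fps pairs are the single-card / CF numbers quoted from TPU's Nano CF review.

REFRESH_CAP = 60.0  # a 60Hz 1440p monitor can only display 60 fps


def cf_helps(single_fps, cf_fps, cap=REFRESH_CAP):
    """CF only matters if the single card misses the cap and CF improves on it."""
    return single_fps < cap and cf_fps > single_fps


benchmarks = {
    "Dragon Age: Inquisition": (45.0, 79.0),  # single Fury X / Nano CF
    "Tomb Raider": (52.7, 84.0),
    "BF4": (109.0, None),  # already past 60 fps on one card
}

for title, (single, cf) in benchmarks.items():
    if cf is None:
        print(f"{title}: already past the cap on a single card")
        continue
    verdict = "CF helps" if cf_helps(single, cf) else "no real-world benefit"
    print(f"{title}: {verdict}")
```

By this test, games already well past 60 fps on one card (Bioshock Infinite at 209 fps in CF, for example) fall out immediately, which is how the "2 of 22" count comes about.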

A dual-Fiji card (or GM200) literally begs for a 144-165Hz 1440P monitor, or all that extra performance is simply wasted on the PC games out right now. The only way to use that extra performance is if you are a truly competitive PC gamer who requires the highest FPS on a 144Hz monitor, but in that case a lot of them turn off the graphical effects to have a better view of the environment.

This quick overview of Nano CF also shows just how pathetic the state of PC gaming graphics is right now. I sure hope we get some next-generation PC games, because once GTX980Ti/Fury X level performance drops down to $300-400 mid-range Pascal/AMD cards in 2016, what is the point of anything faster?

To illustrate this point further, GPUs faster than the Titan X are running into HUGE CPU bottlenecks even at 1440P.

Here we see R9 295X2 beating TX by 6.5%, while Nano CF is beating TX by 19.5%. Nano CF only scales 47% from Nano:
[Chart: perfrel_2560.gif]


Moving from a CPU-limited to a more GPU-intensive resolution (Yes, it sounds crazy that we are discussing being CPU-limited at 1440P but that's exactly the case with 2013-2015 PC games and this much GPU power):

We now see R9 295X2 beating TX by 20% (triple the gap at 1440P!), while Nano CF beats TX by 37.8% (nearly double the gap from 1440P). Nano CF scaling now grows to 64% against the Nano:
[Chart: perfrel_3840.gif]
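The CPU-bottleneck effect being described can be sketched simply: the frame rate you actually see is capped by whichever of GPU or CPU is slower, so a second GPU shows less apparent scaling at a resolution where the CPU cap binds. A minimal illustration (all fps numbers made up, not TPU's data):

```python
# Illustrative sketch of why multi-GPU scaling improves at GPU-bound resolutions.
# All fps numbers here are invented for illustration, not measured data.

def measured_fps(gpu_fps, cpu_cap):
    # The displayed frame rate is limited by the slower of GPU and CPU.
    return min(gpu_fps, cpu_cap)


def scaling_pct(single_gpu_fps, dual_gpu_fps, cpu_cap):
    """Percent gain the second GPU actually delivers once the CPU cap applies."""
    single = measured_fps(single_gpu_fps, cpu_cap)
    dual = measured_fps(dual_gpu_fps, cpu_cap)
    return (dual / single - 1.0) * 100.0


CPU_CAP = 140.0  # frames/second the CPU can prepare, roughly resolution-independent

# 1440p: one GPU already runs near the CPU cap, so the second GPU barely shows.
print(scaling_pct(120.0, 220.0, CPU_CAP))  # ~16.7% apparent scaling

# 4K: the GPU is the bottleneck, so the second GPU's work is fully visible.
print(scaling_pct(60.0, 110.0, CPU_CAP))   # ~83.3% apparent scaling
```

Same hardware and the same nominal GPU-side gain in both cases; the smaller number at 1440p is purely the CPU ceiling, which is the pattern in the two charts above.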


In other words, the biggest issue for a Fiji X2 card is actually its use case for most gamers: it's way too fast for 60 fps 1440P and below, other than the cases I described earlier. But is SSAA/VSR worth $1200-1500 for most people? Doubtful.

A solid after-market 980Ti + OC would destroy almost all PC games out right now, outside of very few use cases and horribly optimized turds like ARK Survival or AC Unity. I doubt at this point anyone is going to drop $1300 just to max out the AC Unity turd.

[Chart: acu_2560_1440.gif]


Overall, an after-market 980Ti ~ R9 295X2, which I'd wager is sufficient for 95% of PC gamers at 1440P 60Hz and below.
[Chart: perfrel_2560.gif]


That begs the question: what exactly is the use case for Fiji X2? 4K? 144-165Hz 1440P? OK, but at that point you might as well get dual after-market 980Tis and OC them, since they'll win at 4K/144Hz 1440P.

The future-proofing argument can't be made either since 4GB HBM1 is going to look mid-range at best in 2016.

AMD should have focused its efforts on 370X, fully unlocked R9 380X Tonga, and binning 3584 SP Fury into a sub-150W laptop offering to compete with the GTX980 in laptops.
 
Last edited:
Feb 19, 2009
10,457
10
76
@RussianSensation
I think vram usage has peaked, at least for a while. Games are now developed with console specs as the driver, and those consoles will remain the target for several more years at least. On the PC, we're seeing 4GB GPUs handle SoM, Ryse and Evolve just fine even with the optional 4K textures, even at 4K resolutions.

The only setting that pushes vram beyond 4GB at this point is 4K + 4x MSAA or supersampling beyond 4K, at which point the GPU grunt runs out long before the vram does.

There's a place for dual-GPU cards, as long as it's cheaper than 2 separate GPUs, i.e. $1200 or less for a Fury X2. If it's priced higher than 2 separate cards, the use case shrinks to something really niche.
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
I don't see much point to a dual-chip Fiji card. AMD has enough trouble fully utilizing Fiji as it is; I wouldn't expect the scaling with a dual chip to be anything but atrocious. And any situation where all 8192 stream processors would actually be useful is probably going to end up memory-limited by that 4GB of memory.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
In other words, the biggest issue for a Fiji X2 card is actually its use case scenario for most gamers - it's way too fast for 60 fps 1440P and below, other than the cases I described earlier. But is SSAA/VSR worth $1200-1500 for most people? Doubtful.

So it's for people with 4K screens? That's obvious. There are a lot of people sinking $1000+ into 4K monitors right now on other forums. I'm just not interested in sinking that much money into a card before a node shrink; I'll get the Arctic Islands variant. The games I have on my list will last me long enough that a dual Fiji card would be of no benefit.

The card should fit everything I want, and be able to fit in a MiniITX case easily with the smaller PCBs. So yes, I really do want these dual cards to come out now that they can be even smaller.
 
Last edited:
Feb 19, 2009
10,457
10
76
@tential
It's always been the case for multi-GPU.

A few years ago, people on the cutting edge of monitors, 1600p or 1440p 120Hz, needed CF or SLI to really drive them well.

These days it's 4K. A single GPU just doesn't cut it.

You could ask why not get 2 separate GPUs, and yes, that's exactly the problem with these dual-GPU cards. I can see one usage scenario where it's more advantageous, and that's QuadFire, which is even more niche.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
@tential
It's always been the case for multi-GPU.

A few years ago, people on the cutting edge of monitors, 1600p or 1440p 120Hz, needed CF or SLI to really drive them well.

These days it's 4K. A single GPU just doesn't cut it.

You could ask why not get 2 separate GPUs, and yes, that's exactly the problem with these dual-GPU cards. I can see one usage scenario where it's more advantageous, and that's QuadFire, which is even more niche.

Because it's 1 card, 1 slot?
It's WC?
It's simply cool?
I mean, it'll be the smallest, fastest, single slot solution out there?

I mean.... do I need to say more? That's what I want. I'm just waiting for Arctic Islands.

But well, I'm biased, as I'm obsessed with 4K/VSR. If AMD would let me use more resolutions like Nvidia does, I'd try to go beyond 4K VSR.

Edit: I would be VERY excited to see this card hit $650 at some point during its lifecycle, yet still be the fastest card, similar to the R9 295X2. It would have a LOT fewer negatives about it.
 
Feb 19, 2009
10,457
10
76
Well, it'll make for a killer mITX build. With the rad exhausting heat out of the case, and since it won't be uber long, it should fit in most of the nicer cases too.

As said, pretty niche. Hopefully not priced at stupid levels.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
Because it's 1 card, 1 slot?
It's WC?
It's simply cool?
I mean, it'll be the smallest, fastest, single slot solution out there?

I mean.... do I need to say more? That's what I want. I'm just waiting for Arctic Islands.

But well, I'm biased, as I'm obsessed with 4K/VSR. If AMD would let me use more resolutions like Nvidia does, I'd try to go beyond 4K VSR.

Edit: I would be VERY excited to see this card hit $650 at some point during its lifecycle, yet still be the fastest card, similar to the R9 295X2. It would have a LOT fewer negatives about it.

The 295X2 is 1 card, 1 slot, watercooled, indeterminately cool, and in many scenarios the fastest single-slot solution out there. It was $600, had no competition from Nvidia, and no one gave a s***. Assuming AMD doesn't release this new card at $500, why is this going to be any different from the 295X2?
 
Feb 19, 2009
10,457
10
76
The 295X2 is 1 card, 1 slot, watercooled, indeterminately cool, and in many scenarios the fastest single-slot solution out there. It was $600, had no competition from Nvidia, and no one gave a s***. Assuming AMD doesn't release this new card at $500, why is this going to be any different from the 295X2?

Because by the time it was $600, good custom R9 290s were ~$220 and R9 290Xs were below $300.

Dual-GPU cards really only appeal in two situations: 1) they cost less than 2 GPUs bought separately, and 2) niche builds.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
Because by the time it was $600, good custom R9 290s were ~$220 and R9 290Xs were below $300.

Dual-GPU cards really only appeal in two situations: 1) they cost less than 2 GPUs bought separately, and 2) niche builds.

Beyond that though, the R9 295X2 eats up a lot of power....
The whole appeal of the next dual cards is that power consumption and card size will BOTH drop.
Of course it's a niche market; it's a niche card. This is like trying to make a case for the Titan Z or something for the average consumer.

Dual GPU cards make more sense than ever before given HBM.

AMD can make a card that has no competition. I don't see why they wouldn't.