[VC] AMD Fiji XT spotted at Zauba


monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Why are you telling me that?

I answered Actaeon. Did he speak about efficiency? Did we speak about Tonga?

This forum is cursed with endless debates. I'm really tired of it. Can't we all be neutral and owe nothing to these companies that constantly bend us over?

I don't see this as a bad thing.
 

KaRLiToS

Golden Member
Jul 30, 2010
1,918
11
81
You can't just make broad sweeps like comparing GK110 to GM204 (mid-range) and use that to predict low performance gains moving to the R390X. It does not make sense AT ALL.

Performance is a combination of multiple factors, but ultimately it comes down to (if on the same node):

1. Perf/mm2
2. Perf/w

If the R390X is, as rumored, a 550mm2 chip, that already means that even without any major architectural improvements it should be ~20-25% faster than the R290X.

It needs to be ~50% faster than the R290X to be competitive with GM200.
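
A rough back-of-envelope of that die-area argument in Python (Hawaii's ~438mm2 figure and the 550mm2 rumor are assumptions from this thread and the press, same 28nm node, perf assumed to track area; nothing here is an official number):

Code:
# Naive die-area scaling estimate: same node, same architecture, perf ~ area.
hawaii_mm2 = 438            # Hawaii (R9 290X) die size, approximate
fiji_mm2_rumored = 550      # rumored Fiji die size from this thread

area_gain = fiji_mm2_rumored / hawaii_mm2 - 1
print(f"Extra die area: {area_gain:.0%}")   # ~26%

# Real chips rarely scale perfectly with area (bandwidth, uncore, power limits),
# so the ~20-25% estimate above is this number with a small haircut applied.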

You are right, I stand corrected.


I don't see this as a bad thing.

I do. Neither you nor I should get attacked for our purchases, current or previous.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Does anyone actually believe that the 780 Ti's successor will be $699, considering the 980 (mid-range) is $550-600? I think NV will raise the price. Even the standard 780 was $650 when it came out. I don't think AMD is aiming the 390X at the $699 price level. We've seen over 10 years that the type of buyers who get flagship cards do not care if it costs $100-300 more for 5-15% more performance. It doesn't matter to them. Even if GM200 is 15% faster, this group of gamers will pay $150-300 over the 390X in a heartbeat.

What AMD needs to do is corner the sub-$500 market and make all their desktop offerings in this space better than NV's. AMD stands to gain A LOT more by competing in this space and winning mobile design wins than by going after GM200 $650+ customers, who also happen to be extremely NV brand loyal. It's a waste of resources. Look at 290X CF vs. 980 SLI at 1440p/1600p/4K. This same group of "I want the title of the fastest GPUs" buyers will pay 2X more for 5-20% over 290X CF. Even when 290X CF is competitive at 4K, people will still pay double for the latest tech. That's just how the market has been forever.

AMD needs to focus and not get caught up in this competing-with-GM200 hype at the expense of going after mobile design wins and producing a very competitive sub-$500 desktop lineup. AMD's problem is that the 970/980 have all the momentum and GM200 is just sitting there ready to be released. That means AMD either has to significantly undercut the 970/980 to make up ground, or beat them in performance with a strong GE game bundle.

Unfortunately, NV fans will still pay $90-150 more for a similarly performing card. AMD still has the brand issue that ATI didn't have. Even if the 390X beats the 980 by 15-20% at $550, it could still fail to outsell NV worldwide. In some countries the brand is everything.
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Does anyone actually believe that the 780 Ti's successor will be $699, considering the 980 (mid-range) is $550-600? I think NV will raise the price. Even the non-Ti 780 was $650 when it came out. I don't think AMD is aiming the 390X at the $699 price level.

Depends on 390X performance and price.
 

iiiankiii

Senior member
Apr 4, 2008
759
47
91
R9-390X @ $599: 20-25% faster than the GTX 980. That's my guess.
Nvidia will push out the GTX 980 Ti at $699, which will be faster than the 390X by 15%.
R9-390 @ $449: 5-10% faster than the GTX 980.
R9-380X @ $299: 5-10% faster than the GTX 970.
Main difference between the cards? Power consumption.
Again, my guesses.
 
Last edited:
Feb 19, 2009
10,457
10
76
They don't have to outperform GM200 to be competitive. The 290X was very competitive against the 780 Ti.

The R290X in recent games is faster than the 780 Ti and kills it in CF, especially at 4K.

But still, the R390X needs to be about 50% faster than the R290X to be competitive because of how amazing Maxwell is.

When you look at the die size and TDP of GM204, imagine how fast GM200 will be at 550mm2 and 275W.

@Russian
AMD is not going to win many notebook designs until they match or beat NV on efficiency, because in that space perf/W is king. It wasn't long ago, with the 5800 series, that AMD had most of the notebook market; now the situation is completely reversed, with NV dominating there. You can't compete in mobile when you lack efficiency, period.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
The R290X in recent games is faster than the 780 Ti and kills it in CF, especially at 4K.

But still, the R390X needs to be about 50% faster than the R290X to be competitive because of how amazing Maxwell is.

When you look at the die size and TDP of GM204, imagine how fast GM200 will be at 550mm2 and 275W.

GM200 will not scale linearly in performance over GM204 because it is designed with GPGPU in mind first, much like GK110 over GK104.
The 390X will need to be ~20% faster than the 980 to be successful and competitive against GM200.

P.S. At 1600p or 4K it could be 40-50% faster than Hawaii if the above specs are true.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
^ I am in the camp that thinks GM200 780Ti successor (not 780 successor) will be 40-50%+ faster than 980, not 20-30%.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
GM200 will not scale linearly in performance over GM204 because it is designed with GPGPU in mind first, much like GK110 over GK104.
The 390X will need to be ~20% faster than the 980 to be successful and competitive against GM200.

P.S. At 1600p or 4K it could be 40-50% faster than Hawaii if the above specs are true.

Nonsense. GK110 scaled linearly over GK104, at least up to the GTX 780. People make the mistake of ignoring that GK104 boosts higher than GK110, but per clock the increase is (almost) perfect:
http://ht4u.net/reviews/2013/nvidia_geforce_gtx_780_ti_gk110_complete_review/index42.php

The GTX 780 is 53.3% faster than the GTX 680 clock for clock (with 50% more memory bandwidth). Titan and the Ti are increasingly limited by memory bandwidth, I suppose, so scaling breaks down a bit. If you increase ALL parameters starting from GM204, including bandwidth, it will scale excellently despite GPGPU. Better compute functionality requires more transistors, which require additional die space, but not necessarily more power under gaming loads. And even if it did cost power, that is completely irrelevant for architectural scaling in gaming.

The same goes for the 7970 GHz vs the 7870, btw - 60% more compute power (4096 GFLOPS vs 2560 GFLOPS) and 59% more performance.
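
A minimal sanity check of the clock-for-clock point, using public reference specs (the clocks below are the reference base clocks; HT4U's fixed-clock testing is what removes the boost difference in practice):

Code:
# GK110 (GTX 780) vs GK104 (GTX 680): how much of the gain is units vs clocks.
gtx680 = {"shaders": 1536, "clock_mhz": 1006}   # GK104 reference base clock
gtx780 = {"shaders": 2304, "clock_mhz": 863}    # GK110 (GTX 780) reference base clock

def rel_throughput(a, b):
    # relative raw shader throughput: unit count x clock
    return (a["shaders"] * a["clock_mhz"]) / (b["shaders"] * b["clock_mhz"])

print(f"Out of the box (units x clock): +{rel_throughput(gtx780, gtx680) - 1:.0%}")
print(f"Clock for clock (units only):   +{gtx780['shaders'] / gtx680['shaders'] - 1:.0%}")
# -> roughly +29% out of the box but +50% clock for clock, which is why
#    fixed-clock testing makes GK110's scaling over GK104 look near-linear.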
 
Last edited:
Feb 19, 2009
10,457
10
76
Indeed, GK104 to GK110 scaled very well, considering the GTX 770 vs the 780 Ti or Titan Black.

GM200, with that big a die compared to GM204, is going to be killer fast. AMD had better believe that in their heart and soul or they are going to be caught with their pants down.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
Nonsense. GK110 scaled linearly over GK104, at least up to the GTX 780. People make the mistake of ignoring that GK104 boosts higher than GK110, but per clock the increase is (almost) perfect:

http://ht4u.net/reviews/2013/nvidia_geforce_gtx_780_ti_gk110_complete_review/index42.php

GTX 780 is 53.3% faster than GTX 680 clock for clock (at 50% more memory bandwidth).

In the chart you mention, the GTX 780 Ti (966 MHz) is 44% faster than the GTX 770 (1084 MHz) at 2560x1440. The GTX 780 Ti has 87.5% more shaders than the GTX 770, but the actual perf increase is 45-50%.

http://ht4u.net/reviews/2013/nvidia_geforce_gtx_780_ti_gk110_complete_review/index42.php

http://www.computerbase.de/2013-11/nvidia-geforce-gtx-780-ti-vs-gtx-titan-test/4/

http://www.hardware.fr/articles/912-22/recapitulatif-performances.html

Even the Classified GTX 780 Ti, which boosts to 1100-1150 MHz at stock and can be considered a clock-for-clock comparison with the reference GTX 770, which boosts to 1150 MHz, is just 64% faster than the GTX 770 at 1600p (61 * 1.64 = 100.04) and just 51.5% faster at 1080p (66 * 1.515 = 99.99). So the claims of perfect scaling are just wrong. What we see is linear scaling at a factor of maybe 0.75x (87.5 x 0.75 = 65).

http://www.techpowerup.com/reviews/EVGA/GTX_780_Ti_Classified/24.html

Titan and the Ti are increasingly limited by memory bandwidth, I suppose, so scaling breaks down a bit. If you increase ALL parameters starting from GM204, including bandwidth, it will scale excellently despite GPGPU. Better compute functionality requires more transistors, which require additional die space, but not necessarily more power under gaming loads. And even if it did cost power, that is completely irrelevant for architectural scaling in gaming.
GK110 has 5 GPCs, each with 3 Kepler SMX ((192 x 3) x 5 = 2880).
GK104 has 4 GPCs, each with 2 Kepler SMX ((192 x 2) x 4 = 1536). The discrepancy is because Nvidia had to pack in all the necessary hardware for Quadro/Tesla features and double-precision performance. Expect the same for GM200. Since Nvidia has already shown GM107 (GTX 750 Ti) with 1 GPC and 5 Maxwell SMM ((128 x 5) x 1 = 640), I can foresee a GM200 with 4 GPCs, each having 6 Maxwell SMM ((6 x 128) x 4 = 3072).

The same for 7970 GHz vs 7870 btw - 60% more compute power (4096 GFLOPs vs 2560 GFLOPs) and 59% more performance.
The R9 280X is roughly 35% faster than the R9 270X at the same clocks, for a 60% higher shader/FLOPS count.

http://www.techpowerup.com/reviews/MSI/R9_280X_Gaming_6_GB/24.html

Also, the perf gain can change from game to game depending on whether the game is shader bound, raster bound, bandwidth bound or texturing bound. You can get close to perfect scaling on average in a few rare cases where you perfectly scale all resources. E.g. the R9 290 is 2x the HD 7870 in shaders, ROPs, bandwidth, TMUs, shader engines, geometry engines and raster engines.

http://www.techpowerup.com/reviews/Sapphire/R9_290_Vapor-X/25.html

The R9 290 Vapor-X (1030 MHz) is 82% faster than the HD 7870 (1000 MHz) at 1080p and 96% faster at 1600p (51 x 1.96 = 99.96). With clock normalization you are looking at close to 80% scaling at 1080p and 93-94% at 1600p. This only happens when all other resources are perfectly scaled, which is very rare and will not happen with GM200. And even then it happens only at high resolutions. GM200 could be 35% faster than GM204 and it would still be mighty impressive. :whistle:
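
To make the "linear at ~0.75x" claim concrete, here is the same arithmetic as a short sketch, using only the figures already quoted in this post (nothing new is being measured):

Code:
# Effective scaling factor = observed performance gain / theoretical unit gain,
# for the 780 Ti Classified vs. reference GTX 770 comparison cited above.
shader_gain = 2880 / 1536 - 1                # GK110 has 87.5% more shaders than GK104

observed = {"1600p": 0.64, "1080p": 0.515}   # gains quoted above (TechPowerUp)

for res, gain in observed.items():
    print(f"{res}: {gain / shader_gain:.2f}x effective scaling")
# -> ~0.73x at 1600p and ~0.59x at 1080p; the ~0.75x figure really only
#    holds at the higher resolution, where the extra units can be fed.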
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Indeed, GK104 to GK110 scaled very well, considering the GTX 770 vs the 780 Ti or Titan Black.

GM200, with that big a die compared to GM204, is going to be killer fast. AMD had better believe that in their heart and soul or they are going to be caught with their pants down.

I still don't think you guys are paying attention or acknowledging history.

1. Crazy low prices and ludicrous price/performance with superior performance/watt and compute/DP features "for free" (without Titan branding) failed AMD with the desktop HD 4000/5000/6000 series.

2. High prices and good performance still failed AMD with the HD 7000 series vs. the 600 series. In fact, we know that the 7970 GHz led the 680 from June 2012 at a cheaper price with better game bundles. Didn't matter.

3. More VRAM and ultra-high-res/multi-monitor gaming failed AMD with the 290/290X.

4. The incredible price/performance of dual unlocked 6950s, overclocked dual 7950s, and 290s also failed AMD.

AMD cannot and will not win against NV using any of the old strategies on the desktop. Even if the 390X beats GM200 by 20%, it will not win; NV will sell more. People bought the 770 for $100 more over the 280X, the 680 4GB for $100 more over the 7970 GHz, and the 780 for $100-200 more over the 290.

Look at AMD's desktop market share and look at reality. AMD has the entire sub-$300 desktop GPU market locked up on price/performance, and this market is 90% of ALL desktop GPU sales in the industry.

AMD has a better gaming desktop GPU at EVERY price point under $330:
http://www.techspot.com/guides/912-best-graphics-cards-2014/

Do we see AMD commanding 70-90% of the desktop market? Nope. It hasn't happened in 10 years!

All of this points to deep perception and brand value damage, similar to Cadillac and Hyundai 10 years ago.

AMD needs to focus on GE game performance/features and push Mantle across as many GCN products as possible, focus on getting mobile design wins, focus on strategic wins with high-end manufacturers like Apple, Alienware, Maingear, Origin, etc. to improve their brand image, focus on making OpenCL a better alternative to CUDA for professionals, etc.

Wasting resources to try and beat GM200 is a total waste of money for a company strapped for cash and anchored by debt. The 7970 GHz, 6990 and 295X2 beat the 680/590/Titan Z, and it did little for AMD's overall desktop market share.

AMD needs to outsmart NV by providing seamless CF support in the most popular games, and by providing the smoothest frame times and best minimum frame rates on single GPUs with Mantle. Additionally, AMD needs to execute better on unique features such as TrueAudio and FreeSync monitors, as well as DP 1.3 in the 300 series.

But I am afraid this is not enough. When the average PC gamer sees NV blowing AMD away in popular games like Unity, they are too nervous to buy AMD for fear of another popular game running 2-3X faster on NV due to GW. It's not what I want at all, because I support open rather than closed/proprietary features, but what I think AMD must do is use the same dirty proprietary, performance-destroying tactics NV has been using for years -- provide locked, optimized code to game developers, and optimize as many AAA games as possible for Mantle/DirectCompute. We are no longer in a fair battle of 7900 GTX vs. X1950 XTX where raw performance ruled; in modern games it's about who throws more resources at game developers. AMD needs to do that on the desktop above all other strategies.

If the 390X uses 20nm and water cooling to beat the 980, you can bet your marbles NV supporters will focus on performance/watt and the fact that AMD is so far behind that it needed 20nm and water cooling to keep up.

AMD also needs to figure out some way for their GPUs to perform faster when paired with Zen than with an Intel CPU. They need to figure out a way to provide asynchronous CF between Zen and any GCN part, as long as the generations align. For example, being able to Hybrid CF a 768 SP GCN 3.0 Zen APU with any GCN 3.0 GPU. Then it will not matter if Intel CPUs are faster in games, since with Zen you would get a free "GCN GPU" which could overcome any advantage of an Intel CPU in games. These strategies would challenge the Intel+NV dominance in the eyes of the average gamer.
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
^ I am in the camp that thinks GM200 780Ti successor (not 780 successor) will be 40-50%+ faster than 980, not 20-30%.

I didn't say GM200 will be 20-30% faster than the 980, I said the 390X needs to be ~20% faster than the 980 to be competitive (price/performance) against GM200.

;)
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Nonsense. GK110 scaled linearly over GK104, at least up to the GTX 780. People make the mistake of ignoring that GK104 boosts higher than GK110, but per clock the increase is (almost) perfect:
http://ht4u.net/reviews/2013/nvidia_geforce_gtx_780_ti_gk110_complete_review/index42.php

The GTX 780 is 53.3% faster than the GTX 680 clock for clock (with 50% more memory bandwidth). Titan and the Ti are increasingly limited by memory bandwidth, I suppose, so scaling breaks down a bit. If you increase ALL parameters starting from GM204, including bandwidth, it will scale excellently despite GPGPU. Better compute functionality requires more transistors, which require additional die space, but not necessarily more power under gaming loads. And even if it did cost power, that is completely irrelevant for architectural scaling in gaming.

The same for 7970 GHz vs 7870 btw - 60% more compute power (4096 GFLOPs vs 2560 GFLOPs) and 59% more performance.

According to your own link,

The GTX 780 Ti at its default 928 MHz boost is only 37% faster at 1080p and 40% faster at 1440p than the GTX 770.
The GTX 780 Ti has 87.5% more shaders, 87.5% more texture units, 50% more ROPs and 50% more memory bandwidth than the GTX 770.

I believe we may see almost the same with GM200 vs the GTX 980.
So if the 390X is 20% faster than the 980, it will be fine at $499-$550.

;)
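
As a purely illustrative follow-up, applying a Kepler-style scaling discount to the speculative 3072-shader GM200 layout raghu78 sketched earlier (a guess, not a confirmed spec) gives a feel for the gap a 390X would be up against:

Code:
# Hypothetical GM200 vs GM204 (GTX 980). The 3072-shader figure is the
# speculative 4 GPC x 6 SMM x 128 layout from earlier in the thread.
gm204_shaders = 2048                            # GTX 980
gm200_shaders = 3072                            # speculative
unit_gain = gm200_shaders / gm204_shaders - 1   # +50% more shaders

for eff in (0.60, 0.75):                        # effective-scaling range seen with GK110 vs GK104
    print(f"effective factor {eff:.2f}: GM200 about {unit_gain * eff:+.0%} over GTX 980")
# -> roughly +30% to +38%, which is why a 390X sitting ~20% above the 980
#    would have to lean on price to stay competitive.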
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I didn't say GM200 will be 20-30% faster than the 980, I said the 390X needs to be ~20% faster than the 980 to be competitive (price/performance) against GM200.

;)

This is only going to work for the brand-agnostic price/performance buyers who bought the 4870/5850/6950/7950/290. The price/performance strategy has failed AMD many times on the desktop. Now, what if you could CF a 370X with an APU? Someone on a budget who can only spend $130 on a CPU but wants 380-level performance could get close by pairing a 512-768 SP GCN 2.0 APU with a 370X. This is a strategy neither Intel nor NV can compete with. It's a totally unique value-add for both AMD CPUs and GPUs. Think about how many people purchase a $120-190 i3/i5 and a $200-250 NV GPU. Now take the additive performance of the GCN GPU in the APU and a $200-250 AMD GPU (which happens to be faster than NV on price/performance anyway).

This combination would allow the two AMD components in tandem to punch WAY above their power class in GPU-limited games. NV and Intel would be scrambling to respond to that aggregate GPU power. Unfortunately, AMD has never been able to get Hybrid CF between a low-end and a mid-range/flagship GPU to work properly. All of a sudden the 15-20% GPU advantage of GM200 would be wiped out by the GPU power in the APU. As memory technology improves, AMD needs to create a unified shared memory pool and allow any APU and GCN GPU to CF. AMD could price the APU cheap but make up the lost APU profits with added sales of AMD motherboards and GPUs.

If this is too complex, AMD needs to allow CF for any GCN GPU combination (390 with 370X). This would also increase sales.
 
Last edited:

tential

Diamond Member
May 13, 2008
7,348
642
121
This is only going to work for brand agnostic price/performance buyers who bought 4870/5850/6950/7950/290. Price/performance strategy failed AMD many times on the desktop. Now what if you could CF a 370X with an APU. Someone on a budget who can only spend $130 on a CPU but wants 380 level of performance could get close by pairing an 512-768 GCN 2.0 APU with a 370X. This is a strategy neither Intel nor NV can compete with. It's a totally unique value add for both AMD CPUs and GPUs. Think about how many people purchase $120-190 i3/i5s and $200-250 NV GPU. Now take the additive performance of GCN in the APU and a $200-250 AMD GPU (which happens to be faster than NV due to price/performance anyway).

This combination will allow the 2 AMD components in tandem to punch WAY above their power class in GPU limited games. NV and Intel will be scrambling to respond to that aggregate GPU power. Unfortunately, AMD has never been able to get Hybrid CF with a low-end and mid-range/flagship GPU to work properly. All of a sudden the 15-20% GPU advantage of GM200 will be wiped out by the GPU power in the APU. As memory technology improves, AMD needs to create a Unified shared memory pool and allow any APU and GCN to CF. AMD would price the APU cheap but make up the lost APU profits with added sales of AMD motherboards and GPU sales.

If this is too complex, AMD needs to allow CF for any GCN GPU combination (390 with 370X). This would also increase sales.

Is my memory completely shot, or wasn't this a touted feature of CrossFire at the VERY beginning, when we first heard about it? I remember reading how we were supposed to be able to use two different cards together, and then after that we never heard of it again and it was two of the same card. I wish I even knew where to begin looking for a link to this info.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
But wouldn't you just be trading off where the fps comes from? The performance you gain by using the onboard APU in tandem with a 370X is traded away in the first place by poorer CPU performance.
It would also help to see actual real-world performance gains from pairing a 370X with a 512-768 shader APU.
Edit: Just adding that I know all scenarios could have a surprisingly different outcome.
 
Last edited:

dacostafilipe

Senior member
Oct 10, 2013
810
315
136
Is my memory completely shot, or wasn't this a touted feature of CrossFire at the VERY beginning, when we first heard about it? I remember reading how we were supposed to be able to use two different cards together, and then after that we never heard of it again and it was two of the same card. I wish I even knew where to begin looking for a link to this info.

It does not work great with AFR. Maybe with SFR, if work can be split asymmetrically across the GPUs. Until then it's kinda pointless...

Or when you can use the second GPU (the iGPU?) for stuff like compute. Who knows...
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
It does not work great with AFR. Maybe with SFR, if work can be split asymmetrically across the GPUs. Until then it's kinda pointless...

Or when you can use the second GPU (the iGPU?) for stuff like compute. Who knows...

Poorer performance, as stated by Keys (already casting doubt on Zen and Keller, huh), but with SFR it could improve smoothness... maybe?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Right now, while SLI and CF seem complex, they are actually very primitive. The AFR approach more or less only works well when both GPUs are similar in power. Think about a multi-core CPU: some games can use 3 cores well, but the 4th core adds only 10-20% more performance. With proper game coding, however, the 4th core really acts as an additive resource. Instead of splitting 50% of the load onto one GPU and 50% onto the other as in CF/SLI AFR, imagine if the game viewed a 4096 SP GPU and an extra 768 SP GPU as a common pool of workers. If you get this to work, it doesn't matter if you pair a 390X with a 380X. AMD and NV launched SLI/CF a long time ago and then they stopped innovating.
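
A toy sketch of that "common pool of workers" idea, splitting work by shader count instead of alternating frames (purely hypothetical; nothing like this exists in current CF/SLI drivers):

Code:
# Split one frame's screen tiles across mismatched GPUs in proportion to their
# shader counts, instead of alternating whole frames (AFR).
def split_workload(total_tiles, gpus):
    # assign screen tiles to each GPU in proportion to its shader count
    total_shaders = sum(gpus.values())
    return {name: round(total_tiles * shaders / total_shaders)
            for name, shaders in gpus.items()}

pool = {"discrete_gpu": 4096, "apu_igpu": 768}   # the 4096 SP + 768 SP example above
print(split_workload(total_tiles=1024, gpus=pool))
# -> {'discrete_gpu': 862, 'apu_igpu': 162}: the small GPU adds ~16% more
#    workers to the pool rather than being forced to render every other frame.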

Another idea is to create a graphics card with a swappable ASIC. Right now we drop a CPU into a socket on a motherboard. Think of the graphics card as a motherboard with memory, and the ASIC as its heart. Before now this was unthinkable, since memory bandwidth was always limiting. With HBM we will see exponential gains in memory bandwidth, the greatest leap in a decade in how bandwidth works. Maybe there is a way to reuse the GPU board for 1-2 generations. After all, a 970/980 can be made on the exact 760/670 PCB!

While my swappable-ASIC idea is probably not reasonable, the idea of combining GPUs of differing power in SLI and CF has been set aside. I realize it is probably a lot more difficult to make work than AFR, but sometimes it takes hard work and time to move forward. If AMD or NV gets this right, it will be a big differentiating factor between them.
 
Last edited:

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
In the chart you mention the GTX 780 Ti (966 Mhz base) is 44% faster than GTX 770 (1084 Mhz base) at 2560 x 1440. GTX 780 Ti has 87.5% more shaders than GTX 770. But the actual perf increase is 45 - 50% .

http://ht4u.net/reviews/2013/nvidia_geforce_gtx_780_ti_gk110_complete_review/index42.php

http://www.computerbase.de/2013-11/nvidia-geforce-gtx-780-ti-vs-gtx-titan-test/4/

http://www.hardware.fr/articles/912-22/recapitulatif-performances.html

Even the classified GTX 780 Ti which boosts to 1100 - 1150 Mhz at stock and can be considered on a clock for clock comparison with ref GTX 770 which boosts to 1150 Mhz is just 64% faster (61 * 1.64 = 100.04) than GTX 770 at 1600p and just 51.5% faster than GTX 770 (66 * 1.515 = 99.99) at 1080p. So the claims of perfect scaling are just wrong. Linear scaling maybe at a factor of 0.75x is what we see. ( 87.5 x 0.75 = 65)

http://www.techpowerup.com/reviews/EVGA/GTX_780_Ti_Classified/24.html

GK110 has 5 GPCs, each with 3 Kepler SMX ((192 x 3) x 5 = 2880).
GK104 has 4 GPCs, each with 2 Kepler SMX ((192 x 2) x 4 = 1536). The discrepancy is because Nvidia had to pack in all the necessary hardware for Quadro/Tesla features and double-precision performance. Expect the same for GM200. Since Nvidia has already shown GM107 (GTX 750 Ti) with 1 GPC and 5 Maxwell SMM ((128 x 5) x 1 = 640), I can foresee a GM200 with 4 GPCs, each having 6 Maxwell SMM ((6 x 128) x 4 = 3072).

The R9 280X is roughly 35% faster than R9 270X at same clocks for a 60% higher shader/flops count.

http://www.techpowerup.com/reviews/MSI/R9_280X_Gaming_6_GB/24.html

Also the perf gain can change from game to game depending on whether the game is shader bound or raster bound or bandwidth bound or texturing perf bounded. You could get close to perfect scaling on average in few rare cases if you perfectly scale all resources. Eg : R9 290 is 2x the HD 7870 . Shaders, ROPs, bandwidth, TMU, Shader engines, geometry engines, raster engines.

http://www.techpowerup.com/reviews/Sapphire/R9_290_Vapor-X/25.html

R 290 Vapor-X (1030 Mhz) is 82% faster than HD 7870 (1000 Mhz) at 1080p. R 290 Vapor-X (1030 Mhz) is 96% faster than HD 7870 (1000 Mhz) at 1600p. (51 x 1.96 = 99.96). With clock normalization you are looking at close to 80% scaling at 1080p and 93 - 94% at 1600p. This only happens when all other resources are perfectly scaled which is very rare and will not happen with GM200. Even then it happens only in high resolutions. GM200 could be 35% faster than GM204 and still it would be mighty impressive. :whiste:

You clearly didn't read my post. I said "up to the GTX 780". Everything above that scales worse because bandwidth begins to be a limiting factor. All other reviews besides HT4U's are useless here since they don't benchmark with fixed clocks, so the influence of clock speed is uncertain and can shift results depending on cooling/boost.

And why do you ignore my example of the 7870 vs the 7970 GHz? Afaik Tahiti has 2 raster engines, just like Pitcairn. Who is to say those don't have an impact on scaling along with the rest of the function blocks? The results are inconclusive here.

Why will it not happen? How could you possibly know? Did you design GM200? It is absolutely possible that GM200 increases ROPs, TMUs, SPs and bandwidth by x%. Why would it be unlikely for performance to increase by that x% too - be it 30, 40 or 50%?
Only if, for example, GM200 were to double the shader count (while retaining similar GPU clocks) but increase bandwidth by only 50% would it be quite certain that scaling would be non-linear.

Fact is: linear scaling can and does happen, as evidenced by my examples. Another example would be the GTX 680 vs the GTX 650 Ti.
 
Last edited:

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Right now, while SLI and CF seem complex, they are actually very primitive. The AFR approach more or less only works well when both GPUs are similar in power. Think about a multi-core CPU: some games can use 3 cores well, but the 4th core adds only 10-20% more performance. With proper game coding, however, the 4th core really acts as an additive resource. Instead of splitting 50% of the load onto one GPU and 50% onto the other as in CF/SLI AFR, imagine if the game viewed a 4096 SP GPU and an extra 768 SP GPU as a common pool of workers. If you get this to work, it doesn't matter if you pair a 390X with a 380X. AMD and NV launched SLI/CF a long time ago and then they stopped innovating.

Another idea is to create a graphics card with a swappable ASIC. Right now we drop a CPU into a socket on a motherboard. Think of the graphics card as a motherboard with memory, and the ASIC as its heart. Before now this was unthinkable, since memory bandwidth was always limiting. With HBM we will see exponential gains in memory bandwidth, the greatest leap in a decade in how bandwidth works. Maybe there is a way to reuse the GPU board for 1-2 generations. After all, a 970/980 can be made on the exact 760/670 PCB!

Piggybacking off your brainstorming: what if graphics cards were also gaming cards, meaning they perform the same even with a low-end CPU because they have their own CPUs on-die? With something like that, Nvidia wouldn't be left depending on an AMD or Intel CPU. It would also greatly simplify upgrades and game spec requirements.

Basically an APU on a graphics card, muahahha :biggrin:
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
A CPU integrated into the GPU die has been talked about for a while now. Some even expected Maxwell to contain a CPU. Maybe big Maxwell, or perhaps Pascal, R390/490.
 
Last edited:

beginner99

Diamond Member
Jun 2, 2009
5,320
1,768
136
This is only going to work for brand agnostic price/performance buyers who bought 4870/5850/6950/7950/290. Price/performance strategy failed AMD many times on the desktop. Now what if you could CF a 370X with an APU. Someone on a budget who can only spend $130 on a CPU but wants 380 level of performance could get close by pairing an 512-768 GCN 2.0 APU with a 370X. This is a strategy neither Intel nor NV can compete with. It's a totally unique value add for both AMD CPUs and GPUs. Think about how many people purchase $120-190 i3/i5s and $200-250 NV GPU. Now take the additive performance of GCN in the APU and a $200-250 AMD GPU (which happens to be faster than NV due to price/performance anyway).

This combination will allow the 2 AMD components in tandem to punch WAY above their power class in GPU limited games. NV and Intel will be scrambling to respond to that aggregate GPU power. Unfortunately, AMD has never been able to get Hybrid CF with a low-end and mid-range/flagship GPU to work properly. All of a sudden the 15-20% GPU advantage of GM200 will be wiped out by the GPU power in the APU. As memory technology improves, AMD needs to create a Unified shared memory pool and allow any APU and GCN to CF. AMD would price the APU cheap but make up the lost APU profits with added sales of AMD motherboards and GPU sales.

If this is too complex, AMD needs to allow CF for any GCN GPU combination (390 with 370X). This would also increase sales.

It would be great, but currently Intel CPUs have roughly double the IPC, so going with an AMD APU is just not very feasible, especially when looking at games like BF4 and Crysis 3.

But yeah, the main point I agree with is that the relatively powerful GPU in an APU (compared to Intel) needs to add much more value than it does now. It's overpowered (too expensive) for 2D/office work, underpowered for gaming.