[BitsAndChips]390X ready for launch - AMD ironing out drivers - Computex launch


therealnickdanger

Senior member
Oct 26, 2005
987
2
0
As long as we're all speculating, what if AMD has discovered a way to bridge multiple GPUs in a way that is transparent to the game? What if this actually is a dual GPU, but operates as seamlessly as one? That would certainly be a game changer.
 

dacostafilipe

Senior member
Oct 10, 2013
804
305
136
As long as we're all speculating, what if AMD has discovered a way to bridge multiple GPUs in a way that is transparent to the game? What if this actually is a dual GPU, but operates as seamlessly as one? That would certainly be a game changer.

That's exactly what I was speculating about.

It sounds complicated, and a nightmare to manage at the cache/memory level, but who knows.

PS: Maybe this is not the right thread to discuss this?
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
As long as we're all speculating, what if AMD has discovered a way to bridge multiple GPUs in a way that is transparent to the game? What if this actually is a dual GPU, but operates as seamlessly as one? That would certainly be a game changer.

The die pictured on the previous page would be a horrible use of space.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
As long as we're all speculating, what if AMD has discovered a way to bridge multiple GPUs in a way that is transparent to the game? What if this actually is a dual GPU, but operates as seamlessly as one? That would certainly be a game changer.

The fact that the slide at an official conference states Controllers and not GPUs almost entirely destroys the dual-GPU theory. That would be grossly misleading to the viewers. Secondly, the slide says AMD and SKHynix developed a "special prototype." That means this could be just another way to package HBM but might not be what is actually implemented on an R9 390X.
 

StereoPixel

Member
Oct 6, 2013
107
0
71
If you look at the image, there are no GPUs shown at all. The "Dual GPU?" has been Photoshopped onto the image. The actual slide just shows the controllers and the DRAM. It's FUD.

LOL. I just added a question onto the image. This is not a statement, it's just my question. It's my guess.
Controller = GPU, because the GPU has integrated memory controllers.
Looking at the image, there is a giant dual-GPU single-package prototype; if it is true, we will see it as one GPU with a high-speed HyperTransport 4.0 link.
 
Dec 30, 2004
12,553
2
76
The higher-ups at AMD seem to have been making a lot of miscalculations. I don't know how many more the company can afford. I'm starting to get the feeling that AMD has pretty much nothing of substance to offer until 2016 - assuming they survive long enough to reach the promised land of 16nm FinFET.



That excuse doesn't work for most of AMD's lineup. Of all the GCN chips, only Tahiti and Hawaii had substantial DP performance (1/4 SP for Tahiti, 1/2 SP for Hawaii, though the latter was limited to 1/8 on Radeons). Cape Verde, Pitcairn, Bonaire, and Tonga all have very limited DP performance (1/16 SP). The AMD cards do all work better with OpenCL, but that's a driver optimization issue, not a hardware limitation. Maxwell actually increased compute performance relative to Kepler on some integer workloads (Scrypt mining).
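To put those DP ratios in perspective, here is a minimal back-of-the-envelope sketch (my own arithmetic; the shader counts and clocks are rough reference-card assumptions, not exact product specs):

```python
# Rough peak-throughput arithmetic for the DP ratios mentioned above.
# FLOPS = shaders * 2 (FMA) * clock; DP = SP * ratio. All figures approximate.
def peak_tflops(shaders, clock_ghz, ratio=1.0):
    return shaders * 2 * clock_ghz * ratio / 1000

tahiti_dp  = peak_tflops(2048, 0.925, 1/4)   # ~0.95 TFLOPS DP (HD 7970-class, 1/4 rate)
hawaii_dp8 = peak_tflops(2816, 1.0,   1/8)   # ~0.70 TFLOPS DP (Radeon 290X-class, 1/8 rate)
hawaii_dp2 = peak_tflops(2816, 1.0,   1/2)   # ~2.8 TFLOPS DP (FirePro-class, 1/2 rate)
tonga_dp   = peak_tflops(1792, 0.918, 1/16)  # ~0.21 TFLOPS DP (R9 285-class, 1/16 rate)

print(tahiti_dp, hawaii_dp8, hawaii_dp2, tonga_dp)
```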

I think something went very wrong with Tonga - it was supposed to compete with Maxwell on perf/watt, but it couldn't. Maybe it was initially designed for GloFo's 28nm SHP process, but that didn't work out for some reason or other and it got ported to TSMC instead. The fact that no fully enabled Tonga has been released to the consumer AIB market is really odd - surely yields on a mature process like 28nm can't be so poor that the high-end Retina iMac is sucking up all of the non-defective chips. Even if AMD wanted to hold off on it on consumer cards because of a large back stock of Tahiti (sunk cost fallacy), that doesn't explain why they used a castrated Tonga for FirePro W7100 instead of the full chip.

sometimes I wonder if RussianSensation is a paid bot-contributor just to generate comments to keep ATF alive
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
A dual chip for a 390X seems so far-fetched to me. There is no evidence of this, just something Fuad dreamed up when AMD said their VR presentation was possible only with a dual configuration of their unreleased GPU. I guess somehow he thought that AMD couldn't have come up with two preproduction Fiji GPUs in CF and somehow dreamed up that Fiji is two chips on one PCB.

It is the last thing I would have concluded with that information.

I think of it like this:
Fiji is the big daddy to Tonga. The chip is a full Tonga doubled... but the better way to look at it is in reverse. Tonga is half of Fiji, just as the GM204 is half of a GM200. I am pretty sure of this. There is no reason to think Fiji is a dual Tonga chip; it is more like the big daddy, just as the GM200 is Maxwell's big daddy.

I am really leaning this way. But that means there really isn't a full brand new line up coming. The fact that Tonga launched with disabled parts may give a glimpse that there have been issues. It may be a result of trying to bring these chips to 28nm. If producing Tonga was problematic, Fiji would have to be that much more complex.

The more I have thought about it, the more it makes sense. A full Tonga is half of Fiji. These are the chips AMD is working with. Of course, there could be other improvements with the extra time that AMD has had since Tonga. But designing entirely new chips takes far too long. I believe Tonga/Fiji was the path forward, but AMD was banking on the node, which turned out to be a major setback for them.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
sometimes I wonder if RussianSensation is a paid bot-contributor just to generate comments to keep ATF alive

:thumbsup: :D

I think the state of CPUs for gaming today (i.e., a Core i7 920 @ 4.0-4.4GHz or Core i5 2500K / 2600K @ 4.5-4.8GHz) means that the excitement in the CPU section of the forum is all but dead. That used to be one of my favourite sub-forums. Naturally, as CPU upgrades were exciting, so was the research for a motherboard and after-market CPU cooling. These 2 upgrades went hand-in-hand. Today a "bare-bones" X99 board like the Asrock X99 Extreme 4 is packed with so many features and high-end components that you now only research the specific extra features you want beyond a baseline that's already so good. In the past, older boards had horrible BIOSes, budget parts, and poor overclocking/instability. Those days are largely behind us. Skylake delays and the X99 Broadwell-E delay to Q1 2016, which pushes Skylake-E even farther out, aren't making the situation better.

For CPU cooling, Corsair/NZXT AIO CLC or CM212+/Thermalright/Noctua fill in that space. Sure there are occasional super deals like Zalman CNPS14x for $10 but since CPU's power usage hasn't really increased much from Core i7 920 OC days, any solid high-end cooler such as Thermalright True Spirit 140/ Silver Arrow or Noctua NH-D14 can keep on trucking for many generations.

With SSDs, Crucial and the Samsung 850 Evo more or less dominate the price/performance categories, while SanDisk and the Samsung 850 Pro dominate the high-end. In the PCIe SSD space, we have Intel and Samsung, more or less. In the PSU space, basically anything from SeaSonic, Corsair, Antec, LEPA/Enermax, Rosewill or EVGA is rock solid for the most part.

Essentially the high quality products and bang-for-the-buck products are so obvious today that it basically requires very minimal research for PC enthusiasts who have been building for 10-20 years+.

That leaves custom water-cooling loops, GPU upgrades and monitor upgrades as the last 3 exciting areas left imho. Custom water-loops are very niche while 4K monitors are hardly affordable for the masses and we have a serious problem with FreeSync/GSync similar to HD-DVD vs. BluRay which is holding back many people (in addition to lack of high quality IPS panels and larger sized panels in this space as of now).

Getting back to GPUs, delays of huge games like GTA V, The Witcher 3, Batman AK, Project CARS, and The Division, plus games that were horribly optimized at launch with sub-par graphics like Watch Dogs and AC Unity, have all weighed on the excitement of GPU upgrades. BF Hardline seems like a meh game overall, while the GTX 970/980 hardly raised the performance bar from the now-old R9 290/290X series. That leaves us with the $1K Titan X as the most exciting GPU out today. It's no wonder this is a pretty uneventful time in PC gaming for experienced builders. It probably would be a lot different if we had a lot of truly next-generation PC games like Star Citizen and TW3 out today and if 4K monitors were offered in many varieties/sizes and price levels. That would have forced a lot more GPU upgrades.

Also, the 28nm node is heavily weighing down on this generation. It makes it too difficult to ignore that 16nm FinFET+ and 14nm are not that far away and are going to be a gargantuan leap.

"The foundry [TSMC] said its 16FF+ process will deliver a 10% performance uplift than competing nodes, while at the same time consuming 50% less power than its current 20nm node." ~ Source
^ That's against a 20nm node, imagine against a 28nm one?!
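For what it's worth, here is how those marketing numbers compound if you stack them against 28nm; the 28nm -> 20nm power figure below is purely my placeholder assumption for illustration, not something TSMC quoted:

```python
# Compounding iso-performance power reductions across two node transitions.
# The 16FF+ vs 20nm figure (50%) comes from the quote above; the 28nm -> 20nm
# figure is an assumed placeholder, used only to show how the math stacks.
power_20_vs_28 = 0.75   # assumption: 20nm needs ~75% of 28nm power
power_16_vs_20 = 0.50   # per the quote: 16FF+ needs ~50% of 20nm power

power_16_vs_28 = power_20_vs_28 * power_16_vs_20
print(f"16FF+ power vs 28nm: {power_16_vs_28:.0%}")  # ~38%, i.e. 60%+ savings
```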

With HBM/HBM2 and 14nm/16nm, and reference AIO CLC, it's not out of the question that AMD/NV could in theory get GPUs 75-100% faster than Titan X out of the next generation if they could manage to build 550-600mm2 successors. A lot of gamers are now thinking that the longer we wait for R9 390X and the competitor's equivalent, the closer we are to 14nm/16nm generation. That makes this generation one of the least exciting to me in a long time. (I am assuming 14nm/16nm GPUs actually deliver but if all we get from September 2016 to September 2017 are mid-range next gen parts...ahem....then that would be seriously disappointing).

Also, I think a lot of people are anxious to see what Windows 10 and DX12 games do to change the PC gaming landscape but again we likely won't start seeing the fruits of that until 2016 and even 2017.

Just my 2 cents.

I am really leaning this way. But that means there really isn't a full brand new line up coming.

I think what you meant is "a full new architecture" instead of a new line-up. AMD can easily improve R9 290/290X performance by 5-10% and drop power usage. At the same time, there are bound to be 2-3 SKUs based on R9 390X as AMD will end up with non-full die yielding 4096 SP chips. Anyway, everyone knew a long time ago that R9 390 series was not a new architecture (aka not post GCN) but a continuous improvement of GCN. Most likely it could be called GCN 1.3 and use the foundation of Tonga and improve even beyond that in terms of perf/watt due to a more mature 28nm node. However, it would be totally wrong to equate the dramatic architectural moves from Fermi -> Kepler -> Maxwell to GCN 1.0-> 1.1 -> 1.2 -> 1.3(?). Part of that is because GCN was always built to be modular and a strong compute architecture from the ground-up. It was never meant to be a 2-3 year architecture only. When Eric Demers unveiled GCN, it became obvious AMD would use this for at least 5 years from his presentation. HD7970 was unveiled on Dec 22, 2011 and GCN is going to be the foundation for R9 300 series which means by end of this year the architecture will turn 4 years old.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
:thumbsup: :D

I think the state of CPUs for gaming today (i.e., a Core i7 920 @ 4.0-4.4GHz or Core i5 2500K / 2600K @ 4.5-4.8GHz) means that the excitement in the CPU section of the forum is all but dead. That used to be one of my favourite sub-forums. Naturally, as CPU upgrades were exciting, so was the research for a motherboard and after-market CPU cooling. These 2 upgrades went hand-in-hand. Today a "bare-bones" X99 board like the Asrock X99 Extreme 4 is packed with so many features and high-end components that you now only research the specific extra features you want beyond a baseline that's already so good. In the past, older boards had horrible BIOSes, budget parts, and poor overclocking/instability. Those days are largely behind us. Skylake delays and the X99 Broadwell-E delay to Q1 2016, which pushes Skylake-E even farther out, aren't making the situation better.

For CPU cooling, Corsair/NZXT AIO CLC or CM212+/Thermalright/Noctua fill in that space. Sure there are occasional super deals like Zalman CNPS14x for $10 but since CPU's power usage hasn't really increased much from Core i7 920 OC days, any solid high-end cooler such as Thermalright True Spirit 140/ Silver Arrow or Noctua NH-D14 can keep on trucking for many generations.

With SSDs, Crucial and the Samsung 850 Evo more or less dominate the price/performance categories, while SanDisk and the Samsung 850 Pro dominate the high-end. In the PCIe SSD space, we have Intel and Samsung, more or less. In the PSU space, basically anything from SeaSonic, Corsair, Antec, LEPA/Enermax, Rosewill or EVGA is rock solid for the most part.

Essentially the high quality products and bang-for-the-buck products are so obvious today that it basically requires very minimal research for PC enthusiasts who have been building for 10-20 years+.

That leaves custom water-cooling loops, GPU upgrades and monitor upgrades as the last 3 exciting areas left imho. Custom water-loops are very niche while 4K monitors are hardly affordable for the masses and we have a serious problem with FreeSync/GSync similar to HD-DVD vs. BluRay which is holding back many people (in addition to lack of high quality IPS panels and larger sized panels in this space as of now).

Getting back to GPUs, delays of huge games like GTA V, The Witcher 3, Batman AK, Project CARS, and The Division, plus games that were horribly optimized at launch with sub-par graphics like Watch Dogs and AC Unity, have all weighed on the excitement of GPU upgrades. BF Hardline seems like a meh game overall, while the GTX 970/980 hardly raised the performance bar from the now-old R9 290/290X series. That leaves us with the $1K Titan X as the most exciting GPU out today. It's no wonder this is a pretty uneventful time in PC gaming for experienced builders. It probably would be a lot different if we had a lot of truly next-generation PC games like Star Citizen and TW3 out today and if 4K monitors were offered in many varieties/sizes and price levels. That would have forced a lot more GPU upgrades.

Also, the 28nm node is heavily weighing down on this generation. It makes it too difficult to ignore that 16nm FinFET+ and 14nm are not that far away and are going to be a gargantuan leap.

"The foundry [TSMC] said its 16FF+ process will deliver a 10% performance uplift than competing nodes, while at the same time consuming 50% less power than its current 20nm node." ~ Source
^ That's against a 20nm node, imagine against a 28nm one?!

With HBM/HBM2 and 14nm/16nm, and reference AIO CLC, it's not out of the question that AMD/NV could in theory get GPUs 75-100% faster than Titan X out of the next generation if they could manage to build 550-600mm2 successors. A lot of gamers are now thinking that the longer we wait for R9 390X and the competitor's equivalent, the closer we are to 14nm/16nm generation. That makes this generation one of the least exciting to me in a long time. (I am assuming 14nm/16nm GPUs actually deliver but if all we get from September 2016 to September 2017 are mid-range next gen parts...ahem....then that would be seriously disappointing).

Also, I think a lot of people are anxious to see what Windows 10 and DX12 games do to change the PC gaming landscape but again we likely won't start seeing the fruits of that until 2016 and even 2017.

Just my 2 cents.



I think what you meant is "a full new architecture" instead of a new line-up. AMD can easily improve R9 290/290X performance by 5-10% and drop power usage. At the same time, there are bound to be 2-3 SKUs based on R9 390X as AMD will end up with non-full die yielding 4096 SP chips. Anyway, everyone knew a long time ago that R9 390 series was not a new architecture (aka not post GCN) but a continuous improvement of GCN. Most likely it could be called GCN 1.3 and use the foundation of Tonga and improve even beyond that in terms of perf/watt due to a more mature 28nm node. However, it would be totally wrong to equate the dramatic architectural moves from Fermi -> Kepler -> Maxwell to GCN 1.0-> 1.1 -> 1.2 -> 1.3(?). Part of that is because GCN was always built to be modular and a strong compute architecture from the ground-up. It was never meant to be a 2-3 year architecture only. When Eric Demers unveiled GCN, it became obvious AMD would use this for at least 5 years from his presentation. HD7970 was unveiled on Dec 22, 2011 and GCN is going to be the foundation for R9 300 series which means by end of this year the architecture will turn 4 years old.

Great post.

With CPU competition done (and most 'good' CPUs from the past 3-4 years still great performers in games), the only 'real' advancement has been with SSDs and GPUs. Most builders have a relatively modern-gen SSD now (think 840 Pro or newer), likely ~256GB or bigger, so that leaves GPUs. And with 28nm bogging us down for so long, it only stokes the fires a little bit within 'us' enthusiasts.

That's why I built a custom water loop last year, just as something 'new' to do that I never got around to previously. When MBs are getting refreshed solely for USB 3.1, you know people are bored. :p

Here is to the next GPU gen. Let's hope we see the gains you talk about... :)
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
I think what you meant is "a full new architecture" instead of a new line-up. AMD can easily improve R9 290/290X performance by 5-10% and drop power usage. At the same time, there are bound to be 2-3 SKUs based on R9 390X as AMD will end up with non-full die yielding 4096 SP chips. Anyway, everyone knew a long time ago that R9 390 series was not a new architecture (aka not post GCN) but a continuous improvement of GCN. Most likely it could be called GCN 1.3 and use the foundation of Tonga and improve even beyond that in terms of perf/watt due to a more mature 28nm node. However, it would be totally wrong to equate the dramatic architectural moves from Fermi -> Kepler -> Maxwell to GCN 1.0-> 1.1 -> 1.2 -> 1.3(?). Part of that is because GCN was always built to be modular and a strong compute architecture from the ground-up. It was never meant to be a 2-3 year architecture only. When Eric Demers unveiled GCN, it became obvious AMD would use this for at least 5 years from his presentation. HD7970 was unveiled on Dec 22, 2011 and GCN is going to be the foundation for R9 300 series which means by end of this year the architecture will turn 4 years old.

That is exactly what I meant.

I would also like to comment on this:

"The foundry [TSMC] said its 16FF+ process will deliver a 10% performance uplift than competing nodes, while at the same time consuming 50% less power than its current 20nm node." ~ Source
^ That's against a 20nm node, imagine against a 28nm one?!
Unless you are referring to low-power ARM chips, we might as well ignore these claims. For all the great stuff 20nm offered over 28nm, it was completely meaningless for GPUs.

As a matter of fact, if you read on in your link

He also mentioned that new Cortex-A72 designs on 16FF+ will offer a 3.5x performance increase over Cortex-A15 parts (presumably on 28nm silicon), while at the same time consuming 75% less power than the A15.
They are specifically talking about the low power ARM chips.

I would love to think or know that 16nm will bring massive improvements to us in GPUs. But looking around, the situation has put a scare in me. Seeing Intel's 22nm do so little at higher clocks was one thing. But then seeing TSMC 20nm HP just slip away because it is essentially worthless... as bad as those things looked, we now see Intel's complete blunder with Broadwell!!!!
It is not Broadwell, it is the node. Broadwell is a die shrink of Haswell and it could hardly have gone any worse. It is DOA for the top end. This is a huge scare for me, because I know Intel really believed things would work out. They believed until the last minute and now here we are. Node shrinks are not working out well for chips at the top end, for the high-performance chips. But the terrible part is that there is no plan B. Even Intel believed things would work out and kept marching forward.

TSMC fought hard with 20nm HP and got nowhere. Now everyone is so sure that 16nm HP will be just fine. Just like Intel believed in 14nm.

The thing is, Intel will have success, as 14nm allows them advancements in the ultra-low-power spectrum, just as TSMC will ship billions of 20nm LP chips. But to me, I am not liking what I see.

Everyone thinks 16nm will bring us forward in GPUs... I really, really hope so.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
But to me, I am not liking what I see.

Every single full node jump in GPUs gave us 50-100% increase in GPU performance when coupled with a brand new architecture. This time we have HBM/HBM 2.0 as well! Why would you think that going from 28nm -> 14nm/16nm + HBM would not give us 50-100% increase in GPU performance at similar power levels? I think you are too conservative. The last time NV had a full node jump + new architecture, 780Ti was 2X faster than a 580. AMD and XX will produce the largest single generational jump in memory bandwidth in its history going from 320-336GB/sec to 650-700GB/sec+. Already this generation AMD will be moving to 1-1.25Ghz HBM1 and for 14nm GPUs, HBM 2.0 will get even faster!
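As a sanity check on those bandwidth figures, here is the basic peak-bandwidth arithmetic (a sketch with assumed stack counts and per-pin rates, not vendor specs):

```python
# Peak memory bandwidth in GB/s = bus width (bits) * per-pin data rate (Gbps) / 8.
def peak_bw_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

print(peak_bw_gbs(512, 5.0))        # 320 GB/s - Hawaii-class 512-bit GDDR5 @ 5 Gbps
print(peak_bw_gbs(4 * 1024, 1.0))   # 512 GB/s - assumed 4-stack HBM1 @ 1.0 Gbps/pin
print(peak_bw_gbs(4 * 1024, 1.25))  # 640 GB/s - assumed 4-stack HBM1 @ 1.25 Gbps/pin
```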
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
clearly you haven't overclocked a FX-8xxx to 4.5ghz and beyond!!!

hehe. A $40 Thermalright True Spirit 140 is more than up to the task. I would have never bought an FX-8000/9000 series CPU for gaming to start with though since they lose in 95%+ of games to an overclocked i5 2500K, nevermind IVB & Haswell. Prices of the top air coolers have come down over the years since AIO CLC became more popular. As a result, you can now buy a top 3 air cooler that even beats most AIO CLCs for just $60. Chances are such a CPU cooler would easily last 5 years and survive 1-2 CPU upgrades if you wanted to.

That's what I meant: the excitement of researching the best CPU cooling, outside of custom water loops, just isn't there anymore. We are no longer experiencing dramatic leaps in cooling performance. Once we reached the level of the Thermalright Silver Arrow/Noctua NH-D14 in 2011, it's been a 2-4C improvement at best for the best air coolers like the NH-D15 and the Phanteks PH-TC14PE.

yep. 32" 1440p @ 85hz will be my next upgrade.

What kind of a monitor is that? What about LG's 34" 3440x1440 models?
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
Every single full node jump in GPUs gave us 50-100% increase in GPU performance when coupled with a brand new architecture. This time we have HBM/HBM 2.0 as well! Why would you think that going from 28nm -> 14nm/16nm + HBM would not give us 50-100% increase in GPU performance at similar power levels? I think you are too conservative. The last time NV had a full node jump + new architecture, 780Ti was 2X faster than a 580. AMD and XX will produce the largest single generational jump in memory bandwidth in its history going from 320-336GB/sec to 650-700GB/sec+. Already this generation AMD will be moving to 1-1.25Ghz HBM1 and for 14nm GPUs, HBM 2.0 will get even faster!

I know how node shrinks went before but here we are completely skipping 20nm. Has that ever happened before?

I don't know how things will go. But looking at Intel's 22nm and then 14nm, I am not confident for high-performance GPUs on 16nm. I am really thinking it might not be what everyone thinks.

I really hope I am wrong. Trust me, it is just this sick gut feeling, not anything I want to happen. It is just this terrible 'what if' I used to think about, and now it makes me sick because it started to stick in my head.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
I know how node shrinks went before but here we are completely skipping 20nm. Has that ever happened before?

I don't know how things will go. But looking at Intel's 22nm and then 14nm, I am not confident for high-performance GPUs on 16nm. I am really thinking it might not be what everyone thinks.

I really hope I am wrong. Trust me, it is just this sick gut feeling, not anything I want to happen. It is just this terrible 'what if' I used to think about, and now it makes me sick because it started to stick in my head.

The original review for the Radeon 7970 mentions that TSMC skipped their planned 32nm node.

That said, I understand and share your concerns. The biggest problem is that low-wattage portable devices are now sucking up all the foundries' time and attention, to the point that traditional full-strength CPUs and GPUs are being starved out. TSMC cares about Apple much more than they care about Nvidia. Even GloFo has customers other than AMD, and no real incentive to care what AMD thinks, because thanks to Hector Ruiz, AMD has to buy their crap anyway. It would be a catastrophe if 28nm was the last enthusiast node.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,801
1,528
136
I was looking at them and the actual size is 32.7". I don't know how they round it to 34" and not 33". I am way less likely to purchase a $900 monitor that is 32.7".

If that's true, that's terrible, especially considering that measuring ultrawide aspect ratios diagonally is already borderline misleading.
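To illustrate why diagonal measurements mislead across aspect ratios, a quick sketch (my own arithmetic, not from the thread):

```python
import math

# For a diagonal d and aspect ratio w:h,
# width = d*w/sqrt(w^2 + h^2) and height = d*h/sqrt(w^2 + h^2).
def panel_dimensions(diagonal_in, aspect_w, aspect_h):
    k = diagonal_in / math.hypot(aspect_w, aspect_h)
    return aspect_w * k, aspect_h * k   # (width, height) in inches

w1, h1 = panel_dimensions(34, 16, 9)    # ~29.6" x 16.7", ~494 sq in of area
w2, h2 = panel_dimensions(34, 21, 9)    # ~31.3" x 13.4", ~419 sq in of area
print(w1 * h1, w2 * h2)                 # same "34-inch" label, ~15% less area on the ultrawide
```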
 
Dec 30, 2004
12,553
2
76
hehe. A $40 Thermalright True Spirit 140 is more than up to the task. I would have never bought an FX-8000/9000 series CPU for gaming to start with though since they lose in 95%+ of games to an overclocked i5 2500K, nevermind IVB & Haswell. Prices of the top air coolers have come down over the years since AIO CLC became more popular. As a result, you can now buy a top 3 air cooler that even beats most AIO CLCs for just $60. Chances are such a CPU cooler would easily last 5 years and survive 1-2 CPU upgrades if you wanted to.

That's what I meant: the excitement of researching the best CPU cooling, outside of custom water loops, just isn't there anymore. We are no longer experiencing dramatic leaps in cooling performance. Once we reached the level of the Thermalright Silver Arrow/Noctua NH-D14 in 2011, it's been a 2-4C improvement at best for the best air coolers like the NH-D15 and the Phanteks PH-TC14PE.



What kind of a monitor is that? What about LG's 34" 3440x1440 models?

Well, the price worked out ($80 for the FX-8310) and it let me carry over my Thermalright Ultra-120. I would have made the same decision up to $115.

People talk about my cooler being 'out of date', but I have trouble understanding how basic thermodynamics becomes out of date.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Also, the 28nm node is heavily weighing down on this generation. It makes it too difficult to ignore that 16nm FinFET+ and 14nm are not that far away and are going to be a gargantuan leap.

"The foundry [TSMC] said its 16FF+ process will deliver a 10% performance uplift than competing nodes, while at the same time consuming 50% less power than its current 20nm node." ~ Source
^ That's against a 20nm node, imagine against a 28nm one?!

With HBM/HBM2 and 14nm/16nm, and reference AIO CLC, it's not out of the question that AMD/NV could in theory get GPUs 75-100% faster than Titan X out of the next generation if they could manage to build 550-600mm2 successors. A lot of gamers are now thinking that the longer we wait for R9 390X and the competitor's equivalent, the closer we are to 14nm/16nm generation. That makes this generation one of the least exciting to me in a long time. (I am assuming 14nm/16nm GPUs actually deliver but if all we get from September 2016 to September 2017 are mid-range next gen parts...ahem....then that would be seriously disappointing).

Given the yield struggles that even the mighty Intel is having with 14nm FinFET, I am pretty sure no GPU vendor will even think of a >500 sq mm 14/16nm FinFET die in 2016. Hell, even the GK110 Tesla launched only by late 2012, roughly a year from the start of 28nm production, and the GTX Titan launched 11 months after the GTX 680. I expect the first 14/16nm FinFET GPUs to be in the 300-350 sq mm range. These will launch by Q2 2016 (more towards Computex). Apple, Qualcomm and Samsung will make sure that not many 14/16nm wafers are available till Q1 2016. We are unlikely to see big-die 14/16nm GPUs before Q2 2017. So somebody who buys an R9 390X will most probably end up waiting 2 years to get a 75-100% jump. Of course that's an enormous leap, but that's something we are used to with a full process node change + architectural change. BTW, I expect AMD to come up with a ground-up new post-GCN architecture by H2 2017.
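For a rough sense of what a 300-350 sq mm first-wave FinFET die could hold, here is a sketch; the density multiplier is my assumption for illustration, not a published foundry figure:

```python
# How much 28nm-class logic fits in a mid-size 14/16nm FinFET die, assuming a
# ~2x logic-density improvement (assumed value, used only for illustration).
assumed_density_gain = 2.0
finfet_die_mm2 = 325                        # mid-point of the 300-350 sq mm estimate above
equivalent_28nm_mm2 = finfet_die_mm2 * assumed_density_gain
print(equivalent_28nm_mm2)                  # ~650 sq mm of 28nm-equivalent logic
```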
 

dacostafilipe

Senior member
Oct 10, 2013
804
305
136
XFX Radeon R9 390 Double Dissipation smiles for camera

[Image: XFX-R9-390-DD-2.jpg]

[Image: XFX-R9-390-DD-1.jpg]

Source: http://videocardz.com/55358/xfx-radeon-r9-390-double-dissipation-smiles-for-camera