[Rumor - WCCFTech] AMD Arctic Islands 400 Series Set To Launch In Summer of 2016

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Rumor

"AMD Arctic Islands 400 Series Set To Launch In Summer of 2016 – Features 2X The Performance Per Watt, 14nm / 16nm FinFET And HBM2

There hasn’t been any reliable information about Arctic Islands’ release timeframe, that is, until today. We have confirmed that the company is planning to introduce its next generation 14nm / 16nm family of graphics cards throughout the summer of 2016 and into the back to school season. The launch is still too far out to make out any specific details, and we have yet to confirm what the new series will actually be called, so we’re going to use “400 series” as a placeholder until then.

Read more: http://wccftech.com/amd-1416nm-arctic-islands-launching-summer-2016/#ixzz3tIlGJp84

Arctic Islands will feature three major improvements over AMD’s current R9 300 series and Fury series line-up. These include second generation HBM, the next generation “Arctic Islands” GCN architecture, as well as a more advanced 14/16nm FinFET manufacturing process. Together, these mean that for the first time since 2012 we’re going to get a truly next generation family of GPUs, with performance and power efficiency levels that vastly exceed what we see today from the 28nm 300 series and Fury series."


Read more: http://wccftech.com/amd-1416nm-arctic-islands-launching-summer-2016/#ixzz3tIl2znjO

If true, then other than the Fiji X2, AMD may have no 14nm/16nm GPUs until June 2016 at the earliest. Given AMD's shaky 'hard' launches in the last couple of generations, this could mean at least another quarter before ample supply of cards shows up in stores worldwide -- which explains the comment about back to school.
 
Last edited:

atticus14

Member
Apr 11, 2010
174
1
81
I think the back to school comment refers to their CEO talking about how they plan on introducing basically all-new cards from top to bottom, which will take place over several quarters. I guess the most fun question is: which comes first?

"Joe Moore – Morgan Stanley – Analyst
You have had a nice sequential quarter, but I still have your GPU business down quite a lot year over year. Now that you have products that are more competitive in the enthusiast segment, can you give us an upper bound of what you might be able to achieve there? Are there supply constraints that are keeping this small, and are you going to be able to regain the levels that you were at a year ago in GPU?
Lisa Su – Advanced Micro Devices, Inc. – President, CEO
Yes, so Joe, I think one quarter is good progress. Now you will have to watch us over a number of quarters regain that graphics momentum.
And when I think about it, relative to the Fury launch we did have some supply constraints in the third quarter. They were — they are largely solved in the fourth quarter, so I don’t think there will be any supply constraints.
I think it’s also fair to say that the graphics portfolio is quite broad, and so you will see us updating the entire portfolio over the coming quarters."
 

Actaeon

Diamond Member
Dec 28, 2000
8,657
20
76
2x performance per watt, so about what Maxwell does today? :)

Just joking, AI will surely be a beast and this is a step in the right direction. I don't care much about performance per watt; IMO it's an overstated 'metric' used by the anti-AMD crowd. But even as an AMD fan, there is no denying the performance per watt on Hawaii wasn't very good.

With what we're seeing with HBM and HBM2, I have to wonder if memory bandwidth will be a non-issue for a while. Maybe to the point where overclocking memory won't give you any real world benefit since the bandwidth isn't being saturated. Just crank up the core and go.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I don't care much about performance per watt; IMO it's an overstated 'metric' used by the anti-AMD crowd. But even as an AMD fan, there is no denying the performance per watt on Hawaii wasn't very good.

It is, especially when it's used in the useless context of the Engineer's perf/watt rather than the Gamer's/end-user's perf/watt (i.e., total system perf/watt)

Power_01.png

Power_02.png

Power_03.png


Inside a modern i5/i7 rig, the differences in power usage are so minor that anyone with a solid 520-600W PSU (Corsair, EVGA, PC Power & Cooling, Enermax, SeaSonic) shouldn't care that much. As I keep saying, it's often used as a marketing tactic to get people to upgrade, or as a veil to justify high price premiums for newer architectures without delivering on the key price/performance metric.
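To put rough numbers on the Engineer's vs. Gamer's perf/watt distinction, here is a minimal sketch; every wattage and fps figure in it is hypothetical, chosen only to illustrate how counting the rest of the rig shrinks the efficiency gap:

```python
# Hypothetical figures only: neither card nor rig numbers below are
# measurements of any real product.

def perf_per_watt(fps, watts):
    """Frames per second delivered per watt drawn."""
    return fps / watts

# "Engineer's" view: compare the cards in isolation.
card_a = perf_per_watt(60, 180)   # efficient card at 60 fps
card_b = perf_per_watt(60, 280)   # hungrier card at the same 60 fps
print(f"card-level advantage: {card_a / card_b:.2f}x")        # 1.56x

# "Gamer's" view: add the rest of an i5/i7 rig (CPU, board, RAM,
# drives) drawing a fixed ~150 W under gaming load.
REST_OF_RIG = 150
system_a = perf_per_watt(60, 180 + REST_OF_RIG)
system_b = perf_per_watt(60, 280 + REST_OF_RIG)
print(f"system-level advantage: {system_a / system_b:.2f}x")  # 1.30x
```

The same 100 W card-level gap that looks like a 1.56x efficiency win shrinks to about 1.30x once the whole rig is counted, which is the "total system perf/watt" point being made here.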

With what we're seeing with HBM and HBM2, I have to wonder if memory bandwidth will be a non-issue for a while. Maybe to the point where overclocking memory won't give you any real world benefit since the bandwidth isn't being saturated. Just crank up the core and go.

True, even on my HD6950 or HD7970, memory overclocking hardly matters. I think memory overclocking matters a lot more for GPUs that are starved by memory bandwidth. With the move to HBM2/GDDR5X, AI should have very little to gain from overclocking the memory.

The thing is, AMD has gotten the temps well under control, but Fiji isn't a good overclocker at all.

Temps.png


Perhaps AMD's higher transistor density and the architecture itself are what hamper their newer R9 290/Fury series from overclocking as well.

Unfortunately for AMD, even if their card is competitive in stock form, many enthusiasts do overclock and who wants to leave 20-25% on the table? That's one area where I feel AI will continue to have problems.

perfrel_3840_2160.png
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
2x performance per watt, so about what Maxwell does today? :)

Just joking, AI will surely be a beast and this is a step in the right direction. I don't care much about performance per watt; IMO it's an overstated 'metric' used by the anti-AMD crowd. But even as an AMD fan, there is no denying the performance per watt on Hawaii wasn't very good.

With what we're seeing with HBM and HBM2, I have to wonder if memory bandwidth will be a non-issue for a while. Maybe to the point where overclocking memory won't give you any real world benefit since the bandwidth isn't being saturated. Just crank up the core and go.

It's usually overstated, but not always, when it comes to desktop GPUs and comparing prices, performance, and OC headroom. I.e., efficiency alone is not a good enough reason, IMO, to spend noticeably more money for the same performance. On notebooks, though, efficiency is paramount.

I think perf/w, perf/mm2, and perf/transistor are barometers more indicative of architecture strengths (or weaknesses). AMD lost considerable ground at 28nm and is currently losing on all fronts, which makes the task of staying competitive at 14/16nm tougher. If AMD loses more ground in those metrics with next gen GPUs, then we'll end up seeing either a larger disparity in perf/w, in outright performance, or probably both.
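Those three barometers are straightforward to compute side by side; the specs below are made-up placeholders (not real chip figures), just to show how two designs can be compared on all three at once:

```python
# Placeholder specs for two hypothetical GPUs; none of these numbers
# describe a real product.

def barometers(fps, watts, die_mm2, transistors_bn):
    """Performance score divided by each resource the chip consumes."""
    return {
        "perf/W": fps / watts,
        "perf/mm2": fps / die_mm2,
        "perf/Btransistors": fps / transistors_bn,
    }

chip_a = barometers(fps=100, watts=250, die_mm2=600, transistors_bn=8.0)
chip_b = barometers(fps=100, watts=180, die_mm2=560, transistors_bn=7.1)

# A chip can match on raw performance yet trail on every barometer.
for metric in chip_a:
    ratio = chip_b[metric] / chip_a[metric]
    print(f"{metric}: chip B leads by {ratio:.2f}x")
```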

Didn't Raja also say only two new GPU chips would come out next year?
 
Last edited:

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
No mention of GDDR5X in the article? Omission or deliberate? Have a hard time seeing the entire 400-series being HBM2 given the yield issues.

If we assume back to school as the time frame, that's early'ish september. So what, basically 10 months? Even if we take late July/early august, basically the same timeframe as Fiji, that's still around 9 months.

This is all assuming that AMD has learned its lesson from Hawaii/Fiji and understood that it can't launch with almost no supply and crappy reference coolers. If AMD still hasn't learned this lesson then you have to tack on 2-3 months before the supply comes into balance and you get decent after-market coolers. So that's almost a year from now.

Maybe we should revise our "14 nm GPUs are out any moment now" meme we've been spreading lately. Glad I upgraded to a GTX 980. Seems it'll last me at least 1 and a half years at this pace. That's almost a full upgrade cycle(counting 2 years as the start point for mainstream upgrading cycle). Who knew.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
I don't think anyone is saying 14nm GPUs are out any moment now. I see people saying 2016, and other people assuming 2016 means January 2016, when it more likely will be August to October 2016, with March - June 2016 being the absolute earliest credible estimate I've yet seen.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Didn't Raja also say only two new GPU chips would come out next year?

Ya, he did. 2 GPU chips can make a lot of SKUs.

This generation.

Fiji = Fury, Nano, Fury X, Fiji X2 (4 graphics cards)
Hawaii = R9 390/390X (2 graphics cards)
That's 6 graphics cards covering $280-$1K+ segments

What about Pitcairn and Tahiti generation?
HD7990
HD7970Ghz
HD7970
HD7950 V2
HD7950
HD7870 XT
HD7870
HD7850

That's 8 graphics cards covering $250-$999 price segments from just 2 chips!

Look at Fury, Nano, Fury X at 1440P. If AMD shrinks those, and drops prices, we can have Fury X performance for $399 with say 75W less power. Cards like the Nano/Fury shrunk would make their way into laptops.

Will it be enough to compete with Pascal? Probably not, but I don't expect AMD to win considering AMD is fighting a two-front war and has fewer resources than either of its competitors.

Bad news for consumers though if AMD is way behind Pascal since it means no pricing pressure on NV.

No mention of GDDR5X in the article? Omission or deliberate? Have a hard time seeing the entire 400-series being HBM2 given the yield issues.

What about shrinking Fiji chips? AMD can make 3 SKUs alone from Fury/Nano/Fury X with HBM1. Why would AMD want GDDR5X, which requires an all-new memory controller redesign?

If we assume back to school as the time frame, that's early'ish september. So what, basically 10 months? Even if we take late July/early august, basically the same timeframe as Fiji, that's still around 9 months.

Ya, so hopefully the performance gain is decent, not 20-25% over Fury X for $650.

This is all assuming that AMD has learned its lesson from Hawaii/Fiji and understood that it can't launch with almost no supply and crappy reference coolers.

Fury X's reference cooler is among the best. It's cool and quiet. It likely prompted many AIBs to create AIO CLC versions of some popular higher-end cards.

Also, the entire Fury, 390, 380, 380X line-up has excellent after-market options. The Sapphire Fury Tri-X is quieter at max load than a reference GTX980Ti is at idle.


That means AMD already learned its heatsink lesson this generation.

Maybe we should revise our "14 nm GPUs are out any moment now" meme we've been spreading lately.

I don't think many were saying this.

Glad I upgraded to a GTX 980.

Oh, now it starts to make sense why you get so upset when people call those $550 cards mid-range. :D Personally I would have gone with 970 SLI instead.

Seems it'll last me at least 1 and a half years at this pace. That's almost a full upgrade cycle(counting 2 years as the start point for mainstream upgrading cycle). Who knew.

This has more to do with the consolized nature of many AAA PC games than with how good your card is, or a card with similar performance to yours. Even if next gen's cards are 50-100% faster, if 2016 PC games stay at a similar graphics tech level as 2015 games (some of which don't even look as good as Crysis Warhead, Metro 2033/LL, etc.), it's just yet another meh GPU generation to me. No wonder graphics unit volume sales have rapidly declined over the last 5 years.

To put it into perspective, when Crysis 1 came out, it wiped the floor with multiple generations of flagship cards. If we had a true next gen PC game launch in 2016 with graphics that wouldn't be touched by any other game for 3-4 years, it would mean 30 fps at 1080P on a flagship Volta. 3 years after Crysis came out, the GTX480 could barely crack 30 fps at 1080P.

crysis_1920_1200.gif


It's no wonder modern GPUs last so long when we don't have next gen PC games. Also, Crysis 1 looked at least 1 generation ahead of any game on consoles at the time. What game on the PC today looks 1 generation ahead of SW:BF on consoles? None. Ironically, SW:BF is probably the best looking PC game of all time now, and it runs like butter on lower-end cards like the R9 280X. That means the GPU hardware has gotten too far ahead of the software, and most companies are 3-4 years behind. I can't even imagine buying a card 50-100% faster than Fury X for 1080P in 2016. What's the point?

This guy has created a 10 min video of the 20 most-anticipated PC games of 2016, and I do not see anything in there besides possibly 1-2 games that could give your card much trouble. 2016 is shaping up to be much the same as 2015, with most games still not moving beyond the hardware capabilities of the PS4. Even The Division has been so dumbed down from its E3 footage that it's a mere shadow of its former self.

I don't think anyone is saying 14nm GPUs are out any moment now. I see people saying 2016, and other people assuming 2016 means January 2016, when it more likely will be August to October 2016, with March - June 2016 being the absolute earliest credible estimate I've yet seen.

This. I think most of us were thinking Q2-3 2016. I don't recall our forum predicting line-ups of 14nm/16nm GPUs in January-March 2016. We thought maybe cards for the HPC/GPGPU market or lower-end ones for OEMs.
 
Last edited:

gamervivek

Senior member
Jan 17, 2011
490
53
91
AI should close the clockspeed gap that AMD has at the moment, or else AMD has to hope Pascal falters with its added hardware. There's no way AMD can compete when nvidia's chips clock ~50% higher, and AMD then also loses on power efficiency by having to squeeze the last bit of performance out of their chips.

btw, power usage comparisons seem to differ by non-trivial amounts between manufacturers; MSI's 390X was notorious for guzzling power, while Techspot used HIS cards. And non-ref nvidia cards also end up using quite a bit more power than the stock cards would.

Of course, who releases first will be quite important as well. I do think AMD will be quicker to the new node, but the question is whether they can also get the big chips, which have become a necessity due to the clockspeed gap, out earlier.
 
Feb 19, 2009
10,457
10
76
Moving forward it only makes sense to produce 2 chips, a mid-range and high-end.

The mid-range can be harvested to a low-midrange SKU such as ~960/380 performance, all the way to R290/970 performance.

Low end will be obsolete as Intel pushes their iGPU further and AMD's Zen + GCN APU blaze the crappy entry dGPU bracket.
 

desprado

Golden Member
Jul 16, 2013
1,645
0
0
So there is no architecture improvement from AMD. They are just relying on the node for efficiency, whereas Nvidia's Pascal will be 2X more efficient than Maxwell, which is an architecture improvement, and the node will give them an added advantage on top of that.

It was bound to happen with AMD really short of money, investors, customers and employees. AMD's R&D has suffered.
 

garagisti

Senior member
Aug 7, 2007
592
7
81
So there is no architecture improvement from AMD. They are just relying on the node for efficiency, whereas Nvidia's Pascal will be 2X more efficient than Maxwell, which is an architecture improvement, and the node will give them an added advantage on top of that.

It was bound to happen with AMD really short of money, investors, customers and employees. AMD's R&D has suffered.
That's quite a desperate reach, mate... afaik, there are architectural improvements. You have little more than a presentation from nvidia and no Pascal or AI chippery, so there's no basis to be that cocksure and dismissive of a product from either company.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
So there is no architecture improvement from AMD. They are just relying on the node for efficiency, whereas Nvidia's Pascal will be 2X more efficient than Maxwell, which is an architecture improvement, and the node will give them an added advantage on top of that.

It was bound to happen with AMD really short of money, investors, customers and employees. AMD's R&D has suffered.

These include second generation HBM, next generation “Arctic Islands” GCN architecture

Reading the OP is usually a good place to start.


Edit: Side note: I didn't look far, but the last 5 days of eBay listings show the R9 290 selling at higher prices than before, all above $200 used. Maybe there is more interest now in the AMD R9 290/390 cards for the $/perf ratio? I was going to try to pick up a second card for below $200, but it looks like that's not happening easily.
 
Last edited:

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
It's usually overstated, but not always, when it comes to desktop GPUs and comparing prices, performance, and OC headroom. I.e., efficiency alone is not a good enough reason, IMO, to spend noticeably more money for the same performance. On notebooks, though, efficiency is paramount.

I think perf/w, perf/mm2, and perf/transistor are barometers more indicative of architecture strengths (or weaknesses). AMD lost considerable ground at 28nm and is currently losing on all fronts, which makes the task of staying competitive at 14/16nm tougher. If AMD loses more ground in those metrics with next gen GPUs, then we'll end up seeing either a larger disparity in perf/w, in outright performance, or probably both.

Didn't Raja also say only two new GPU chips would come out next year?

If AMD can fix their ROP deficiency, there won't be an issue. If Fiji had the 128 ROPs it needed, it would have destroyed GM200.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
So there is no architecture improvement from AMD. They are just relying on the node for efficiency, whereas Nvidia's Pascal will be 2X more efficient than Maxwell, which is an architecture improvement, and the node will give them an added advantage on top of that.

It was bound to happen with AMD really short of money, investors, customers and employees. AMD's R&D has suffered.

I love how people keep assuming this just because the name is still GCN. You know how many generations Terascale was used, don't you? And then you're basically saying that Pascal will be something like 3-4x more efficient overall. Good luck on that one...
 

flopper

Senior member
Dec 16, 2005
739
19
76
So there is no architecture improvement from AMD. They are just relying on the node for efficiency, whereas Nvidia's Pascal will be 2X more efficient than Maxwell, which is an architecture improvement, and the node will give them an added advantage on top of that.

It was bound to happen with AMD really short of money, investors, customers and employees. AMD's R&D has suffered.

Never really surprised by how much ignorance there is out there.
 

Udgnim

Diamond Member
Apr 16, 2008
3,680
124
106
So there is no architecture improvement from AMD. They are just relying on the node for efficiency, whereas Nvidia's Pascal will be 2X more efficient than Maxwell, which is an architecture improvement, and the node will give them an added advantage on top of that.

It was bound to happen with AMD really short of money, investors, customers and employees. AMD's R&D has suffered.

Ignoring that there was an architecture improvement going from Hawaii to Fury?
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Moving forward it only makes sense to produce 2 chips, a mid-range and high-end.

The mid-range can be harvested to a low-midrange SKU such as ~960/380 performance, all the way to R290/970 performance.

Low end will be obsolete as Intel pushes their iGPU further and AMD's Zen + GCN APU blaze the crappy entry dGPU bracket.

Zen.... not even close to release, not proven, no specs, no estimated TDP it will work within.... failed past hype....anyways.

Everyone said low end would be obsolete 4 years ago. Everyone said AMD's APUs would kill low-end dGPUs. It didn't happen, at least not in the way people thought. The best iGPUs out right now can't beat the nearly 2 year old GTX 750 Ti. Microsoft's Surface Book wouldn't be able to exist without the dGPU perf/w of Maxwell; it couldn't rely on Iris Pro sucking down 30-45 watts inside the tablet portion. Since both companies are likely to double perf/w with next gen, it completely makes sense to continue making 100mm2 - 140mm2 GPUs in the <75w space. GM107 is about 67% the performance of GM206. If Nvidia doubles the perf/w at the low end, we're looking at performance right in between the R9 380x and R9 390 -> http://www.anandtech.com/show/9784/the-amd-radeon-r9-380x-review/11 (1080p: 34.4 X .67 X 2 = 46fps). iGPUs will absolutely not come close to this level of performance any time soon. Nvidia has made an absolute killing on the GM107 chip, and I guarantee they'll double down in that market segment with Pascal.
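The back-of-the-envelope above can be spelled out; only the 34.4 fps input comes from the linked R9 380x review, while the 67% ratio and the perf/w doubling are the post's own assumptions:

```python
# Inputs for the estimate above. Only 34.4 fps comes from the linked
# review; the other two factors are assumptions, not measurements.
gm206_fps = 34.4        # GM206-class (R9 380x bracket) 1080p average
gm107_share = 0.67      # GM107 runs at ~67% of GM206's performance
next_gen_perf_w = 2.0   # assumed perf/w doubling at the low end

estimate = gm206_fps * gm107_share * next_gen_perf_w
print(f"~{estimate:.0f} fps")   # ~46 fps: between an R9 380x and an R9 390
```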

Someone wake me when I can get 2X+ the performance of a 290x for $300

You're going to be asleep for about 3 years. :D




This is not the Nvidia Forum, nor is it VC&G. Discussion of NVidia belongs elsewhere, not here.

This is just a warning.


esquared
Anandtech Forum Director
 
Last edited by a moderator:

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
Zen.... not even close to release, not proven, no specs, no estimated TDP it will work within.... failed past hype....anyways.

Everyone said low end would be obsolete 4 years ago. Everyone said AMD's APUs would kill low-end dGPUs. It didn't happen, at least not in the way people thought. The best iGPUs out right now can't beat the nearly 2 year old GTX 750 Ti. Microsoft's Surface Book wouldn't be able to exist without the dGPU perf/w of Maxwell; it couldn't rely on Iris Pro sucking down 30-45 watts inside the tablet portion. Since both companies are likely to double perf/w with next gen, it completely makes sense to continue making 100mm2 - 140mm2 GPUs in the <75w space. GM107 is about 67% the performance of GM206. If Nvidia doubles the perf/w at the low end, we're looking at performance right in between the R9 380x and R9 390 -> http://www.anandtech.com/show/9784/the-amd-radeon-r9-380x-review/11 (1080p: 34.4 X .67 X 2 = 46fps). iGPUs will absolutely not come close to this level of performance any time soon. Nvidia has made an absolute killing on the GM107 chip, and I guarantee they'll double down in that market segment with Pascal.



You're going to be asleep for about 3 years. :D

You're mistaken about what people call "low-end" in this instance if you're talking about the 750 Ti. The 750 Ti is a low-end gaming card. Basically, it's the things below "GTX" and "R7" that are obsolete.





 
Last edited by a moderator:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
If AMD can fix their ROP deficiency, there won't be an issue. If Fiji had the 128 ROPs it needed, it would have destroyed GM200.


Ifs and buts... The fact is we don't know, because it never happened. The fact is nvidia beat AMD at perf/w, perf/transistor, and perf/mm2. They both maxed out TSMC's die size limit, and Nvidia built the faster, more efficient chip.
 

desprado

Golden Member
Jul 16, 2013
1,645
0
0
Ifs and buts... The fact is we don't know, because it never happened. The fact is nvidia beat AMD at perf/w, perf/transistor, and perf/mm2. They both maxed out TSMC's die size limit, and Nvidia built the faster, more efficient chip.

For some people it is really hard to believe that the Fury X was a lackluster high-end GPU. Still, some people believe the Fury X will become faster at some unknown future date.





 
Last edited by a moderator:

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Ifs and buts... The fact is we don't know, because it never happened. The fact is nvidia beat AMD at perf/w, perf/transistor, and perf/mm2. They both maxed out TSMC's die size limit, and Nvidia built the faster, more efficient chip.

On DX-11. Let's wait and see how good NVIDIA's Maxwell will be on DX-12.

ps. Don't tell me that Maxwell cards will be irrelevant when DX-12 games start to release, because people still buy Maxwell cards even today and many of them will keep them for more than one year.





 
Last edited by a moderator:

gamervivek

Senior member
Jan 17, 2011
490
53
91
So there is no architecture improvement from AMD. They are just relaying on Node for the efficiency whereas Nvidia Pascal will be 2X more efficient from Maxwell which is called architecture improvement and node will give them a + advantage over it.

It was bound to happen when AMD is really short of money, investors, customer and employees. AMD R&D has suffered.

Pascal's 2x improvement is due to the node change; it's basically Maxwell plus mixed-precision support and some other features. If that allows AMD to close the clockspeed gap, then they don't need much else.

For some people it is really hard to believe that the Fury X was a lackluster high-end GPU. Still, some people believe the Fury X will become faster at some unknown future date.

It's faster right now than the 980Ti, as RS has already posted. The problem is that even with added voltage it overclocks at a third of what the 980Ti is able to do.

Then the problem lies more on the software front, with nvidia's GameWorks and AMD's driver overhead woes.




 
Last edited by a moderator:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
It's faster right now than the 980Ti, as RS has already posted. The problem is that even with added voltage it overclocks at a third of what the 980Ti is able to do.

Then the problem lies more on the software front, with nvidia's GameWorks and AMD's driver overhead woes.

They're basically the same speed, depending on the benchmark suite. As you point out, though, the GTX 980 Ti has significantly more headroom, allowing it to easily and comfortably win nearly all the time. In addition, AMD fans fail to remember the 980 Ti is a cut-down SKU. The Titan X is the full GM200 and is faster than both the Fury X and the 980 Ti. (Yes, I know, prices blah blah blah. We're not talking about prices; we're talking about engineering and total chip performance.)




 
Last edited by a moderator: