
[Rumor - WCCFTech] AMD Arctic Islands 400 Series Set To Launch In Summer of 2016

For 2016, what do you guys think about AMD shrinking Nano/Fury/Fury X and making them into mid-range $299-429 cards?

With perf/watt improvements and HDMI 2.0/DP1.3 this could work, but I think 4GB of HBM1 against the competitor's 6-8GB of GDDR5X could be a huge marketing failure for AMD if they follow this route.

On the other hand, ditching Fiji entirely doesn't make sense to me either, given how much it costs to design GPUs of this complexity, and AMD doesn't seem to have the funds for a full top-to-bottom stack of all-new chips. So how do they overcome the 4GB HBM1 vs. 6-8GB GDDR5X marketing problem? I feel like this could hurt AMD big time next gen.

I would be fine with Fiji being shrunk, but isn't there supposed to be a new architecture for the 400 Series?

Anyways, I do want to see HBM2 make its way to midrange and up, because it should help reduce its cost for all those products.
 
Fiji can't be reused. A 4GB VRAM midrange, after an 8GB VRAM midrange, is just a disaster. 6GB+ of VRAM needs to be midrange. Fiji isn't being reused unless AMD really doesn't care about even attempting to compete with Nvidia.

Edit:
The best bet would actually be to use new chips to replace the R9 390X, Fury, and Fury X, and use Fiji and older chips for the cards below them. That way your higher-end midrange product can have 8GB of VRAM, while the lower-end midrange product has 4GB and is targeted at 1080p/1440p, and the other cards handle 4K.

Also, I personally couldn't care less if the new 390 replacement didn't have the updated standards and used Fiji's current ones. It's not like the new cards will handle 4K above 60Hz, or that there are monitors out there I could realistically use at 4K above 60Hz anytime soon. Sadly, I suspect they won't make a 50+ inch monitor with 4K at 60+Hz for another two years at least.
Then again, I was wrong about 50-inch FreeSync monitors, so maybe high-refresh 4K on large monitors will come faster than I think!
 
AMD needs the latest tech in a superior package. They're already on top of the technology. They just need to learn how to market a premium product.
 
For 2016, what do you guys think about AMD shrinking Nano/Fury/Fury X and making them into mid-range $299-429 cards?

With perf/watt improvements and HDMI 2.0/DP1.3 this could work, but I think 4GB of HBM1 against the competitor's 6-8GB of GDDR5X could be a huge marketing failure for AMD if they follow this route.

On the other hand, ditching Fiji entirely doesn't make sense to me either, given how much it costs to design GPUs of this complexity, and AMD doesn't seem to have the funds for a full top-to-bottom stack of all-new chips. So how do they overcome the 4GB HBM1 vs. 6-8GB GDDR5X marketing problem? I feel like this could hurt AMD big time next gen.

If AMD shrinks Fiji, then based on the 2x perf/watt claim they could get roughly Nano performance at 75W. That is low-end in terms of power, so they could easily have two tiers above Fiji for their mid-range and high-end parts. 4GB on the low end is enough, but would the cost of HBM and the die size of Fiji (even shrunk) make it prohibitive?
 
I have the feeling those two new GPUs Raja Koduri talked about are going to be high-end, high-margin cards above the $300 mark.
Fiji is not efficient for 1080p, so I don't see AMD using Fiji (ported to 16nm) for sub-$300 cards in 2016.
 
The smart move, imo, if they only plan to have two dies for most of 2016, is one moderately sized die optimized for mobile (i.e. a next-gen Pitcairn) and one big desktop die (i.e. a next-gen Fiji). Make the most of Nvidia having to deliver its big die to HPC customers first to fulfill contracts by prioritizing desktop volume over non-contractual HPC.
 
Neither Nvidia nor AMD will probably have any 16nmFF offering for us dirty 1080p gamer/$250 peasants in 2016. They're just gonna tell us "Yo, buy that R9 380/390 or upgrade your wallet".
 
I have the feeling those two new GPUs Raja Koduri talked about are going to be high-end, high-margin cards above the $300 mark.
Fiji is not efficient for 1080p, so I don't see AMD using Fiji (ported to 16nm) for sub-$300 cards in 2016.

All info has pointed to a whole new lineup, and AMD has stated as much.
The 14/16nm node offers power savings and whole new designs for small-form-factor systems and power users. Really a big thing for 2016.
 
If AMD shrinks Fiji, then based on the 2x perf/watt claim they could get roughly Nano performance at 75W. That is low-end in terms of power, so they could easily have two tiers above Fiji for their mid-range and high-end parts. 4GB on the low end is enough, but would the cost of HBM and the die size of Fiji (even shrunk) make it prohibitive?

Well, the Nano uses ~200W in games according to TPU's testing, so even if you got 2x perf/watt it'd still probably be >100W at Nano performance level. Other parts on the board would use the same amount of power, which would hurt it overall.
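A quick back-of-the-envelope check of that reasoning (assuming TPU's ~200W gaming figure for the Nano and taking the 2x perf/watt claim at face value; the ~30W figure for HBM plus board overhead is my own rough guess):

```python
# Rough power estimate for a shrunk Fiji at Nano-level performance.
# Assumption: only the GPU core gets the 2x perf/watt benefit; the
# memory/VRM/fan overhead (~30W, a guess) stays roughly constant.

nano_board_power = 200.0   # W, whole-card gaming draw per TPU's testing
overhead = 30.0            # W, guessed HBM + VRM + fan share

core_power = nano_board_power - overhead    # ~170 W for the core today
shrunk_core = core_power / 2.0              # 2x perf/watt -> half the power
shrunk_board = shrunk_core + overhead       # overhead doesn't shrink

print(round(shrunk_board))  # ~115 W, still well above a 75 W slot-power card
```

So even under generous assumptions, the whole-card figure lands comfortably above 100W, which is the point being made above.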

I just can't see a shrunk Fiji being cost effective in the sub $300 market. Even with the die shrink, you still have to pay the cost of the four stacks of HBM1, and your interposer and package is going to be close to the same size it is now even if the Fiji die is ~300mm^2. That's outside the issue of how much cost savings you get on the die itself. There was talk recently that the cost per transistor for 16/14nm isn't really much lower than for 28nm at this point, so while a shrunk Fiji die would be smaller, it might not necessarily be that much cheaper. How much that's changed in the last couple months I don't know.

IMO, if they want a 16nmFF pipe cleaner, Tonga would make a better choice. It's still GCN 1.2, and the die is almost half the size, which would hopefully keep initial yields good. They would get extra experience with the process, and it'd still be a card that's probably 40% faster than a 960 but with GTX 950 power consumption. Even if the die costs are close to the same, the board costs could be lower with the reduced power consumption. That kind of performance at the <$200 price level would be very appealing, even if it doesn't hit the market until early in the new year.
 
A GPU in the 4-5 billion transistor range (Full Tonga or a bit bigger) should be ~210-240mm2 on GF (Samsung) 14LPP. Apple's A9 is supposed to be >2 billion at ~96mm2 on Samsung 14LPP, might even be able to squeeze a next gen Hawaii into that low-mid tier die since GPUs are denser than CPUs.

212mm2 (Pitcairn) at Apple A9 transistor density is at least ~4.4 billion transistors.

Basically we GPU buyers should be a bit upset if we aren't seeing 50%+ performance in all GPU categories with the node shrink.
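Those density figures are easy to sanity-check (using 2 billion transistors as a floor for the A9, per the ">2 billion at ~96mm2" figure above):

```python
# Transistor-density estimate for Samsung 14LPP, anchored on Apple's A9.
a9_transistors = 2.0e9   # claimed >2B; use 2B as a conservative floor
a9_area = 96.0           # mm^2 on Samsung 14LPP

density = a9_transistors / a9_area   # ~20.8M transistors per mm^2

# A Pitcairn-sized die (212 mm^2) at that density:
print(round(212.0 * density / 1e9, 1))   # 4.4 (billion transistors)

# A ~5B-transistor chip (full Tonga or a bit bigger) at that density:
print(round(5.0e9 / density))            # 240 (mm^2)
```

Both numbers line up with the estimates in the post: ~4.4B transistors in a Pitcairn-sized die, and a 5B-transistor chip at ~240mm2. GPUs being denser than CPUs would only push these figures further in AMD's favour.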
 
I think a reworked Fiji, but not Fiji as-is. I think reusing Fiji unchanged would be a mistake.

Ya, you're right, because with their push for HDMI 2.0 and DP1.3 it would be a facepalm moment to reuse the HDMI 1.4a/DP1.2 Fiji in the $300-500 range next gen.

inb4 the performance per watt literally just means that nothing will change except for that TDP that will be halved.

I just hope that I'm overly pessimistic here.

Nah. Graphics card unit sales in 2015 are near a decade low, if not near an all-time low. There has to be an incentive to upgrade for people using older GPUs. If someone skipped the R9 200/300/Fiji series, do you think they'll suddenly be excited to buy a new GPU just to save on power? There also has to be an incentive for cutting-edge early adopters, and that comes from more performance.

I would be fine with Fiji being shrunk, but isn't there supposed to be a new architecture for the 400 Series?

Anyways, I do want to see HBM2 make its way to midrange and up, because it should help reduce its cost for all those products.

True. If they have the funds to do it, it's probably best to scrap the Fiji gen 1 chips entirely and redesign them to include HDMI 2.0/DP1.3 plus 3rd-gen GCN (AMD itself counts only two real generations of GCN, not three as the media does).

Fiji can't be reused. A 4GB VRAM midrange, after an 8GB VRAM midrange, is just a disaster. 6GB+ of VRAM needs to be midrange. Fiji isn't being reused unless AMD really doesn't care about even attempting to compete with Nvidia.

From a marketing point of view, if AMD goes 4GB and the competitor has 6-8GB, it could be a real disaster for them. That's why I mentioned how risky I think reusing Fury, Nano, Fury X could be. At the same time, HBM1 is limited to 4GB so to use 8GB for mid-range, they'd have to use HBM2 (too expensive) or go with GDDR5X. But going with GDDR5X requires an all new memory controller redesign for Fiji. Seems like that would get costly and complex real fast.

That's why I have no clue what they might use to fill the $249-449 price bracket.

Edit:
The best bet would actually be to use new chips to replace the R9 390X, Fury, and Fury X, and use Fiji and older chips for the cards below them. That way your higher-end midrange product can have 8GB of VRAM, while the lower-end midrange product has 4GB and is targeted at 1080p/1440p, and the other cards handle 4K.

But the 390/390X are slower than Nano/Fury/Fury X. So if the AI flagship replaces Fury/Nano/Fury X and you move Fiji down to the $249-449 range, suddenly you run into the 4GB limitation I'm talking about.

Also, I personally couldn't care less if the new 390 replacement didn't have the updated standards and used Fiji's current ones.

Ya, but the majority of the market will label it old tech. Not having HDMI 2.0 would do serious damage in 2016, especially since AMD is trying to push FreeSync over HDMI, unless price/performance is out of this world. Don't forget the UVD in the 300 series is going to be very outdated by 2016 standards.

AMD needs the latest tech in a superior package.

I agree. They need to have upgraded UVD, HDMI 2.0, DP1.3. The old ATI/AMD was almost always leading with 2D/3D visuals/latest codec support, HD4800 brought 7.1 pass-through over HDMI, and AMD was first to adopt cutting edge DP standard, etc. They need to get back to leading on features.

If AMD shrinks Fiji, then based on the 2x perf/watt claim they could get roughly Nano performance at 75W. That is low-end in terms of power, so they could easily have two tiers above Fiji for their mid-range and high-end parts. 4GB on the low end is enough, but would the cost of HBM and the die size of Fiji (even shrunk) make it prohibitive?

Ya, but then you are now labeling Nano as low end, aren't you? I think that's a bit optimistic for 2016.

I have the feeling those two new GPUs Raja Koduri talked about are going to be high-end, high-margin cards above the $300 mark.

That's what I'm thinking: one high-end, one mid-range, because otherwise how would they fill the low-end $249-and-below market? Reuse Tonga XT/390/390X again?

The smart move, imo, if they only plan to have two dies for most of 2016, is one moderately sized die optimized for mobile (i.e. a next-gen Pitcairn) and one big desktop die (i.e. a next-gen Fiji). Make the most of Nvidia having to deliver its big die to HPC customers first to fulfill contracts by prioritizing desktop volume over non-contractual HPC.

Wow, that's actually pretty ingenious. Didn't think of that. :thumbsup:

Using the strategy you outlined, they could create a lot of cut-down GPUs from both of those, filling in the high end on the desktop with cut-down big-die parts like Nano/Fury/Fury X now, while the Pitcairn-class mid-range chip would be their mid-range desktop and flagship mobile dGPU. What about the lowest end, $199 and below?

IMO, if they want a 16nmFF pipe cleaner, Tonga would make a better choice. It's still GCN 1.2, and the die is almost half the size, which would hopefully keep initial yields good. They would get extra experience with the process, and it'd still be a card that's probably 40% faster than a 960 but with GTX 950 power consumption. Even if the die costs are close to the same, the board costs could be lower with the reduced power consumption. That kind of performance at the <$200 price level would be very appealing, even if it doesn't hit the market until early in the new year.

Tonga seems like a disaster. It has a larger die than Tahiti and a 384-bit memory controller that goes partly unused. If they are going to reuse any GPU from the old stack, they might as well add HDMI 2.0/DP1.3 and shrink Hawaii. Hawaii is still a powerhouse as far as performance is concerned: the 390X is about 50% faster than the 380X, and their die sizes are only about 20% apart iirc (438mm2 vs. 365mm2). Aging UVD would be a sore point though.
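For what it's worth, a quick perf-per-area comparison using those two figures (50% faster, 438mm2 vs. 365mm2) backs up the "Hawaii is a powerhouse" point:

```python
# Relative performance per mm^2: Hawaii (390X) vs. Tonga (380X),
# using the rough figures from the post above.
perf_ratio = 1.50             # 390X ~50% faster than 380X
area_ratio = 438.0 / 365.0    # die sizes, ~20% apart

print(round(area_ratio, 2))               # 1.2
print(round(perf_ratio / area_ratio, 2))  # 1.25 -> ~25% more perf per mm^2
```

So by these numbers Hawaii delivers roughly a quarter more performance per unit of die area than Tonga, before any shrink.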

It will be 80-100%, as always with a new node.

Ya, and in this case also the biggest breakthrough in memory bandwidth, with flagship cards hitting 800GB/sec-1TB/sec and both vendors coming out with new/heavily revised GPU architectures. I think 80-100% @ 4K for flagship cards is realistic, but they might split the gen into two halves by giving us 30-40% in 2016 and another 30-40% in 2017.
 
Tonga seems like a disaster. It has a larger die than Tahiti and a 384-bit memory controller that goes partly unused. If they are going to reuse any GPU from the old stack, they might as well add HDMI 2.0/DP1.3 and shrink Hawaii. Hawaii is still a powerhouse as far as performance is concerned: the 390X is about 50% faster than the 380X, and their die sizes are only about 20% apart iirc (438mm2 vs. 365mm2). Aging UVD would be a sore point though.

That's running under the assumption that they would want to use something a little more modern than GCN 1.1. Ideally they'd use the updated GCN with the HEVC (x265) support they have planned for the APUs, but who knows if that's ready to roll.
Really, they don't have a great die ready to use as a pipe cleaner right now. Everything is either massive or old. When AMD moved to 40nm, they launched the HD4770 with a 137mm^2 die half a year before they really went into 40nm with the 334mm^2 5870. The only thing smaller than Tonga is either the tiny GCN 1.1 Bonaire or the GCN 1.0 parts.

It'd be nice if they'd just roll up a new part, something along the lines of a 24CU (1536-shader) GCN 1.2 chip with a 256-bit bus, 4GB GDDR5, HEVC support, HDMI 2.0, etc., rolled into a nice 130-140mm^2 die at a $150 price, but I don't see that happening.
 
I think the two dies are a high-DP product at a higher (and crazy) price bracket, and then a replacement for Fiji with a moderate price increase and a small performance gain over Fiji. Given how expensive HBM2 has to be, it makes sense.

That's why I have no clue what they might use to fill the $249-449 price bracket.

28nm Rebrands? As mentioned before I suspect Arctic Islands only supports HBM2, unlike Pascal which supports both. The Pascal products in that price range will probably be GDDR5X.
 
So there is no architecture improvement from AMD? They are just relying on the node for efficiency, whereas Nvidia's Pascal will be 2x more efficient than Maxwell through architecture improvements, and the node will give them an additional advantage on top of that.

It was bound to happen when AMD is so short of money, investors, customers, and employees. AMD's R&D has suffered.

Actually, Arctic Islands is definitely a step up in the architecture itself. This isn't optional for AMD: GCN 1.x has reached its full design potential and can't be stretched any more.

From what I've heard they have a new ISA, and the AI GCN is basically GCN 2.0, though I don't know all of what that entails. This could very well mean some driver growing pains, but it really seems like AMD has invested heavily in their drivers of late, and incorporating a new ISA with the beauty that is AMD's AtomBIOS may be much less painful than it seems like it should be.

My guess is that AMD will not aim for the clockspeed crown at all, but will go the other way and improve per SP efficiency and performance. Scaling isn't linear (though it's quite good), so they will benefit more from improving each compute unit. I have many ideas as to what they could do, but no idea what they have done.
 
That's why I was saying AI would replace the Fury X, Fury, Nano, and R9 390X.

That way the AI chip would extend down to the R9 390X line, and they'd have at least 6-8GB in that card. Then the R9 390 would be the Fiji card.

There are supposed to be TWO chips, right? So a high-end chip and a cut-down high-end chip (Fury X and Fury), and then the second chip as the R9 390X? Maybe.

Not sure how it'd work, but I'd think the best way would be to extend the AI line down into the R9 390X (not the 390), so at least the higher-end 400-level chip has a good amount of VRAM, and then the R9 390/490 would have less VRAM as it'd be targeted at 1080p.
 
If they shrink the current 380-390X lineup with some updated features like HDMI 2.0, etc., and bump them all down a notch in price, that might work for next year. It would be expected that these shrunk chips can clock much higher at lower power levels; maybe 30% higher clocks at 20% less power?

480 $150
480x $180
490 $230
490x $300

1200-1400MHz core clocks stock. The AI chips could fill the $400-750 range. Just a guess; it's hard to expect a complete top-to-bottom line of new architecture with a huge node shrink.
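If that guess held, the implied perf/watt gain is easy to work out (assuming performance scales roughly linearly with clocks, which it approximately does at fixed shader counts):

```python
# Implied perf/watt from "~30% higher clocks at ~20% less power".
clock_gain = 1.30     # +30% clocks -> ~+30% performance
power_factor = 0.80   # -20% power

perf_per_watt = clock_gain / power_factor
print(round(perf_per_watt, 1))   # ~1.6x, short of the 2x node-shrink claim
```

So a shrink-and-reclock lineup would land around 1.6x perf/watt, noticeably below the 2x figure attributed to the new node elsewhere in the thread.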
 
If they shrink the current 380-390X lineup with some updated features like HDMI 2.0, etc., and bump them all down a notch in price, that might work for next year. It would be expected that these shrunk chips can clock much higher at lower power levels; maybe 30% higher clocks at 20% less power?

1200-1400 mhz core clocks stock.

Honestly, with AMD, major clock speed increases are the last thing I expect.

April 2009 HD4890 = 850mhz on 55nm
Sept 2009 HD5870 = 850mhz on 40nm
Dec 2010 HD6970 = 880mhz on mature 40nm

^ None of these cards overclocked well.

Dec 2011 HD7970 = 925mhz on 28nm

That means going from 55nm to 28nm, AMD only managed to increase GPU clocks 8.8%!
Sure, you can say the HD7970 was conservatively clocked, but that's the point -- the 16nm node is new, and it stands to reason that, like with 28nm, AMD might be reluctant to shoot for the moon with clocks right away.

Look how bad it is - moving from April 2009 to Dec 2011 on the AMD side netted just an 8.8% increase in GPU clock (75mhz)

Then what happened?

HD7970Ghz (1050mhz) -> 290X (1000mhz) -> Fury X (1050mhz)

Then from December 2011 to June 2015, AMD squeezed out just 13.5% higher clocks. In summary, from the 55nm HD4890 to the 3rd-gen 28nm Fury X, AMD managed to increase GPU clocks only ~24%.

I think your optimistic outlook for 30% higher clocks for 2016 AMD GPUs is not realistic given AMD's historical track record since 2009.
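Running those stock reference clocks through the arithmetic explicitly:

```python
# Stock reference clocks cited in the post above (MHz).
hd4890, hd7970, fury_x = 850, 925, 1050   # 55nm 2009, 28nm 2011, 28nm 2015

def pct_gain(old, new):
    """Percent clock increase from old to new, one decimal place."""
    return round((new / old - 1) * 100, 1)

print(pct_gain(hd4890, hd7970))   # 8.8   (55nm -> 28nm launch)
print(pct_gain(hd7970, fury_x))   # 13.5  (28nm launch -> Fury X)
print(pct_gain(hd4890, fury_x))   # 23.5  (total, i.e. ~24% over six years)
```

The two stage gains compound to roughly 24% total, which is why a 30% jump in a single generation would be well outside AMD's track record.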

480 $150
480x $180
490 $230
490x $300

Pairing those names with those prices seems off to me. I'd imagine 390/390X-level performance would migrate to the R9 480/480X level, and Fury/Nano/Fury X-level performance would then migrate to the R9 490/490X level.

When we look at HD5850/5870 vs. 6850/6870, then 7950/7970/7970GHz -> R9 280/280X, and then R9 290/290X -> 390/390X, it seems AMD's goal is for next-gen mid-range cards to have roughly last-gen flagship performance.

That means at minimum we should expect Fury/Nano/Fury X performance to become next gen's replacement for $329 R9 390/$429 R9 390X. Hence my dilemma of how it's going to happen if they are stuck using 4GB HBM1 when the competitor may have 6-8GB GDDR5X?

The AI chips could fill the $400-750 range. Just a guess, hard to expect a complete top to bottom line of new architecture with a huge node shrink.

Problem is, they used this strategy of having the latest and greatest chips in the $400+ range with the R9 290/290X and then Fury/Fury X, and it meant their older products, outdated in perf/watt, got destroyed in the desktop and especially the mobile dGPU space. I don't think they can get away with that in 2016 because they still do not have any viable mobile dGPU products to sell. They are going to need some lower-end and mid-range AI chips for laptops.

Do you think AMD will have a leg up on HBM2 since they already have experience with HBM1?

With rumours stating that AI is due for summer 2016, and the competitor already claiming they are aiming for up to 1TB/sec HBM2 specs on next gen's cards, I am going to estimate that AMD will not have any key advantage in this area. Also, AMD's higher memory bandwidth/bus width on the R9 380/380X/390/390X/Fury/Fury X isn't helping them against the competition, suggesting AMD has less efficient memory compression. That makes me think that even if AMD brought HBM1/HBM2 down to the mid-range $200-400 tiers, it wouldn't likely benefit them much against the competing GDDR5X products.
 
For 2016, what do you guys think about AMD shrinking Nano/Fury/Fury X and making them into mid-range $299-429 cards?

With perf/watt improvements and HDMI 2.0/DP1.3 this could work, but I think 4GB of HBM1 against the competitor's 6-8GB of GDDR5X could be a huge marketing failure for AMD if they follow this route.

On the other hand, ditching Fiji entirely doesn't make sense to me either, given how much it costs to design GPUs of this complexity, and AMD doesn't seem to have the funds for a full top-to-bottom stack of all-new chips. So how do they overcome the 4GB HBM1 vs. 6-8GB GDDR5X marketing problem? I feel like this could hurt AMD big time next gen.

I think this is a great idea, but instead of going HBM1 or HBM2, why not just use GDDR5X?

HBM2 is so expensive that it should be reserved primarily for the enthusiast cards, at least for the initial release of this generation.
 
I think this is a great idea, but instead of going HBM1 or HBM2, why not just use GDDR5X?

HBM2 is so expensive that it should be reserved primarily for the enthusiast cards, at least for the initial release of this generation.

Because Fiji's memory controller is HBM1, not GDDR5. The controllers are completely different, and die size wise the 4096-bit HBM controller is barely larger than Tonga's 384-bit:
http://wccftech.com/amd-fiji-die/

As crazy as it sounds, I honestly think they might still use 4GB of HBM1 for $329-429 GPUs, because this gen they made the argument that 4GB of HBM was enough for 4K. In 2016, they'll say that 4GB of HBM1 is enough for 1080p-1440p.

Since they already have engineers optimizing VRAM usage for Fiji, they can keep those employees optimizing 4GB HBM1 usage for 2016's $329-449 cards at 1080p/1440p. I am just looking at it from the point of view that AMD doesn't have the $$ to do a full top-to-bottom line-up. If they do, none of this speculation on my part matters.
 
I don't know if we'll see it, but remember AMD mentioned that a chip sharing an HBM channel between two stacks was feasible, which would let them put 8GB of HBM1 on mid-range.
 
That's why I was saying AI would replace the Fury X, Fury, Nano, and R9 390X.

That way the AI chip would extend down to the R9 390X line, and they'd have at least 6-8GB in that card. Then the R9 390 would be the Fiji card.

There are supposed to be TWO chips, right? So a high-end chip and a cut-down high-end chip (Fury X and Fury), and then the second chip as the R9 390X? Maybe.

Not sure how it'd work, but I'd think the best way would be to extend the AI line down into the R9 390X (not the 390), so at least the higher-end 400-level chip has a good amount of VRAM, and then the R9 390/490 would have less VRAM as it'd be targeted at 1080p.

It depends what you mean by replacing the 390X and Fury. Performance-wise I'd hope so, but in terms of absolute placement I don't see anything like Fury coming out for a while.

I really expect the two new GPUs to be replacements for Tahiti and Pitcairn. I'm not sure we'll see an analog to Cape Verde, or if that ~100mm^2 mainstream part will just get pushed to a 14nm APU.
Something along the lines of a 10B-transistor ~350mm^2 HBM2 big chip, and a 6-7B transistor 200-250mm^2 smaller chip. They'll outperform Hawaii and Fury by a decent margin owing to the newer architecture and greater transistor count, but not by a huge amount; I would guess 50% at most. I expect we'll have to wait at least a couple of years before we see a 600mm^2 replacement for Fiji in their stack, hopefully less than the 3.5 years between Tahiti and Fiji.

Pure speculation on my part with no credible source, so take it for what it's worth.
 
It depends what you mean by replacing the 390X and Fury. Performance-wise I'd hope so, but in terms of absolute placement I don't see anything like Fury coming out for a while.

I really expect the two new GPUs to be replacements for Tahiti and Pitcairn. I'm not sure we'll see an analog to Cape Verde, or if that ~100mm^2 mainstream part will just get pushed to a 14nm APU.
Something along the lines of a 10B-transistor ~350mm^2 HBM2 big chip, and a 6-7B transistor 200-250mm^2 smaller chip. They'll outperform Hawaii and Fury by a decent margin owing to the newer architecture and greater transistor count, but not by a huge amount; I would guess 50% at most. I expect we'll have to wait at least a couple of years before we see a 600mm^2 replacement for Fiji in their stack, hopefully less than the 3.5 years between Tahiti and Fiji.

Pure speculation on my part with no credible source, so take it for what it's worth.

Just my speculation on what the chips will be used for.
So in my case:
The big chip would replace the Fury X.
The cut-down big chip would replace the Fury.
Then the smaller chip would replace the R9 390X.

Fury X would be shrunk, updated, and would replace the R9 390.

This would lead to a lineup of:
Fury X update: 8GB HBM2
Fury update: 8GB HBM2
390X update: 6GB HBM2 (assuming the smaller chip can handle more than 4GB of HBM2)
390 update: die-shrunk Fury X with updated display outputs
380 update: die-shrunk Fury

That way AMD only needs to release three new chips and can reuse the rest of the lineup.

Anyway, I'm just theorycrafting myself; we'll see more, but that's what I would do in their position.

With HBM across the lineup, the cards would all be small too, compared to the GDDR5X cards, which would require more work to shrink. So perhaps the R9 380 and below will all be Nano-sized cards, as going larger wouldn't benefit those cards much beyond OC headroom?

Speculating, but I believe the Fury Nano and Fury X are signalling what AMD is planning to do: emphasis on cooling, perf/watt, and small cards for mini builds. This is also what I believe is important for the future, so again, just my opinion.
 
That's what I'm thinking: one high-end, one mid-range, because otherwise how would they fill the low-end $249-and-below market? Reuse Tonga XT/390/390X again?

Yeah, I was thinking the new 16nm 2016 GPUs will take the $300 to $1000+ segment. They will replace the R9 390/390X and all Fiji cards.

For sub-$300 in 2016, Tonga will be fine: R9 380X at $200-220 and R9 380 at $150-180.

For 2017, Zen APUs will cover the up-to-$100 dGPU segment (if they use HBM), and a new bigger GPU will replace the high-end $650+ GPUs and push the rest down a segment.
So what we had at the $300-500 range in 2016 will fall to the $180-350 range in 2017, and the $500-1000 2016 GPUs will likewise fall one segment below.
 