[Rumor (Various)] AMD R7/9 3xx / Fiji / Fury


at80eighty

Senior member
Jun 28, 2004
458
5
81
Supply chain clout is what Apple has.
This is an instance (if true) of a co-inventor taking advantage of its investment in R&D and production.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
Does it even matter what kind of market share a company has? Money is money. If some broke guy offers you epic cash for something, are you going to refuse just because he's not Jay-Z? They will sell to whoever puts up the cash in the most attractive manner. I say that purely in response to the "market clout" claims.

The real reason is likely that they innovated, unlike Nvidia. With all the market share and cash Nvidia supposedly has, they still just use what other companies come up with. Because they lock down even minor things, they would never end up in that kind of favored competitive situation. AMD should be, and deserves to be, favored for supply, because they actually do things for progress, and HBM is the fruit of that work.

While Nvidia was busy with gamedoesntworks and G-$ync, AMD made waves with HBM and Mantle. I'm tired of people thinking the quarterly market share figures are so important that they dictate everything.

HBM could be massive beyond GPUs, so it's good if AMD does get something out of it. It's so huge we could do away with buying separate RAM for PCs. Huge for iGPUs, etc.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Hopefully the tech will be huge for AMD.

They've successfully moved Microsoft and the rest of the industry in the direction they want with APIs for their future product lines. They've successfully enlisted one of the world's preeminent memory companies to help design the RAM they so desperately need. They found a way to squeeze in FreeSync, which I'm fairly certain was a complete afterthought. They've just got to get to that point in the future where it all comes together, and capitalize on it. Hopefully for them, in the next 12 months or so they'll have it all working together.
 
Feb 19, 2009
10,457
10
76
Hopefully the tech will be huge for AMD.

They've successfully moved Microsoft and the rest of the industry in the direction they want with APIs for their future product lines. They've successfully enlisted one of the world's preeminent memory companies to help design the RAM they so desperately need. They found a way to squeeze in FreeSync, which I'm fairly certain was a complete afterthought. They've just got to get to that point in the future where it all comes together, and capitalize on it. Hopefully for them, in the next 12 months or so they'll have it all working together.

It will all come together when their APU is free from system memory bandwidth limits and when they improve CPU core IPC.

That's a Zen + GCN + HBM APU.

It has the capacity to kill low-end and mid-range dGPUs altogether. With DX12's multi-GPU-aware API, imagine a 200W Zen/GCN APU with 8GB of HBM2; then you add a 200W-class high-end GCN HBM2 dGPU on top of that, and the system will utilize the iGPU to increase your performance in games. It's something even Intel has NOTHING to counter, because a Zen APU system + GCN dGPU combo would slaughter an Intel + any-dGPU system.

I'm very optimistic about their future direction, specifically due to HBM and DX12.
 

flopper

Senior member
Dec 16, 2005
739
19
76
It will all come together when their APU is free from system memory bandwidth limits and when they improve CPU core IPC.

That's a Zen + GCN + HBM APU.

It has the capacity to kill low-end and mid-range dGPUs altogether. With DX12's multi-GPU-aware API, imagine a 200W Zen/GCN APU with 8GB of HBM2; then you add a 200W-class high-end GCN HBM2 dGPU on top of that, and the system will utilize the iGPU to increase your performance in games. It's something even Intel has NOTHING to counter, because a Zen APU system + GCN dGPU combo would slaughter an Intel + any-dGPU system.

I'm very optimistic about their future direction, specifically due to HBM and DX12.

That's their plan.
It's all working, and over the next few years they can rise to new heights.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
It will all come together when their APU is free from system memory bandwidth limits and when they improve CPU core IPC.

That's a Zen + GCN + HBM APU.

It has the capacity to kill low-end and mid-range dGPUs altogether. With DX12's multi-GPU-aware API, imagine a 200W Zen/GCN APU with 8GB of HBM2; then you add a 200W-class high-end GCN HBM2 dGPU on top of that, and the system will utilize the iGPU to increase your performance in games. It's something even Intel has NOTHING to counter, because a Zen APU system + GCN dGPU combo would slaughter an Intel + any-dGPU system.

I'm very optimistic about their future direction, specifically due to HBM and DX12.

That's the way I see it. Keep in mind, though, that iGPU+dGPU could be something everyone uses with the new APIs. It's typical of AMD to allow others to use tech they've developed. I just wish people would realize the importance of hardware compatibility between brands and appreciate what AMD does instead of chastising them.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
It will all come together when their APU is free from system memory bandwidth limits and when they improve CPU core IPC.

That's a Zen + GCN + HBM APU.

It has the capacity to kill low-end and mid-range dGPUs altogether. With DX12's multi-GPU-aware API, imagine a 200W Zen/GCN APU with 8GB of HBM2; then you add a 200W-class high-end GCN HBM2 dGPU on top of that, and the system will utilize the iGPU to increase your performance in games. It's something even Intel has NOTHING to counter, because a Zen APU system + GCN dGPU combo would slaughter an Intel + any-dGPU system.

I'm very optimistic about their future direction, specifically due to HBM and DX12.

Zen APUs with HBM will make the biggest impact in notebooks. There, a 200-220 sq mm APU with a 100-120 sq mm GPU portion, combined with HBM, will effectively kill off low-end GPUs of similar size, even ones with HBM of their own. With Zen and HBM, AMD can finally design 45-55W APUs for notebooks and go up against the Core i7 notebook chips with eDRAM. AMD with GCN and HBM will destroy Intel in terms of gaming/graphics and total compute performance, and Zen should bring competitive CPU performance, removing AMD's biggest disadvantage today.

Nvidia is the company that will get hurt the most, as they hold the lion's share of the notebook GPU market. An Intel notebook CPU + Nvidia GPU (even with HBM) will never be able to compete with an AMD APU with HBM in terms of perf/watt and perf/sq mm. The major advantage would be in ultrathin form factors, as AMD can bring the maximum compute/graphics to sub-15W laptop chips. In roughly 4-5 years Nvidia will be forced to either sell to Intel or risk going out of business, as the discrete GPU business will not be able to sustain the volumes and revenues needed to develop state-of-the-art high-performance GPUs on bleeding-edge process nodes.

The cannibalization of the discrete GPU market is a one-way street. Once HBM is available on APUs, the writing is on the wall for discrete GPUs, especially in notebooks. For desktops there is slightly better hope, as Nvidia will have the high-end big-die flagship compute/graphics chip and the extra TDP (250-300W) to push the envelope. For mobile GPUs, no such luck, as they are restricted to 100-120W max TDP.
 
Feb 19, 2009
10,457
10
76
That's the way I see it. Keep in mind, though, that iGPU+dGPU could be something everyone uses with the new APIs. It's typical of AMD to allow others to use tech they've developed. I just wish people would realize the importance of hardware compatibility between brands and appreciate what AMD does instead of chastising them.

I don't think it would be simple for developers on DX12 to implement multi-GPU across such different architectures; making Intel's iGPU cooperate with GCN or Maxwell/Pascal would not end well. :)

This is where I see AMD's advantage: they can have the same or similar uarch in their APU and dGPU. With DX12, the iGPU would no longer sit idle and useless like in our current Intel i5/i7 dGPU rigs.

@raghu78 I know some people here predict a looming death for dGPUs, but it won't happen. All that will happen is that the enthusiast market will have to pay much more for their dGPUs to make it worthwhile. It also won't ever die off if AMD has their way, with APU + dGPU (or even multiple dGPUs) all adding up to greater performance.
 

NTMBK

Lifer
Nov 14, 2011
10,439
5,788
136
I don't think it would be simple for developers on DX12 to implement multi-GPU across such different architectures; making Intel's iGPU cooperate with GCN or Maxwell/Pascal would not end well. :)

This is where I see AMD's advantage: they can have the same or similar uarch in their APU and dGPU. With DX12, the iGPU would no longer sit idle and useless like in our current Intel i5/i7 dGPU rigs.

DirectX abstracts away GPUs as it is; if DX12 multi-GPU does take off, then I expect it will work with mismatched ones just fine. (That's a big "if", of course...) I just hope someone builds GPU physics into their engine and runs it on the integrated graphics.
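
For the curious, this is roughly what that abstraction looks like from the host side: a minimal sketch of enumerating every adapter (iGPU and dGPU alike) and creating a D3D12 device on each. The helper name is my own invention, not from any shipping engine:

Code:
// Enumerate all adapters through DXGI and create a D3D12 device on each
// one that supports the API; at this level the vendor doesn't matter.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateAllDevices()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        ComPtr<ID3D12Device> device;
        // Succeeds for any vendor's adapter at feature level 11_0 or above.
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}

Each device then gets its own queues and command lists; whether splitting work across mismatched hardware actually pays off is exactly the open question.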
 

Noctifer616

Senior member
Nov 5, 2013
380
0
76
DirectX abstracts away GPUs as it is; if DX12 multi-GPU does take off, then I expect it will work with mismatched ones just fine. (That's a big "if", of course...) I just hope someone builds GPU physics into their engine and runs it on the integrated graphics.

We already know you can match an Intel iGPU with AMD/Nvidia dGPUs (Ashes of the Singularity supports that), so other combinations should be possible as well, unless either AMD or Nvidia does something to block it.
 
Feb 19, 2009
10,457
10
76
But if Intel's iGPU isn't FL12 compliant, it won't help where games use those features.

Even if developers implement DX12 compute-based physics, Intel's iGPU can't run it due to its lack of asynchronous compute functionality.

Unless Intel comes up with a new GPU uarch, they are not going to be relevant for cross-vendor multi-GPU DX12.
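
An engine can at least check this at runtime before deciding which GPUs to use. A minimal sketch of querying a device's highest supported feature level; the function name is made up for illustration:

Code:
#include <d3d12.h>

// Ask the device for the highest feature level it supports, so an engine
// can skip a GPU that isn't FL12-capable for multi-GPU work.
D3D_FEATURE_LEVEL QueryMaxFeatureLevel(ID3D12Device* device)
{
    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_0,
        D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0,
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS info = {};
    info.NumFeatureLevels = _countof(levels);
    info.pFeatureLevelsRequested = levels;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &info, sizeof(info));
    return info.MaxSupportedFeatureLevel;
}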
 
Feb 19, 2009
10,457
10
76
We already know you can match an Intel iGPU with AMD/Nvidia dGPUs (Ashes of the Singularity supports that), so other combinations should be possible as well, unless either AMD or Nvidia does something to block it.

Have you got more info on that? I'd love to read it and see what they implement on Intel's iGPU in that RTS.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
DirectX abstracts away GPUs as it is; if DX12 multi-GPU does take off, then I expect it will work with mismatched ones just fine. (That's a big "if", of course...) I just hope someone builds GPU physics into their engine and runs it on the integrated graphics.

Yeah, something like this. It's not like CrossFire/SLI, where the multiple GPUs are rendering the entire scene. You can assign specific duties to one GPU, similar to how PhysX works with multi-GPU setups.
 

NTMBK

Lifer
Nov 14, 2011
10,439
5,788
136
But if Intel's iGPU isn't FL12 compliant, it won't help where games use those features.

Even if developers implement DX12 compute-based physics, Intel's iGPU can't run it due to its lack of asynchronous compute functionality.

Unless Intel comes up with a new GPU uarch, they are not going to be relevant for cross-vendor multi-GPU DX12.

You don't need async compute if you are dedicating the device to nothing but physics. Async is there so that you don't "block" the graphics workload while working on the compute workload; if graphics and compute are on separate devices entirely, it doesn't matter.
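
In other words, the iGPU can simply be handed a compute-only queue and never touch graphics at all. A rough sketch, assuming a D3D12 device has already been created on the integrated GPU (the function name is made up):

Code:
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create a compute-only queue on the iGPU's device for physics work.
// The whole chip does nothing but compute, so there is no graphics
// workload on it to block and nothing for async compute to overlap.
ComPtr<ID3D12CommandQueue> CreatePhysicsQueue(ID3D12Device* igpuDevice)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> queue;
    igpuDevice->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}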
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
I don't think it would be simple for developers on DX12 to implement multi-GPU across such different architectures; making Intel's iGPU cooperate with GCN or Maxwell/Pascal would not end well. :)

This is where I see AMD's advantage: they can have the same or similar uarch in their APU and dGPU. With DX12, the iGPU would no longer sit idle and useless like in our current Intel i5/i7 dGPU rigs.

@raghu78 I know some people here predict a looming death for dGPUs, but it won't happen. All that will happen is that the enthusiast market will have to pay much more for their dGPUs to make it worthwhile. It also won't ever die off if AMD has their way, with APU + dGPU (or even multiple dGPUs) all adding up to greater performance.

I don't predict a death for the dGPU market, especially in desktops, as VR will keep demanding ever more GPU horsepower. But I definitely see a trend where the shrinking dGPU market (and its revenues) makes it impossible, or at least damn hard, to sustain designing and manufacturing these chips on bleeding-edge process nodes if you depend entirely on discrete GPUs for your revenue. Intel and AMD do not, while Nvidia does. That's the fundamental difference.

I also believe software is going to gradually change and adapt to exploit the benefits of a heterogeneous computing architecture with unified memory, where the CPU and GPU have low-latency, high-bandwidth communication and can share pointers. We are already on the first generation of unified-memory consoles. The next-gen consoles will be even better, with 16-32 GB of unified HBM. With AMD's Zen-based APUs also supporting a unified memory architecture with HBM, and Intel most likely developing its own unified memory architecture with HBM, the vast majority of gaming devices will be based on unified memory architectures. So software developers are eventually going to have to improve their software design to extract the best performance from such architectures.
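
To give a flavour of what pointer sharing means in practice, here's a rough host-side sketch using OpenCL 2.0's shared virtual memory, which GCN already exposes; context/queue/kernel setup is omitted and the function name is my own:

Code:
#include <CL/cl.h>

// One allocation, visible to both CPU and GPU through the same pointer,
// with no explicit staging copies between them.
void SharePointerWithGpu(cl_context ctx, cl_command_queue q, cl_kernel kernel)
{
    const size_t n = 1024;
    float* data = (float*)clSVMAlloc(ctx, CL_MEM_READ_WRITE, n * sizeof(float), 0);

    // CPU writes through the raw pointer (coarse-grained SVM needs a map).
    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_WRITE, data, n * sizeof(float), 0, NULL, NULL);
    for (size_t i = 0; i < n; ++i) data[i] = (float)i;
    clEnqueueSVMUnmap(q, data, 0, NULL, NULL);

    // GPU receives the very same pointer as a kernel argument.
    clSetKernelArgSVMPointer(kernel, 0, data);
    size_t global = n;
    clEnqueueNDRangeKernel(q, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
    clFinish(q);

    clSVMFree(ctx, data);
}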
 

flopper

Senior member
Dec 16, 2005
739
19
76
I also believe software is going to gradually change and adapt to exploit the benefits of a heterogeneous computing architecture with unified memory, where the CPU and GPU have low-latency, high-bandwidth communication and can share pointers. We are already on the first generation of unified-memory consoles. The next-gen consoles will be even better, with 16-32 GB of unified HBM. With AMD's Zen-based APUs also supporting a unified memory architecture with HBM, and Intel most likely developing its own unified memory architecture with HBM, the vast majority of gaming devices will be based on unified memory architectures. So software developers are eventually going to have to improve their software design to extract the best performance from such architectures.

Unless someone pays the developers not to do it.
If it's not a good thing for their technology, then they will try to make sure no one uses that approach.
 

DeathReborn

Platinum Member
Oct 11, 2005
2,786
789
136
Ahh, but if they buy up all of it and use it, then others can't. For example, the guy who made Softsoap bought multiple years' production of the right pumps for the soap dispensers so the product couldn't be duplicated until it had established itself.

If AMD can buy the entire HBM2 supply, then yields must be horrible. It's feasible but unlikely that Nvidia could stick with GDDR5 for 2016 and go to HMC in 2017 (maybe late 2016).

That is a bit OT though, so to get back on topic...

Does anyone really think there will be 8GB HBM1 variants of Fury? I just don't see it myself.
 

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
If AMD can buy the entire HBM2 supply, then yields must be horrible. It's feasible but unlikely that Nvidia could stick with GDDR5 for 2016 and go to HMC in 2017 (maybe late 2016).

That is a bit OT though, so to get back on topic...

Does anyone really think there will be 8GB HBM1 variants of Fury? I just don't see it myself.


My guess, and that's all it is, is that AMD will launch HBM2 with 8GB. SK Hynix just announced it is ramping up production of HBM1.
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
[image]