Why I'm excited about Zen/GCN + DX12


Azix

Golden Member
Apr 18, 2014
Why? It's not far removed from what they have on 28nm. The APUs will either have 8 cores running one thread each or 4 cores running two threads each, plus better GCN. HBM is almost a given. Price depends on yields, I guess. It's nothing drastic.

For DX12 the better config might be 4 CPU cores handling more than one thread each plus beefier GCN, rather than a full 8-core chip with less powerful graphics.

Could an APU replace system RAM with HBM2? One stack is 8GB; four is 32GB. I hope they make the right plans. Lots of interesting tech.
 

Red Hawk

Diamond Member
Jan 1, 2011
Because... console ports, the same reason we (probably) won't see any other game using multiple GPUs any time soon; they'll stick with SLI/CF.

Ashes of the Singularity is the one and only game written for it, and no other publisher/game house has even hinted at doing something similar.

Hopefully it will catch on at some point, but don't hold your breath.

Direct3D 12 feature levels matter as long as they're supported by consoles. From my understanding, the AMD graphics chips in the consoles are compatible with feature level 12.0, as they're essentially GCN 1.1 designs. So even if all a developer cares about is porting the same game from consoles to PC, they should be able to make use of 12.0 features. 12.1 features aren't going to be critical; at most we'll see them used in GameWorks-supported titles. This thread is about mGPU though, and... yeah, I doubt we'll see much support for it. Consoles don't use it, so only a few developers will even bother.
 

tential

Diamond Member
May 13, 2008
Direct3D 12 feature levels matter as long as they're supported by consoles. From my understanding, the AMD graphics chips in the consoles are compatible with feature level 12.0, as they're essentially GCN 1.1 designs. So even if all a developer cares about is porting the same game from consoles to PC, they should be able to make use of 12.0 features. 12.1 features aren't going to be critical; at most we'll see them used in GameWorks-supported titles. This thread is about mGPU though, and... yeah, I doubt we'll see much support for it. Consoles don't use it, so only a few developers will even bother.

I'm wait-and-see with DX12 mGPU. I hear a lot about it... if it works, great. But I'm not hyping myself up over it for a minute.

Actually no, I wasn't aware Skylake supports the new features since I didn't bother reading the reviews of yet another 5% IPC gain SKU from Intel.

I already corrected myself in a post above, but think whatever you want.

Oh, well then edit your OP lol....
 

railven

Diamond Member
Mar 25, 2010
I'm wait-and-see with DX12 mGPU. I hear a lot about it... if it works, great. But I'm not hyping myself up over it for a minute.



Oh, well then edit your OP lol....

Gonna have to agree with this. OP is misleading. The Intel chip may or may not just sit there idling, doing nothing.

OP's claim can't be verified, and with new info in the thread it seems the claim is already invalid.
 
Feb 19, 2009
Post-Skylake, with its DX12 support, asymmetric mGPU should also work for Intel.
 

Borealis7

Platinum Member
Oct 19, 2006
But what about micro-stutter and all the problems associated with multi-GPU rendering?
I'm not sure the performance boost is going to be worth it.
 

3DVagabond

Lifer
Aug 10, 2009
But what about micro-stutter and all the problems associated with multi-GPU rendering?
I'm not sure the performance boost is going to be worth it.

FWIU it's not your typical AFR rendering like we've had before. They can actually take certain tasks and offload them to the iGPU. Similar in concept to offloading PhysX to a separate card now. So, it shouldn't create microstuttering.
 

TheELF

Diamond Member
Dec 22, 2012
But what about micro-stutter and all the problems associated with multi-GPU rendering?
That's what happens with SLI/CF, where each GPU builds half the image and all hell breaks loose trying to synchronize them.

mGPU will be totally different. Have a look at this video starting at 3:40: the integrated GPU is tasked with rendering only part of the objects, in real time and all the time, so there are no issues with synchronization. These objects could even be updated at a slower pace and nobody would notice (a rough sketch of the idea follows below the link).

https://www.youtube.com/watch?feature=player_detailpage&v=9cvmDjVYSNk#t=227
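
To make that split concrete, here's a minimal C++ sketch of the idea, using my own stand-in types (SceneObject, submit) and no real D3D12 or adapter code: the dGPU redraws the latency-critical objects every frame, while the iGPU owns a low-priority subset and only refreshes it every other frame.

```cpp
// Sketch only: stand-in types, no real D3D12/adapter code.
#include <cstdio>
#include <string>
#include <vector>

struct SceneObject {
    std::string name;
    bool lowPriority;   // e.g. distant scenery the iGPU can own
};

// Pretend "submit" just records which adapter drew what this frame.
static void submit(const char* adapter, const SceneObject& obj, int frame) {
    std::printf("frame %d: %s draws %s\n", frame, adapter, obj.name.c_str());
}

int main() {
    std::vector<SceneObject> scene = {
        {"player",  false}, {"enemies", false},
        {"terrain", true},  {"skybox",  true},
    };

    for (int frame = 0; frame < 4; ++frame) {
        // dGPU renders the latency-critical objects every frame.
        for (const auto& obj : scene)
            if (!obj.lowPriority) submit("dGPU", obj, frame);

        // iGPU owns its subset and only refreshes it every other frame;
        // the previous result would simply be reused in between.
        if (frame % 2 == 0)
            for (const auto& obj : scene)
                if (obj.lowPriority) submit("iGPU", obj, frame);
    }
}
```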
 

guskline

Diamond Member
Apr 17, 2006
The OP's link is a great read. I have upgraded both of my rigs below to Win 10 64-bit. Is there any DX12 demo I can download to compare the performance of my single GTX 980 Ti in rig 1 to my dual R9 290s in CF in rig 2?
 

Red Hawk

Diamond Member
Jan 1, 2011
FWIU it's not your typical AFR rendering like we've had before. They can actually take certain tasks and offload them to the iGPU. Similar in concept to offloading PhysX to a separate card now. So, it shouldn't create microstuttering.

Huh. I wasn't aware of that. It sounds smarter than how Crossfire/SLI has worked up to this point; it very well could work better.
 

railven

Diamond Member
Mar 25, 2010
But what about micro-stutter and all the problems associated with multi-GPU rendering?
I'm not sure the performance boost is going to be worth it.

If this shows any sign of microstutter, I'm turning it off.

I'm so sick of microstutter. I should just take a rusty screwdriver to my eyes at this point.

I'm starting to see it in console games! I couldn't stomach Halo: MCC due to the microstutter issues. The game felt like it was running at 20 FPS half the time.
 

therealnickdanger

Senior member
Oct 26, 2005
If the game writers don't want to write the code, then it's not going to happen, not even with Zen.

And this is why my primary gaming rig won't see Windows 10 until the games arrive and go beyond proof-of-concept. I mean, how long have we had CF/SLI? And look at how buggy and broken many homogeneous multi-GPU implementations have been despite the efforts of AMD/NVIDIA! CF/SLI are only recommended in extreme, title-specific scenarios. Now we expect a miraculous implementation of heterogeneous multi-GPU?

Y'all have too high expectations of game developers. The best-case scenario is that we see better implementation of SLI/CF using identical GPUs, which is in and of itself worthwhile. The worst-case scenario is that nothing really changes.

[Attached image: I_want_to_believe5.jpg]
 

Headfoot

Diamond Member
Feb 28, 2008
Huh. I wasn't aware of that. It sounds smarter than how Crossfire/SLI has worked up to this point; it very well could work better.

Where implemented, it will almost every time be better than Crossfire/SLI. Current multi-card techniques are kludges built after the fact. Ingenious kludges, but kludges all the same. Look at SFR via Mantle in Civ: Beyond Earth: dramatically lowered latency and dramatically increased smoothness. Quite the opposite of microstuttering.

DX12 "heterogeneous multi-GPU" should have happened a long, long time ago, but there has never been an economic reason to do so despite there being obvious technical reasons.

It shouldn't be all that much more difficult for engines which are already job-based (in order to take advantage of multicore CPUs). Just make sure your rendering tasks are also job-based and you can farm them out to one adapter or the other. You could even run a heuristic benchmark to find the real rendering speed of each adapter (for GPUs with disparate speeds, e.g. iGPU and dGPU) and then allocate the jobs on a sliced basis between the two of them according to their measured capability; see the sketch below.
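
A minimal sketch of that slicing heuristic, with fake adapters, fake jobs, and a sleep-based stand-in for a GPU timing run (real code would use something like D3D12 timestamp queries): benchmark each adapter once, then divide the frame's job list proportionally to the measured rates.

```cpp
// Sketch of the "measure, then slice" idea with made-up Adapter structs.
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

struct Adapter {
    const char* name;
    int simulatedCostUs;   // stand-in for how long one job takes on this GPU
};

// Crude benchmark: time a fixed batch of dummy jobs on each adapter.
static double jobsPerMs(const Adapter& a) {
    using clock = std::chrono::steady_clock;
    const int batch = 20;
    auto start = clock::now();
    for (int i = 0; i < batch; ++i)
        std::this_thread::sleep_for(std::chrono::microseconds(a.simulatedCostUs));
    double ms = std::chrono::duration<double, std::milli>(clock::now() - start).count();
    return batch / ms;
}

int main() {
    std::vector<Adapter> adapters = {{"dGPU", 100}, {"iGPU", 400}};
    const int totalJobs = 120;  // e.g. per-frame rendering jobs from the job system

    std::vector<double> rates;
    double sum = 0.0;
    for (const auto& a : adapters) {
        rates.push_back(jobsPerMs(a));
        sum += rates.back();
    }

    // Slice the job list proportionally to measured throughput.
    int assigned = 0;
    for (size_t i = 0; i < adapters.size(); ++i) {
        int share = (i + 1 == adapters.size())
                        ? totalJobs - assigned
                        : static_cast<int>(totalJobs * rates[i] / sum);
        assigned += share;
        std::printf("%s: %.1f jobs/ms -> %d of %d jobs\n",
                    adapters[i].name, rates[i], share, totalJobs);
    }
}
```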

The problem will be whether devs implement it or not.

The only way I see mass adoption is if the big three engines (Unity, Unreal, CryEngine) build it in at the engine level so that it doesn't require much additional dev time per game.
 

biostud

Lifer
Feb 27, 2003
Why not just use more space on the CPU die for CPU logic and more GPU logic on the GPU die?
 

AtenRa

Lifer
Feb 2, 2009
Why not just use more space on the CPU die for CPU logic and more GPU logic on the GPU die?

Yeah, I would take an 8-core + huge dGPU over a quad-core + iGPU + huge dGPU.

I believe developers will exploit the x86 multicore CPU features of DX12 before they use different iGPU architectures for mGPU in games.

I could only see using the iGPU for physics, but again, an 8-core CPU may be fine for that workload. Unless you really need to simulate more things, but then the small iGPU won't have the compute performance needed for the task.
 

Headfoot

Diamond Member
Feb 28, 2008
Why not just use more space on the CPU die for CPU logic and more GPU logic on the GPU die?

I agree too. But it seems the ship has sailed, and iGPUs and quad cores are here to stay, at least for the foreseeable future. At least DX12 multi-adapter gives us a glimmer of hope that we might get some value out of the iGPU silicon we've paid for.
 

tential

Diamond Member
May 13, 2008
If this shows any sign of microstutter, I'm turning it off.

I'm so sick of microstutter. I should just take a rusty screwdriver to my eyes at this point.

I'm starting to see it in console games! I couldn't stomach Halo: MCC due to the microstutter issues. The game felt like it was running at 20 FPS half the time.
I'm not expecting this to work or be widely supported. We've been promised multi-GPU too many times for me to care.

If it does work on the vast majority of DX12 titles, even when mixing GPUs... I'll invest in some knee pads and head over to a Microsoft HQ restroom to perform my thanks, since Windows 10 was a free upgrade.
 

n0x1ous

Platinum Member
Sep 9, 2010
I'm not expecting this to work or be widely supported. We've been promised multi-GPU too many times for me to care.

If it does work on the vast majority of DX12 titles, even when mixing GPUs... I'll invest in some knee pads and head over to a Microsoft HQ restroom to perform my thanks, since Windows 10 was a free upgrade.

Yeah, I mean with us barely able to get any effort from a lot of devs for PC ports, expecting most of them to go the extra mile with this seems unrealistic. We'll probably get support in some big titles, like anything on Frostbite, which is great, but my guess is it will see even less support than current mGPU.
 

railven

Diamond Member
Mar 25, 2010
Yeah, I mean with us barely able to get any effort from a lot of devs for PC ports, expecting most of them to go the extra mile with this seems unrealistic. We'll probably get support in some big titles, like anything on Frostbite, which is great, but my guess is it will see even less support than current mGPU.

The only way I see devs supporting it fully is if MSFT makes it mandatory on XBone so XBone users can daisy chain consoles and finally get real 1080p/60.

Otherwise, I think publishers are still gonna shaft PC ports and devs are going to do the bare minimum before they get shoveled onto the next port-job/game-title.
 

Headfoot

Diamond Member
Feb 28, 2008
Bad PC porting devs will probably continue to make bad PC ports. I'm looking forward to what the PC leaders do though: Unreal, Unity, CryEngine, Rockstar (after their excellent GTA V port), Croteam, 4A Games.

I'm REALLY interested to see what PC-only RTS devs like Creative Assembly do with the increased draw call capacity. Though given the bugginess of CA releases, I don't expect them to get multi-adapter working properly until a year after the release of whatever game is their first DX12 one. I can see CA doing multi-adapter GPU though; remember they had a special code path for Intel iGPUs using a special extension on Haswell and newer iGPUs. I wouldn't be surprised if Intel and CA worked together again to get Intel iGPUs working with multi-adapter in the next-gen DX12 CA engine.
 
Feb 19, 2009
Why not just use more space on the CPU die for CPU logic and more GPU logic on the GPU die?

Intel's Skylake is reported to be, IIRC, ~60mm2 for the CPU portion, with the other ~60mm2 being the iGPU.

That's 4 cores with Hyper-Threading in 60mm2.

If they expanded that to 300mm2 there would be room for a lot more cores, but the problem becomes that so few games would utilize them, and even fewer apps would care about all those cores... it's kinda wasted.
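
Taking those area figures at face value, a back-of-envelope calculation (my assumption of naive linear scaling, ignoring uncore, L3, and I/O, so it's an upper bound at best) looks like this:

```cpp
// Back-of-envelope only: assumes core area scales linearly and ignores
// uncore, L3, and I/O, so treat the result as an optimistic upper bound.
#include <cstdio>

int main() {
    const double coreBlockMm2  = 60.0;   // ~4 cores + HT per the post
    const double coresPerBlock = 4.0;
    const double targetDieMm2  = 300.0;

    double cores = targetDieMm2 / coreBlockMm2 * coresPerBlock;
    std::printf("~%.0f cores in %.0f mm^2 under naive linear scaling\n",
                cores, targetDieMm2);
}
```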

Whereas the GPU can and will use the extra capacity. So there's more potential bang for the buck in having a larger APU with a big iGPU on it, offering excellent graphics performance out of the box for a good gaming experience, but capable of contributing when gamers have a dGPU.

I look at it from a consumer/gamer's PoV, but Intel probably won't give a damn and will continue to sell the smallest CPU for the maximum $ they can get away with.

Currently, Intel's approach makes sense because gamers on a dGPU rig won't fork out a lot of $ for a better APU when it's idling in games, but DX12 will hopefully provide an incentive for gamers to fork out more for a better iGPU, and thus spur Intel to make a big iGPU to cater to the enthusiast market.

I'm sure AMD is already planning a big HBM2 APU, since they've already mentioned it for servers; it'll filter down to consumers as long as there are potential gains. As a gamer, I'm willing to pay more for an APU if I know its graphics aren't wasted when I've got a dGPU.

As for devs implementing this, it's a matter of whether the big engines support it or not.
 

NTMBK

Lifer
Nov 14, 2011
I hope Creative Assembly will use the IGP for some GPU compute, to relieve their CPU bottlenecks.
 

Sabrewings

Golden Member
Jun 27, 2015
The fact that this relies heavily on developer implementation takes the wind out of my sails. I doubt most developers will care enough to implement it well, or at all.