
[GDC] Developers talking about DX12.1 features

Azix

Golden Member
What do you guys think about these?

http://schedule.gdconf.com/session/...ce-just-cause-3-case-study-presented-by-intel

This session will cover the changes Avalanche made to their engine to match DX12's pipeline, the implementation process, and best practices from DX12. We'll also showcase and discuss the PC-exclusive features enabled thanks to DX12, such as Order Independent Transparency and G-buffer blending using Raster Ordered Views, and light assignment for clustered shading using Conservative Rasterization.
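To see why Order Independent Transparency (and the ROVs that enable it) matters, here is a toy grayscale sketch of standard "over" alpha blending. It is illustrative Python only, not D3D12 code, and the names (`over`, `composite`, the sample fragments) are invented for the example: unsorted transparent fragments blend to the wrong color, which is the problem per-pixel ordered access is meant to solve.

```python
# Toy model: "over" alpha compositing is order-dependent, so
# transparent fragments blended in submission order give a
# different (wrong) result than back-to-front order. ROV-based
# OIT lets a pixel shader impose that order itself.

def over(src_color, src_alpha, dst_color):
    # Composite one fragment over the current pixel value (grayscale).
    return src_alpha * src_color + (1.0 - src_alpha) * dst_color

def composite(fragments, background=0.0):
    # Correct result: blend back-to-front (largest depth first).
    color = background
    for c, a, _depth in sorted(fragments, key=lambda f: -f[2]):
        color = over(c, a, color)
    return color

def composite_unsorted(fragments, background=0.0):
    # Naive result: blend in submission order, ignoring depth.
    color = background
    for c, a, _depth in fragments:
        color = over(c, a, color)
    return color

# Two 50%-opaque fragments as (gray value, alpha, depth),
# submitted near-first (the worst case for unsorted blending).
frags = [(1.0, 0.5, 1.0), (0.2, 0.5, 5.0)]

print(composite(frags))           # 0.55 (correct back-to-front)
print(composite_unsorted(frags))  # 0.35 (wrong, submission order)
```

The two results differ, which is exactly why transparency traditionally required a CPU-side depth sort; ROVs move that ordering guarantee into the rasterizer.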

http://schedule.gdconf.com/session/...ng-for-multi-core-and-dx12-presented-by-intel

Codemasters present a post-mortem on their new rendering engine used for F1 2015, detailing how they balanced the apparently opposing goals of optimizing for mainstream processor graphics, high-end multi-core, and DX12. The F1 2015 engine is Codemasters' first to target the eighth generation of consoles and PCs, with a new engine architecture designed from scratch to distribute the game's workload across many cores, making it a great candidate for DX12 and for utilising the processing power of high-end PCs. This session will show the enhanced visuals created using a threaded CPU-based particle system without increasing GPU demands, and also cover the changes made to the engine while moving from DX11 to DX12. We will also discuss the graphics effects added using the new DX12 features Raster Ordered Views (AVSM and decal blending) and Conservative Rasterization (voxel-based ray tracing), adding even greater realism to the F1 world.

What significance is there to conservative rasterization and ROVs?

What I find strange is that they are specifically singling out these features. Is it that core DX12 has been presented on to death, or what? They surely aren't using the 12.1 features on consoles, so the significance should be less.

Though I guess it IS Intel making the presentations. I get that it's a developer conference, but it seems silly, since these things aren't going to run great on any Intel hardware anyway.

http://www.notebookcheck.net/Just-Cause-3-Notebook-Benchmarks.156117.0.html
 
Conservative rasterization has many applications, as shown in the graphics literature, and ROVs are useful for multi-fragment effects ...

I agree that if you're after the best possible experience Intel hardware won't deliver it, but that doesn't mean they don't want to promote their own hardware by making use of those features ...
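The "many applications" point can be made concrete with a toy grid sketch. This is illustrative Python, not GPU code, and the triangle, grid size, and helper names are invented; the conservative test here is a simplified approximation (center plus pixel corners) of the real hardware guarantee, which also catches edges that clip a pixel without containing a corner. The useful property is that conservative coverage never misses a touched pixel, which is why it suits voxelization and binning lights into clusters.

```python
# Sketch: standard vs. conservative rasterization coverage of a
# triangle on a pixel grid. Standard covers a pixel only if the
# triangle contains the pixel *center*; conservative covers every
# pixel the triangle touches at all.

def edge(ax, ay, bx, by, px, py):
    # Signed area test: >= 0 means p is left of edge a->b (CCW winding).
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def inside(tri, px, py):
    (ax, ay), (bx, by), (cx, cy) = tri
    return (edge(ax, ay, bx, by, px, py) >= 0 and
            edge(bx, by, cx, cy, px, py) >= 0 and
            edge(cx, cy, ax, ay, px, py) >= 0)

def standard_coverage(tri, w, h):
    # Pixel covered iff triangle contains its center.
    return {(x, y) for x in range(w) for y in range(h)
            if inside(tri, x + 0.5, y + 0.5)}

def conservative_coverage(tri, w, h):
    # Approximation: covered if the triangle contains the center or
    # any corner of the pixel square.
    def touched(x, y):
        pts = [(x + 0.5, y + 0.5), (x, y), (x + 1, y),
               (x, y + 1), (x + 1, y + 1)]
        return any(inside(tri, px, py) for px, py in pts)
    return {(x, y) for x in range(w) for y in range(h) if touched(x, y)}

tri = [(0.6, 0.6), (7.4, 1.2), (3.0, 6.8)]  # a CCW triangle
std = standard_coverage(tri, 8, 8)
cons = conservative_coverage(tri, 8, 8)
assert std <= cons  # conservative coverage is always a superset
```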
 
Yup, those are the first new rasterization features we've seen in years, and both are very useful. They can also be used to make things faster, so they are good even for Intel's chips in real gaming situations.
In general it would be great to see new features added for developers; we have had some really silly limitations for a long time.
 
It's added performance, which Intel needs. Despite what we enthusiast forum-goers think, the vast majority of gamers game on Intel GPUs.

I'm glad to see this and if it makes it into games, that's a bonus for PC gamers since consoles don't benefit.
 
Same old song, though: a few games to "feature it", then it takes years to show up in games.
Perhaps, but it's always good to get those into the wild instead of holding features hostage to "no one will ever use shaders".

Developers will use good features exposed to them, although not always in the way originally intended.

BTW, both these features are exposed in DX11.3 as well.
 
I can't wait to see what happens in the brave new DX12 world.
A lot for mobile/tablets and the like; for desktops... not so much.
Texture streaming and higher draw call counts will keep games from having large frame-time variance and stuttering, if they get implemented correctly, so that's a big thing. But if anyone is waiting for multiples of the frame rate and multi-core being used (the way they think), they are going to be disappointed.
 
A lot for mobile/tablets and the like; for desktops... not so much.
Texture streaming and higher draw call counts will keep games from having large frame-time variance and stuttering, if they get implemented correctly, so that's a big thing. But if anyone is waiting for multiples of the frame rate and multi-core being used (the way they think), they are going to be disappointed.

Do you have a source for this?
 
A lot for mobile/tablets and the like; for desktops... not so much.
Texture streaming and higher draw call counts will keep games from having large frame-time variance and stuttering, if they get implemented correctly, so that's a big thing. But if anyone is waiting for multiples of the frame rate and multi-core being used (the way they think), they are going to be disappointed.

There are a staggering number of games on the PC which could benefit from improved draw call counts. That, and the improved multi-threading, is really DX12's biggest immediate benefit.
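The draw call argument can be sketched with a toy cost model. All numbers here are invented, purely illustrative: each draw call pays a fixed CPU-side validation/state-change overhead, so many small draws cost far more CPU time than one batched (instanced) draw, and a lower-overhead API raises the draw call budget for the same CPU time.

```python
# Toy model of CPU-side draw submission cost. The overhead numbers
# are hypothetical and exist only to show the shape of the problem:
# per-call fixed cost dominates when draws are small.

PER_CALL_OVERHEAD_US = 10.0   # fixed cost per draw call (hypothetical)
PER_OBJECT_COST_US = 0.5      # per-object record cost (hypothetical)

def naive_submit(n_objects):
    # One draw call per object: pays the fixed overhead N times.
    return n_objects * (PER_CALL_OVERHEAD_US + PER_OBJECT_COST_US)

def instanced_submit(n_objects):
    # One instanced draw covering all N objects: overhead paid once.
    return PER_CALL_OVERHEAD_US + n_objects * PER_OBJECT_COST_US

naive = naive_submit(10_000)      # 105,000 us of CPU time
batched = instanced_submit(10_000)  # 5,010 us of CPU time
```

Lowering `PER_CALL_OVERHEAD_US` (which is roughly what a thinner API does) shrinks the naive cost directly, which is why the same scene can issue many more draws under DX12 before the CPU becomes the bottleneck.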
 
There are a staggering number of games on the PC which could benefit from improved draw call counts. That, and the improved multi-threading, is really DX12's biggest immediate benefit.

The only thing is that every game is, and will be, designed for the consoles, so the max number of draw calls is set in stone, so to speak.

Again, as far as the improved multi-threading goes, just look at Mantle games: still slower on FX 8-cores than on i3s... it only makes sense with very slow cores, like the sub-1.5 GHz almost-ARM cores that the consoles have.
 
The only thing is that every game is, and will be, designed for the consoles, so the max number of draw calls is set in stone, so to speak.
Yes, and if they use a low-level API on consoles, the limit is an order of magnitude (or two) above what developers use on DX11.
 
Yes, and if they use a low-level API on consoles, the limit is an order of magnitude (or two) above what developers use on DX11.

They have already used low-level APIs on consoles, for many, many years.
That's why console ports run so badly on PCs.
 
The only thing is that every game is, and will be, designed for the consoles, so the max number of draw calls is set in stone, so to speak.

Again, as far as the improved multi-threading goes, just look at Mantle games: still slower on FX 8-cores than on i3s... it only makes sense with very slow cores, like the sub-1.5 GHz almost-ARM cores that the consoles have.

Max draw calls will not be set in stone because of consoles... Plenty of games can add higher scaling factors and additional "stuff" for PC, and have done so for years (the "Ultra" setting). Anything above the console threshold won't receive the same level of developer attention, sure, but I guarantee we will see DX12 games using more draw calls than is possible on the Xbone/PS4 consoles.
 
Max draw calls will not be set in stone because of consoles... Plenty of games can add higher scaling factors and additional "stuff" for PC, and have done so for years (the "Ultra" setting).
Thus the "so to speak"...

But is it really plenty of games?
Additional filters, for example, do not add draw calls.
 
Max draw calls will not be set in stone because of consoles... Plenty of games can add higher scaling factors and additional "stuff" for PC, and have done so for years (the "Ultra" setting).

This "additional stuff" doesn't add draw calls in many cases. Increased texture or shadow resolution, higher-quality filters, even higher-quality meshes do not increase the draw call count.
 
This "additional stuff" doesn't add draw calls in many cases. Increased texture or shadow resolution, higher-quality filters, even higher-quality meshes do not increase the draw call count.

Except it does. In Fallout 4, go change the shadow distance and measure how many draw calls the game spits out: it goes up, a lot. Any game with variable visibility-distance settings (GTA V, Bethesda games, Watch Dogs?, Assassin's Creed?) and any game with increased object-density settings has it. Of course higher-resolution textures and shaders don't add draw calls; that's not at all what I'm saying...

We'll see whether I'm right or you're right when DX12 games come out and we can evaluate whether PC issues more draw calls than consoles. I am 100% sure some PC games will have settings that issue more draw calls, just like today, thus disproving that console hardware is a hard limit on PC draw call count.
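The distance-setting claim above reduces to a simple culling sketch. This is illustrative Python with an invented scene (random object distances) and invented slider values; the point is only that if a renderer issues one draw per object that survives distance culling, raising a draw/shadow distance slider directly raises the draw call count.

```python
import random

# Toy scene: 5000 objects at random distances from the camera.
# Seeded so the sketch is reproducible.
random.seed(42)
scene = [random.uniform(0.0, 1000.0) for _ in range(5000)]

def draw_calls(max_distance):
    # One draw call per object within the visibility distance
    # (i.e., per object that survives distance culling).
    return sum(1 for d in scene if d <= max_distance)

console_setting = draw_calls(300.0)  # hypothetical console draw distance
ultra_setting = draw_calls(900.0)    # hypothetical PC "Ultra" distance
assert ultra_setting > console_setting
```

Instancing and batching blunt this in real engines, but any setting that grows the set of visible objects (draw distance, object density, shadow-casting range) grows submitted work in the same way.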
 
I'm personally more interested in what DX12 can do for performance than in what it can do for graphics.

I think most studios will use DX12 to keep requirements lower. DICE, for example, is probably more interested in Battlefield 5 still targeting older CPUs (Phenom II, C2Q) than in raising the bar to a Skylake i7.
 
I'm personally more interested in what DX12 can do for performance than in what it can do for graphics.

I think most studios will use DX12 to keep requirements lower. DICE, for example, is probably more interested in Battlefield 5 still targeting older CPUs (Phenom II, C2Q) than in raising the bar to a Skylake i7.

The one does not exclude the other; in Frostbite games, for example, you can scale graphics waaaaaaaay down as well as waaaaaaay up.

I totally get what headfoot says, and he is correct; it's just that a few games are not "plenty" of games. Most will keep console levels as the maximum and only a very few games will do anything extra. Just like it always was.
 