AMD confirms feature-level 12_0 for GCN maximum


sontin

Diamond Member
Sep 12, 2011
3,273
149
106
No, he said that GCN supports Feature Level 12_0 but didn't confirm that every GCN architecture would do this.

However there is an update and AMD has confirmed that only GCN1.1+ will support FL 12_0:
AMD has now confirmed that not all GCN GPUs support DirectX 12 FL 12_0. This is the case from the second generation onward, while the first is limited to Feature Level 11_1.
 

Noctifer616

Senior member
Nov 5, 2013
380
0
76
No, he said that GCN supports Feature Level 12_0 but didn't confirm that every GCN architecture would do this.

The translated bit clearly says "all currently available variations of GCN".

So clearly there was a mistake made before the update which the update then corrects.
 

cbrunny

Diamond Member
Oct 12, 2007
6,791
406
126
Because there are TONS of people here including you who can't read the specs ...

No one's arguing the facts ...

There are just people out in the world who are not educated enough to grasp the subject at hand ...

Don't call me stupid. I have plenty of education. None of it happens to be in Direct X, GPU drivers, or processor architecture. But I'm not stupid.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
No, he said that GCN supports Feature Level 12_0 but didn't confirm that every GCN architecture would do this.

However there is an update and AMD has confirmed that only GCN1.1+ will support FL 12_0:

AMD confirm this, AMD confirm that blah blah blah ...

Name please, or official PR from AMD themselves?
 

Noctifer616

Senior member
Nov 5, 2013
380
0
76
Don't call me stupid. I have plenty of education. None of it happens to be in Direct X, GPU drivers, or processor architecture. But I'm not stupid.

He isn't calling you stupid, he is just saying that you are not educated enough for the subject at hand, which would require knowledge of GPU architectures and graphics APIs.

He isn't saying you are not educated at all, just not educated in that specific field of knowledge.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
Don't call me stupid. I have plenty of education. None of it happens to be in Direct X, GPU drivers, or processor architecture. But I'm not stupid.

I DIDN'T say you were ...

If you felt offended, my apologies, but we need some people WHO can SERIOUSLY read specs and not have a bunch of journalists or those with fervent brand loyalty crapping up misinformation ...

Edit: Hell, I even expressed relief towards you when you took honest AMD PR over others ARGUING against a GRAPHICS PROGRAMMER'S THOUGHTS ...
 
Last edited:

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
But this isn't the meaning of the sentence. It's more like
"our GCN architecture is able to support the Feature Level 12_0".

In the end you need the second sentence, too. Without it you could get fooled by the translation.
 

Noctifer616

Senior member
Nov 5, 2013
380
0
76
But this isn't the meaning of the sentence.

It actually is that exact meaning and there is no misunderstanding there.

However, the AMD guy made a mistake, which was then corrected by AMD and added to the article in the update.
 
Last edited:

CU

Platinum Member
Aug 14, 2000
2,409
51
91
No matter what AMD PR says, why would it support the missing feature in OpenGL and not DirectX?
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
No matter what AMD PR says, why would it support the missing feature in OpenGL and not DirectX?

It doesn't ...

The feature was recently added in DirectX, according to the users at Beyond3D ...

Just some guys trying to cook up FUD is all I see ...
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
the Radeon 8500 could actually run Battlefield 2 (with a playable framerate); the GeForce 3 Ti 500 (faster than the 8500) and the GeForce 4 Ti 4600 (way faster) could not, due to the lack of PS1.4 support

Ha! At what settings was it faster? Are we talking it ran at 35-40 fps at 1024x768? Let's assess the reasonability of your argument here.

BF2 came out June 21, 2005. The Radeon 8500 64MB came out October 17, 2001. Believe me, I played games during that time on a ViewSonic 19" 1600x1200 CRT, and as a previous owner of an 8500 64MB, by the point you cite it was a pile of garbage. Do you know how far GPU hardware had come by that point?

On June 22, 2005, nV released the 7800GTX 256MB.



Voodoo Power Rankings have:

Radeon 8500 (DX8.1) -- 2.1 VP

By June 2005, I bet one could buy a 6600GT for $80-100 max

Geforce 6600GT (DX9.0c) -- 6.3 VP (3X faster than 8500)

Geforce 7800GTX 256MB (DX9.0c) -- 15.4 VP (7.33X faster than Radeon 8500)
http://forums.anandtech.com/showthread.php?t=2298406

Your argument doesn't make any sense since to play games on an 8500 at that time would have required insane compromises.

as for DX9, the 9700 was the first gen of DX9 and it could run DX9 games like Half-Life 2 with full settings. The trouble is that DX9 was quickly replaced by newer versions, especially the DX9c that the 6 series from Nvidia supported and the X8x0 series from ATI did not. That means the X850XT, one or two years after launch, simply couldn't play some new games while a 6 series could. The 6800's SM3.0 performance was not very good, but it allowed newer games to be played with reduced settings (while the X850 would display an error message and not launch the game), or allowed SM3.0 features to be enabled on early DX9c games (with a big cost to performance), like HDR in Counter-Strike: Source

Exact same story as above. I am not going to go digging up reviews and games. By the time SM3.0 came into play, the entire GeForce 6 stack had become outdated. How do I know? I had a 6600GT and I upgraded to a Radeon HD4890.

But let's go with your story:

Geforce 6800 Ultra 256MB (DX9.0c) -- 10.0 VP
Radeon HD 4890 2GB (DX10.1) -- 88 VP (8.8X faster than GeForce 6800U)

Again, your argument makes no sense. Neither the 9700Pro/9800Pro nor 6800Ultra/X850XT were good enough for modern DX9 games. I had a 1600x1200 monitor, which meant there was no way I could have bought a card and used it for 4-5 years as you want to imply. Most of us upgraded way more frequently in the past.

5 years from 2010 to 2015 had a lot of stability with the OS and API, and even with the Nvidia architecture overall, while in the past we were used to a lot more changes. A 480 or even a 460 can play current games a lot better than a 2000 card could in 2005 or a 2005 card could in 2010. 5-year-old cards are more relevant now than they used to be; I've just finished Witcher 3 with a Radeon 5800 o_O

You mean Radeon 8500, not 5800? Look, if you like gaming at 800x600 or 1024x768 at 30 fps with everything on LOW, that's your choice, but don't try claiming that a GTX460, 480 or especially a Radeon 8500 is going to provide a good TW3 experience.

Fact of the matter is, Fermi and Kepler cards themselves are getting outdated faster than GCN 1.0 (770 < 280X/7970GHz, 680 < 7970/7950/280X, 780/OG Titan < 290, 780Ti < 290X), and that is going to matter more than DX12 tiers.

Also, look up the rest of the DX12 functionality of Fermi and Kepler cards; they are behind GCN on the DX12 feature set.

If you are going to argue that Maxwell will perform better in DX12 games than GCN 1.1/1.2, it will depend on the game, I bet. I would be more worried about the GTX970's 3.5GB of VRAM vs. the 290/290X's 4GB by the time DX12 games roll around. As I said, where NV will have an inherent advantage is in GameWorks titles and UE4, which isn't going to be because of DX12_1.

It means GCN 1.0 doesn't support the DX12 feature list and won't. You need GCN 1.1, 1.2, Maxwell v2 or a Skylake IGP for that.

Dude, aren't you tired of constant AMD bashing? I guess Fermi, Kepler and Maxwell won't support some DX12 features either since none of those support Binding Tier 3 of DX12.

zlatan, a game developer, already confirmed that GCN supports Tier 3.

[Images: resource binding tiers slide; DX12 features on Xbox One/PC/mobile; DX12 AMD vs. Nvidia feature table]


In this video, per MS, Tier 1 = DX12_1. MS also explains the tiers:
http://channel9.msdn.com/Events/GDC/GDC-2015/Advanced-DirectX12-Graphics-and-Performance

Also, you continue to discuss semantics, not reality. Right now the entire GCN 1.0 stack outperforms the entire Kepler stack it was meant to compete with, at every level. In fact, Kepler performs so poorly that the OG Titan or 780 are hardly faster than the 280X. It's possible NV might shove specific DX12 features not available on GCN into its GameWorks/UE4-engine partnered titles to purposely cripple GCN's performance, but that's expected given how they operate nowadays.

As has already been mentioned, by the time DX12 games arrive, chances are they will on average have way more advanced graphics than today's DX11 games. No one is going to be able to play those games well on a 7970 or GTX680 to start with. Pretty much the 2GB of VRAM limitation of most Kepler cards will kill them off faster anyway. By December 2015, the HD7970 will turn 4 years old. Fermi is basically a write-off anyway since a $150 R9 280 at stock outperforms the GTX580 by 50%. Once Pascal/14nm GPUs launch next year, everything today will move 1 tier down.
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Dude, aren't you tired of constant AMD bashing? I guess Fermi, Kepler and Maxwell won't support some DX12 features either since none of those support Binding Tier 3 of DX12.

zlatan, a game developer, already confirmed that GCN supports Tier 3.

Your feature slide is wrong and zlatan has been proven wrong as well.

You should read up on the thread before starting to attack someone, especially when you are so wrong.

This is how it is:
[Image: DX feature levels comparison chart]
 

nvgpu

Senior member
Sep 12, 2014
629
202
81
https://forum.beyond3d.com/threads/direct3d-feature-levels-discussion.56575/page-11#post-1847806

This picture originates from a Brazilian forum and copies a table posted on a Hungarian forum with no attributed source of information (could as well be this thread on Anandtech forum).

It's wrong on several important counts, for example: none of the GCN chips support rasterizer ordered views, conservative rasterization, or tiled resources tier 3 (i.e. "volume tiled resources" with Texture3D support), as shown by the D3D12 feature checking tool. GCN 1.1/1.2 cards support tiled resources tier 2 (with Texture2D support) and feature level 12_0; GCN 1.0 "only" supports tiled resources tier 1 and feature level 11_1.

Kepler and Maxwell-1 do support resource binding tier 2

Conservative depth, SAD4 or dedicated atomic counters are not part of the latest SDK; neither is "emulated" tier 3 for tiled resources, whatever that means.
People don't even understand any of this, but they keep posting that graph, which is totally wrong.
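
For anyone who would rather query their own card than argue over charts, the relevant caps come straight from ID3D12Device::CheckFeatureSupport. A minimal sketch (assuming the Windows 10 SDK and linking against d3d12.lib; error handling omitted):

Code:
// Minimal sketch: query the same caps discussed above, straight from the
// D3D12 runtime. Assumes the Windows 10 SDK and linking d3d12.lib.
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

int main() {
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 __uuidof(ID3D12Device),
                                 reinterpret_cast<void**>(&device))))
        return 1; // no D3D12-capable adapter/driver

    // Tiled resources, resource binding, conservative raster and ROV caps.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    // Highest supported feature level among the ones we ask about.
    const D3D_FEATURE_LEVEL levels[] = { D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
                                         D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1 };
    D3D12_FEATURE_DATA_FEATURE_LEVELS fl = {};
    fl.NumFeatureLevels = 4;
    fl.pFeatureLevelsRequested = levels;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &fl, sizeof(fl));

    std::printf("Max feature level:        0x%X\n", (unsigned)fl.MaxSupportedFeatureLevel);
    std::printf("Tiled resources tier:     %d\n", (int)opts.TiledResourcesTier);
    std::printf("Resource binding tier:    %d\n", (int)opts.ResourceBindingTier);
    std::printf("Conservative raster tier: %d\n", (int)opts.ConservativeRasterizationTier);
    std::printf("ROVs supported:           %d\n", (int)opts.ROVsSupported);

    device->Release();
    return 0;
}

Per the post above, GCN 1.0 parts should come back with feature level 11_1 and tiled resources tier 1, and GCN 1.1/1.2 with 12_0 and tier 2.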
 
Last edited:

cbrunny

Diamond Member
Oct 12, 2007
6,791
406
126
Your feature slide is wrong and zlatan has been proven wrong as well.

You should read up on the thread before starting to attack someone, especially when you are so wrong.

This is how it is:
[Image: DX feature levels comparison chart]

I think you should post that three, maybe four more times. You know what they say. 8th time is the charm.

This thread is getting ridiculous. Vote lock.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
This is how it is:
[Image: DX feature levels comparison chart]

Even if the specific features for GCN 1.0 are not right on that chart, all my other points about GPU longevity and upgrading stand.

If you are going to argue that DX12_1 will make Maxwell more future-proof than R9 290/290X cards, we won't know until DX12 games start arriving. Considering how much $ R9 290/290X users saved not getting a 980, I don't think they are sweating it cuz they'll probably just upgrade to a $350 Pascal/Arctic Islands 14nm/16nm HBM2 card late 2016 that destroys a 980 to start with in DX12 games. :sneaky:

The most important point of all: Why does the full DX12_1 feature set actually matter for older GPUs? You can't answer that at all, while Fermi and Kepler cards have been getting owned in DX11 games by GCN 1.0 and 1.1/1.2 for the last 9 months and there is no change in sight to this trend.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Even if the specific features for GCN 1.0 are not right on that chart, all my other points about GPU longevity and upgrading stand.

If you are going to argue that DX12_1 will make Maxwell more future-proof than R9 290/290X cards, we won't know until DX12 games start arriving. Considering how much $ R9 290/290X users saved not getting a 980, I don't think they are sweating it cuz they'll probably just upgrade to a $350 Pascal/Arctic Islands 14nm/16nm HBM2 card late 2016 that destroys a 980 to start with in DX12 games. :sneaky:

The most important point of all: Why does the full DX12_1 feature set actually matter for older GPUs? You can't answer that at all, while Fermi and Kepler cards are getting owned in games by GCN 1.0 and 1.1/1.2.

It's you making up claims, not me.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
It's you making up claims, not me.

Right, just as I thought, you have no argument for how these theoretical tiers/specs actually matter. You don't really care about why it matters, as long as AMD is behind in some metric.

We buy GPUs for gaming performance and once the performance isn't good enough, we upgrade, regardless of DX12 feature sets. So you pretty much have no argument because by the time DX12 games start arriving in decent quantities (2016-2017), all Fermi/Kepler cards are outdated and cards like 970/980 will be relegated strictly to mid-range status with 14nm GPUs.

Can you even name DX12_1 games coming out? That's right, there are 0 such games known right now. DX12 games? Good ones? Nothing in 2015 by all accounts. The first real AAA DX12 title coming out is slated for 2016 and it's an AMD GE title, which means it'll run well on GCN cards.
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Right, just as I thought, you have no argument how these theoretical tiers/specs actually matter. We buy GPUs for gaming performance and once the performance isn't good enough, we upgrade, regardless of DX12 feature sets. So you pretty much have no argument because by the time DX12 games start arriving in decent quantities (2016-2017), all Fermi/Kepler cards are outdated and cards like 970/980 will be relegated strictly to mid-range status with 14nm GPUs.

Can you even name DX12_1 games coming out? That's right, there are 0 such games known right now. DX12 games? You might be able to find 2 coming out this year....and one of them is a GE title, which means it'll run well on GCN cards.

What relation does this have to my posts? Could you elaborate what you actually reply to?
 

erunion

Senior member
Jan 20, 2013
765
0
0
The only thing this thread seems to confirm is how good a card the AMD 7970 is. Can't believe it was released almost 3.5 years ago and can effortlessly play any game near max at 1080p. Now compare it to the GTX 580 and have a laugh. AMD's 8800GT.

This puppy has some grunt left in it.

The 580 launched a full year before the 7970 and on a different node. The 7970's competitor was the 680.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
Oh, and I ALMOST forgot ...

In addition to AMD_Sparse_Texture, Min/Max texture filtering is also required for it to be equivalent to Tiled Resources Tier 2, BUT FEAR NOT, for Southern Islands supports that too according to its ISA documentation at Table 8.12, so it's IMPOSSIBLE for SI to NOT be capable of Tier 2 Tiled Resources ...

Min/Max texture filtering was only RECENTLY added on the green team with Maxwell v2 whereas red team had it for OVER 2 YEARS on the original GCN architecture ...

EXT_sparse_texture2 is NOT a superset of Tier 2 Tiled resources since the spec did NOT define Min/Max texture filtering support in it ...
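
For anyone who wants to check the GL side of that on their own card, a rough sketch (assuming a current GL 3.0+ context and an extension loader such as GLEW are already set up; the strings are the registry extension names, including a couple not mentioned above) that just walks the extension list:

Code:
// Rough sketch: report whether the sparse-texture and min/max filtering
// extensions are exposed. Assumes a current GL 3.0+ context already exists
// and that a loader such as GLEW has been initialised.
#include <GL/glew.h>
#include <cstdio>
#include <cstring>

static bool hasExtension(const char* name) {
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const GLubyte* ext = glGetStringi(GL_EXTENSIONS, i);
        if (ext && std::strcmp(reinterpret_cast<const char*>(ext), name) == 0)
            return true;
    }
    return false;
}

void reportSparseAndMinMaxCaps() {
    const char* wanted[] = {
        "GL_AMD_sparse_texture",        // AMD's original sparse texture extension
        "GL_ARB_sparse_texture",        // ARB sparse texture
        "GL_EXT_sparse_texture2",       // the extension discussed above
        "GL_ARB_texture_filter_minmax", // min/max texture filtering
        "GL_EXT_texture_filter_minmax",
    };
    for (const char* name : wanted)
        std::printf("%-30s %s\n", name, hasExtension(name) ? "yes" : "no");
}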

Before Haswell released, the first hardware that was capable of ROVs was GCN, which released OVER A YEAR before Haswell ...

It's the OTHER IHVs that had to play catch up, NOT AMD ...

https://forum.beyond3d.com/threads/directx-12-api-preview.55653/page-10#post-1840365

According to the post above by Max McMullen (a lead engineer on Direct3D), they had to DUMB DOWN the INITIAL specs for resource binding tier 2 for none other than, you guessed it, Nvidia ...

Thank Microsoft for giving Nvidia a feature level to themselves temporarily; otherwise they could have just been mean and told them better luck next time instead ...
 
Last edited:

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Your feature slide is wrong and zlatan has been proven wrong as well.

You should read up on the thread before starting to attack someone, especially when you are so wrong.

This is how it is:
[Image: DX feature levels comparison chart]

I already see one error: Maxwell 1 is 11.2 (according to Nvidia's own website). I can't speak to anything else, but that does undermine the overall credibility somewhat...

http://www.geforce.com/hardware/desk...specifications
 

nvgpu

Senior member
Sep 12, 2014
629
202
81

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
I already see one error: Maxwell 1 is 11.2 (according to Nvidia's own website). I can't speak to anything else, but that does undermine the overall credibility somewhat...

http://www.geforce.com/hardware/desk...specifications

Any GPU can claim DX 11.2 support since there is no feature level associated with it, but the hottest feature in it is Tier 2 Tiled Resources.

http://opengl.delphigl.de/gl_listreports.php?listreportsbyextension=GL_EXT_raster_multisample

I highly doubt the GM107 even supports feature level 11_1, since it lacks support for Target Independent Rasterization, as the link above shows ...
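
That split between "DX 11.2 support" and the actual caps is easy to see in code: the 11.2 runtime reports the tiled resources tier and min/max filtering as optional features, separate from the feature level. A minimal sketch (assuming a Windows 8.1+ SDK and linking against d3d11.lib; error handling omitted):

Code:
// Minimal sketch: DX 11.2 adds no new feature level, but the runtime still
// reports per-GPU optional caps such as the tiled resources tier and
// min/max filtering. Assumes a Windows 8.1+ SDK and linking d3d11.lib.
#include <windows.h>
#include <d3d11_2.h>
#include <cstdio>

int main() {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL fl = D3D_FEATURE_LEVEL_9_1;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION,
                                 &device, &fl, &context)))
        return 1; // no D3D11 hardware device

    D3D11_FEATURE_DATA_D3D11_OPTIONS1 opts1 = {};
    device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS1, &opts1, sizeof(opts1));

    std::printf("Feature level:        0x%X\n", (unsigned)fl);
    std::printf("Tiled resources tier: %d\n", (int)opts1.TiledResourcesTier); // 0 = not supported
    std::printf("Min/max filtering:    %d\n", (int)opts1.MinMaxFiltering);

    context->Release();
    device->Release();
    return 0;
}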
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
https://forum.beyond3d.com/threads/directx-12-api-preview.55653/page-10#post-1840525



Nice try at slandering and smearing the wrong company when an Intel employee said it was related to them; if you like to embarrass yourself, go ahead though.

It's the truth, not slander, but nice try accusing me of it ...

Who gives a damn about resource binding tier 1 ...

It's resource binding TIER 2 that I'm interested in ...

And I quote from Max McMullen - "The MSDN docs are based on an earlier version of the spec. A hardware vendor came along with a hardware limitation of 64 UAVs in Tier 2 but meeting all the other specs. We (D3D) didn't want to fork the binding tiers again and so limited all of Tier 2 to 64."

It's evident that Microsoft downgraded resource binding tier 2 for Nvidia, since they're the only ones who use it officially.

AMD on the other hand is Tier 3 so they don't give a crap about what other IHVs are doing ...
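
To put that 64-UAV limit in concrete terms, here is roughly how an engine has to cap what it binds based on the reported tier (limits per the documented D3D12 resource binding tiers; an illustrative sketch, not anyone's shipping code):

Code:
// Illustrative sketch: the UAV count an engine can rely on, given the
// resource binding tier reported by CheckFeatureSupport (see earlier post).
// Limits follow the documented D3D12 hardware binding tiers.
#include <d3d12.h>

unsigned MaxUavsAcrossAllStages(D3D12_RESOURCE_BINDING_TIER tier,
                                D3D_FEATURE_LEVEL fl) {
    switch (tier) {
    case D3D12_RESOURCE_BINDING_TIER_1:
        // 8 on feature level 11_0 hardware, 64 on 11_1 and up.
        return (fl >= D3D_FEATURE_LEVEL_11_1) ? 64u : 8u;
    case D3D12_RESOURCE_BINDING_TIER_2:
        return 64u;        // the limit the quoted post is talking about
    case D3D12_RESOURCE_BINDING_TIER_3:
    default:
        return 1000000u;   // effectively "full descriptor heap"
    }
}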