el etro
Golden Member
^^ Enough with the FUD already :thumbsdown: just stop.
This.
There's so much misunderstanding and derailing here.
Zlatan, regarding the GCN revisions: are the changes small enough that optimizations carry over across the whole graphics architecture, or do the revisions from 1.0 to 1.1, and now to 1.2, require newly optimized code?
I ask because AMD is in the unique position of having all three iterations of GCN in the same lineup, and such diversity could hamper software optimization (both for the driver team and for game engine developers). Tonga's Mantle performance comes to mind; it seemed pretty inconsistent compared with the 1.1 or 1.0 offerings.
Don't think so.
If you ask me I think Maxwell and GCN are Tier 2.
I want information... and there is nothing here.
Software/Emulated will always suck horribly compared to hardware.
Where is the Maxwell SRV information?
This thread is so confusing, with obvious AMD warriors.
Clear information or go home.
Why do all the DX12 tests use a GTX 980 instead of an AMD R9?
Fermi and Kepler support DX 11.1 (Tier 1 - DirectX 12)
Maxwell supports DX 11.3 (Tier 3 - DirectX 12)
AMD GCN supports DX 11.2 (Tier 2 - DirectX 12)
Don't think so.
If you ask me I think Maxwell and GCN are Tier 2.

So which is it?
Considering what you have explained, what would be the swiftest approach for delivering optimized code for each uarch that will be DX12-capable, in the new wave of game engines/games?
Sure, it will be easier if I know how to write optimized code for each IP.

Would this make it a must for GPU vendors to be quick with lifting NDAs and releasing the tech specs of their new GPUs (so game devs and MS's API division can quickly adjust to the new SKUs)? From what you have said, GCN 1.2 and Maxwell 2 are still under wraps in that respect (and, ironically, they are the most advanced and probably the most capable of supporting most of the DX12 feature set).
Before you leave again, what are your thoughts on the feasibility of asymmetric CrossFire, a la 290X & 290 or 290X & APU?
AMD had a demo of GrassFX that showed grass physics running on an APU while the rendering was done on a dGPU.
Does D3D12 make doing that easier? And why do you think devs would or wouldn't want to implement such a system?

GrassFX is not an async CrossFire tech. It is simply a GPGPU concept where the iGPU can do the simulation. It is possible on D3D12, but OpenCL is more suitable for this kind of workload.
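For what it's worth, D3D12's explicit adapter enumeration is what would make that kind of split possible: you can create one device per adapter and push the simulation onto a compute queue on the iGPU while the dGPU renders. A rough, hypothetical sketch only (adapter ordering, error handling, and the cross-adapter copy are all simplified):

```cpp
// Rough sketch: one D3D12 device per adapter, with a compute-only queue on the
// iGPU for a physics/grass simulation pass while the dGPU keeps rendering.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    // Collect every adapter in the system (dGPU, iGPU, software rasterizer...).
    std::vector<ComPtr<IDXGIAdapter1>> adapters;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
        adapters.push_back(adapter);
    if (adapters.size() < 2)
        return 1; // no second GPU to offload the simulation to

    // Assumption for this sketch: adapters[0] is the dGPU, adapters[1] the iGPU.
    ComPtr<ID3D12Device> renderDevice, simDevice;
    D3D12CreateDevice(adapters[0].Get(), D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&renderDevice));
    D3D12CreateDevice(adapters[1].Get(), D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&simDevice));

    // Compute-only queue on the iGPU; the grass simulation would be recorded
    // into compute command lists and submitted here.
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> simQueue;
    simDevice->CreateCommandQueue(&desc, IID_PPV_ARGS(&simQueue));

    // The simulated data would then have to be moved to the dGPU (staging copy
    // through system memory) before the render device consumes it.
    return 0;
}
```

Whether devs bother is another question; the extra latency of shipping results between adapters every frame is exactly why Zlatan's "OpenCL is more suitable" caveat matters.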
So which is it?
Can't prove DX12 is Mantle ✓
Can't prove glNext is Mantle ✓
Can't prove Maxwell tier 2 ✓
Can't prove GCN tier 3 ✓
Get the tempers in here under control, now. Either debate respectfully, or agree to disagree and move on.
-- stahlhart
Many major engines will have DX12 support ready around the time DX12 itself releases, so they will be able to ship DX12 versions of their games at that point. You can expect to see them by the end of this year.
I would love for this to be true, but I am not aware of a single DX12 game announced so far, and we are just 10 months away from the end of 2015. I am not buying your theory that 10 months from now we will see a lot of AAA DX12 games. Buying today under the assumption that a 960/970/980 will run DX12 games better than a 290/290X sounds like false hope. All of those cards will be too slow.
From a practical point of view DX12 support for every current card out now is irrelevant because the best GPUs we have are just mid-range. Next gen DX12 games will mop the floor with them. Today a 970 is slower than a 290X at 1440P and above, and it has the 3.5GB VRAM bottleneck holding it back, so it's unlikely to outlast a 290X. If, as you say, next-gen game engines will be DX12, those engines will have even more advanced particles, textures, polygons, tessellation, shadows, etc. If these effects increase in complexity by 50-100%, those games will literally mop the floor with today's cards. Not only that, but the 980 itself is not much faster than a 290X/970 yet costs a whopping $220-300 more. That means a 290/290X/970 owner could just sell his card 12 months from now and put the $220-300 savings plus the resale value towards a next-gen $500 DX12 card.
We also know from GPU history that a card with a higher DX designation was never fast enough to actually play next-gen DX games of that designation well. While DX12 support sounds nice on paper today, it's not practical for those of us who follow the GPU market closely. By the time a good DX12 game actually launches, a $350 GPU will beat a GTX 980. This is my opinion, but what I stated has held true for every single DX generation since MS started hyping them up.
In less than two quarters we should have GM200/390X, which will officially cement the 290X/980 as mid-range cards. So really I don't see what the big deal is about "full" or "partial" DX12 support. Besides, no GPU today can claim full DX12 support unless MS themselves confirm it and what it actually means for games. Until then, everything is just marketing from NV/AMD.
The root cause is actually the consoles themselves: right now console games are DX9-ish stuff with little to no optimization for the hardware. Once that optimization happens we'll see much better games ported to the PC side.
Right now we are seeing the wave of unoptimized console ports. Devs just got a ton of power with the upgrade to the new consoles and are releasing tons of unoptimized ports. Hardware demands are higher than ever, even when normalized to IQ; new games require an inordinate amount of GPU power relative to the image quality they produce.
Good job showing your lack of knowledge ...
If you'd taken a look at the programmer's guide, on page 6 you can clearly see that the resource descriptors are held in the SGPRs and are manually fetched by the scalar ALUs ...
This means GCN does not rely on constant registers and CPU assistance for resource binding; that is why it is heralded as truly "bindless": only a pointer needs to be passed to reference a large number of resources, and it's supported for every resource type too ...
GCN is clearly Tier 3 no matter how you spin it ...
What fib are you going to tell us next? :twisted:
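Since the tiers keep getting argued from spec sheets, note that the resource binding tier is something the API itself reports per adapter. A minimal sketch, assuming the final D3D12 SDK keeps the D3D12_FEATURE_D3D12_OPTIONS caps query as documented in the preview headers; whatever tier the driver reports is what settles this for a given card:

```cpp
// Minimal sketch: ask D3D12 which resource binding tier the installed
// GPU/driver actually reports, instead of inferring it from marketing slides.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    // nullptr = default adapter; pass a specific IDXGIAdapter* to test one card.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    switch (opts.ResourceBindingTier)
    {
    case D3D12_RESOURCE_BINDING_TIER_1: std::puts("Resource binding: Tier 1"); break;
    case D3D12_RESOURCE_BINDING_TIER_2: std::puts("Resource binding: Tier 2"); break;
    case D3D12_RESOURCE_BINDING_TIER_3: std::puts("Resource binding: Tier 3"); break;
    }
    return 0;
}
```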
...
From a practical point of view DX12 support for every current card out now is irrelevant because the best GPUs we have are just mid-range. Next gen DX12 games will mop the floor with them.
Has Microsoft finalised the Tiers, and exactly how much of a tier must you support to fall under that Tier?
In my view this thread is just adding to the confusion and misinformation; while well intentioned, it's not helping much.
Has Microsoft finalised the Tiers, and exactly how much of a tier must you support to fall under that Tier?
In my view this thread is just adding to the confusion and misinformation; while well intentioned, it's not helping much.
The reason you might not find this thread helpful is that some people in here kept the deception going. The only people in here who might actually know something are zlatan, NeoLuxembourg, and, markedly, me ...
Hahaha, oh god, this made my day!
Mindtaker said:
Fermi and Kepler support DX 11.1 (Tier 1 - DirectX 12)
Maxwell supports DX 11.3 (Tier 3 - DirectX 12)
AMD GCN supports DX 11.2 (Tier 2 - DirectX 12)

Mindtaker said:
Don't think so.
If you ask me I think Maxwell and GCN are Tier 2.