
AMD Polaris 10 engineering sample ‘pictured’

Page 6 - AnandTech Forums
Of course specific uarch paths are needed with DX12. Same reason why the fabled AOTS won't run on an Intel IGP. Because there is none.

So I assume you finally admitted that DX12 requires specific optimization for every single uarch, even down to individual SKUs. And it's a perfect avenue to please your sponsor.

Just wait till you see new uarchs and no patches, that will be a fun thing for older games. GCN 1.1 owners can rejoice over any console ports. But GCN 1.0, 1.2, 1.3, Kepler, Maxwell, Pascal, Gen8, Gen9, Gen10 owners can only pray to the dev gods for every game and see if they get lucky.
 
Of course specific uarch paths are needed with DX12. Same reason why the fabled AOTS won't run on an Intel IGP. Because there is none.

So I assume you finally admitted that DX12 requires specific optimization for every single uarch, even down to individual SKUs. And it's a perfect avenue to please your sponsor.

Just wait till you see new uarchs and no patches, that will be a fun thing for older games. GCN 1.1 owners can rejoice over any console ports. But GCN 1.0, 1.2, 1.3, Kepler, Maxwell, Pascal, Gen8, Gen9, Gen10 owners can only pray to the dev gods for every game and see if they get lucky.

Or maybe it is Intel's historically awesome drivers and games support?? 😀
 
Or maybe it is Intel's historically awesome drivers and games support?? 😀

It could never be Intel's fault... world renowned for excellent 3D graphics along with game-ready drivers and all.

It's actually going to be hilarious if DX12 developers just skip Intel's iGPU when they do uarch-specific optimizations.
 
Think about it: if DX12/Vulkan requires uarch-specific optimization, which studio will have time to bother with Intel's iGPU, really?

Intel will have to really step up if they want support in modern games.

All this time we've been focused on AMD/NV for DX12 and forgot about Intel GPUs... it's going to be a mess.
 
It could never be Intel's fault... world renowned for excellent 3D graphics along with game-ready drivers and all.

It's actually going to be hilarious if DX12 developers just skip Intel's iGPU when they do uarch-specific optimizations.
And this is where a deal between AMD and Intel comes in... two players with unified hardware, and most of the software companies backing them, would be very good.
 
It could never be Intel's fault... world renowned for excellent 3D graphics along with game-ready drivers and all.

It's actually going to be hilarious if DX12 developers just skip Intel's iGPU when they do uarch-specific optimizations.

Well, maybe that's why they want AMD's IP?
 
No 🙂 They have to license some patents to make their iGPUs. Currently from NV; that's expiring, so they're talking to both companies for obvious reasons. Given AMD's position, there must be good chances of getting it cheaper from them. It won't suddenly make their IGPs copies of AMD's!

As for who should support Intel iGPUs: honestly, any studio wanting to sell stuff. Intel have a fair-sized chunk of the market now, and it'll grow going forwards - they're starting to release bigger and bigger iGPUs.

Of course they also revise their iGPU architecture on a roughly yearly basis, so doing micro-optimisations for each one would be a bit insane. Especially as your target architecture is probably 2-3 years in the future for a graphically heavy game.

Basically any DX12-exclusive game is going to need a DX11-equivalent abstraction layer option in it. Should be possible, I guess.
 
Of course specific uarch paths are needed with DX12. Same reason why the fabled AOTS won't run on an Intel IGP. Because there is none.

So I assume you finally admitted that DX12 requires specific optimization for every single uarch, even down to individual SKUs. And it's a perfect avenue to please your sponsor.
I've finally admitted? Err what? When did I state the opposite?

You don't need specific paths to run DX12/Vulkan. You need specific paths to optimize for specific architectures in order to obtain maximum performance and benefits.
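The per-architecture dispatch being described can be sketched roughly like this. A toy illustration only, not any engine's real code: the PCI vendor IDs are the real ones, but the path names and the `pick_render_path` helper are made up, and a real D3D12/Vulkan app would read the ID from `DXGI_ADAPTER_DESC` or `VkPhysicalDeviceProperties` rather than taking it as a parameter.

```python
# Sketch: selecting an optimized code path at startup, keyed on the GPU's
# PCI vendor ID. The IDs below are the real ones; the path names are
# hypothetical placeholders for vendor-tuned renderer variants.
VENDOR_AMD = 0x1002
VENDOR_NVIDIA = 0x10DE
VENDOR_INTEL = 0x8086

def pick_render_path(vendor_id: int) -> str:
    """Return the name of a tuned path, falling back to a generic one."""
    paths = {
        VENDOR_AMD: "gcn_async_compute",   # hypothetical tuned path
        VENDOR_NVIDIA: "maxwell_tiled",    # hypothetical tuned path
        VENDOR_INTEL: "generic_fallback",  # no tuned path: the thread's worry
    }
    return paths.get(vendor_id, "generic_fallback")

print(pick_render_path(VENDOR_AMD))    # gcn_async_compute
print(pick_render_path(VENDOR_INTEL))  # generic_fallback
```

The point of the fallback entry is the one being argued in this thread: any architecture nobody writes a tuned path for simply runs the generic one, correct but not fast.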
 
You don't need specific paths to run DX12/Vulkan. You need specific paths to optimize for specific architectures in order to obtain maximum performance and benefits.

Yup, if you run a standard path and don't take uarch differences into account, you're not going to extract peak performance.

In DX11, AMD/NV can do the optimization that devs may skip or fail at; in DX12, it's on the developers. Not for everyone, only for those interested in peak performance.

Basically, if Intel wants their GPUs not to be dead weight in DX12, they will have to splurge on money/engineers and run their own developer relations program. Because let's face it, if someone buys an AAA game and it runs like crap on an Intel iGPU, they kind of expect that, so it's not going to cause any outrage. Really, try going on the Steam forum of any major title and complaining that your Intel GPU can't run it well! lol
 
Based on Polaris 10's max die size being 210 mm², perhaps AMD has implemented path sharing, 2 stacks per 1024-bit channel, to expand HBM1 capacity to 8GB?
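As a sanity check of the arithmetic behind that speculation, a quick sketch. The baseline figures are shipping HBM1 as used on Fiji (4-Hi stacks of 2 Gb dice, 1 GB per stack, one 1024-bit interface per stack); the "2 stacks per channel" layout is the poster's hypothesis, not anything AMD has described.

```python
# Shipping HBM1: each stack is 4-Hi of 2 Gb dice = 1 GB, on a 1024-bit interface.
GB_PER_4HI_HBM1_STACK = 1

def total_capacity_gb(channels: int, stacks_per_channel: int) -> int:
    """Total HBM1 capacity for a given channel count and stacks per channel."""
    return channels * stacks_per_channel * GB_PER_4HI_HBM1_STACK

# Fiji / Fury X: four 1024-bit channels, one stack each -> 4 GB.
print(total_capacity_gb(4, 1))
# The speculated layout: two stacks sharing each 1024-bit channel -> 8 GB.
print(total_capacity_gb(4, 2))
```

So the numbers do work out to 8 GB, which is presumably where the speculation comes from; whether the interposer and memory controller could actually share a channel that way is a separate question.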
 
Yup, if you run a standard path and don't take uarch differences into account, you're not going to extract peak performance.

In DX11, AMD/NV can do the optimization that devs may skip or fail at; in DX12, it's on the developers. Not for everyone, only for those interested in peak performance.

Basically, if Intel wants their GPUs not to be dead weight in DX12, they will have to splurge on money/engineers and run their own developer relations program. Because let's face it, if someone buys an AAA game and it runs like crap on an Intel iGPU, they kind of expect that, so it's not going to cause any outrage. Really, try going on the Steam forum of any major title and complaining that your Intel GPU can't run it well! lol

AMD has had a problem with developer relations in the past. Most notable was with Project CARS. I think DirectX 12 might highlight this issue for them further. While it's clear they are investing heavily in Oxide, other studios appear to be ignored by them.
 
Based on Polaris 10's max die size being 210 mm², perhaps AMD has implemented path sharing, 2 stacks per 1024-bit channel, to expand HBM1 capacity to 8GB?

Polaris does *NOT* have HBM. Both 10 and 11 are GDDR5, AMD has already stated this. Vega will be HBM2.
 
Polaris does *NOT* have HBM. Both 10 and 11 are GDDR5, AMD has already stated this. Vega will be HBM2.

Link to an explicit quote? Or is this based on no memory type being listed for Polaris, while HBM2 is listed for Vega, on the GDC roadmap slide?

If it's that compact with GDDR5, we can't expect much OC robustness.
 
Polaris does *NOT* have HBM. Both 10 and 11 are GDDR5, AMD has already stated this. Vega will be HBM2.

No, they didn't, they danced around the question without giving a proper answer. Nothing regarding Polaris is confirmed so far except where it's made and a release window was given.
 
No, they didn't, they danced around the question without giving a proper answer. Nothing regarding Polaris is confirmed so far except where it's made and a release window was given.

Polaris 11, the GPU that was compared with GTX 950 levels of performance, is gonna get HBM. Right. Go back to sleep.
 
Polaris 11, the GPU that was compared with GTX 950 levels of performance, is gonna get HBM. Right. Go back to sleep.

When did I say that? I simply said that there was no official word from AMD on what memory we are going to see on Polaris. It could be HBM1 (less likely) or GDDR5 (more likely), but nothing official yet.

So yea, I just corrected your statement which claimed there was word from AMD, which just isn't true.
 
When did I say that? I simply said that there was no official word from AMD on what memory we are going to see on Polaris. It could be HBM1 (less likely) or GDDR5 (more likely), but nothing official yet.

So yea, I just corrected your statement which claimed there was word from AMD, which just isn't true.

Because they're targeting mid-2016 for release; ergo, it can only be GDDR5. And forget HBM1, it's too expensive and was only used as a test bed for Fury. HBM2 is where the fun will be, that is, in 2017 for high-end and 2018 for mainstream.
 
Because they're targeting mid-2016 for release; ergo, it can only be GDDR5. And forget HBM1, it's too expensive and was only used as a test bed for Fury. HBM2 is where the fun will be, that is, in 2017 for high-end and 2018 for mainstream.

And again, I don't disagree with that statement at all. However, you were wrong in saying that there is official info on Polaris memory, which there isn't.
 
I would like to ask: does anyone know if Micron, Hynix, Samsung or anyone else has developed 8-Hi stacks of HBM1?

[Image: SK Hynix GTC 2015 slide, gtc2015-skhynix-23-900x730.jpg]

As you can see: tech shrink.
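For completeness, taller stacks are the other route to 8 GB that this question is probing. A quick sketch of the arithmetic, assuming the same 2 Gb HBM1 dice as shipping 4-Hi stacks (whether any vendor actually built 8-Hi HBM1 is exactly the open question above):

```python
# Capacity of one HBM1 stack: dies per stack times die density, in Gbit,
# divided by 8 to convert bits to bytes.
def stack_capacity_gb(dies: int, die_gbit: int = 2) -> float:
    """Capacity in GB of an HBM stack built from `dies` dice of `die_gbit` Gbit."""
    return dies * die_gbit / 8

print(stack_capacity_gb(4))  # shipping 4-Hi HBM1: 1.0 GB per stack
print(stack_capacity_gb(8))  # hypothetical 8-Hi: 2.0 GB per stack
```

Four hypothetical 8-Hi stacks would give 8 GB on a standard four-channel layout, without the channel-sharing scheme speculated about earlier in the thread.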
 