therealnickdanger
I'm kind of amazed that rebranding worked as well as it did.
"I would never buy a Toyota, I'm a Lexus kind of person."
I'm kind of amazed that rebranding worked as well as it did.
The funniest thing is that the 290(X) also seemed to have its image improve with the 390(X) launch. A big public batch of reviews that opened by saying the 390 is basically the same as the 290, followed by a heap of benchmarks showing the same sort of performance that had made the 290 the bargain of the half-decade, did a lot for its image.
This whole AMA is really a bummer after all this hype ....
told you so!
This whole AMA is really a bummer after all this hype ....
To be honest, they always are.
AMD : Okay guys, we're going to be answering some of those burning questions of yours...
<Insert Polaris or Zen question here>
AMD : Sorry guys, I can't answer questions about Polaris or Zen, you know, the only two things anyone cares about. I like keeping my job. Toodles!
Regarding DX11 multithreading:
1) Because DCLs are useless. They've been inappropriately positioned as a panacea for DX11's modest multi-threading capabilities, but most journalists and users exploring the topic are not familiar with why DCLs are so broken or limited.
Let's say you have a bunch of command lists on each CPU core in DX11. You have no idea when each of these command lists will be submitted to the GPU (residency not yet known). But you need to patch each of these lists with GPU addresses before submitting them to the graphics card. So the one single CPU core in DX11 that's performing all of your immediate work with the GPU must stop what it's doing and spend time crawling through the DCLs on the other cores. It's a huge hit to performance after more than a few minutes of runtime, though DCLs are very lovely at arbitrarily boosting benchmark scores on tests that run for ~30 seconds.
That last part was new to me.
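For anyone who hasn't seen the pattern being criticized, here's a minimal sketch using the stock D3D11 deferred-context API. The `device`/`immediateContext` parameters and the draw-call placeholders are illustrative, not from the AMA; error handling is omitted:

```cpp
#include <d3d11.h>

// Sketch of the DX11 deferred-command-list (DCL) pattern. In a real renderer
// the recording would happen on worker threads, one deferred context per core.
void RecordAndSubmit(ID3D11Device* device, ID3D11DeviceContext* immediateContext)
{
    // Each worker thread records into its own deferred context.
    ID3D11DeviceContext* deferredContext = nullptr;
    device->CreateDeferredContext(0, &deferredContext);

    // Record state and draw calls without touching the GPU.
    deferredContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    // ... bind shaders / buffers, issue Draw*() calls ...

    // Close the recording into a command list.
    ID3D11CommandList* commandList = nullptr;
    deferredContext->FinishCommandList(FALSE, &commandList);

    // Only the single immediate context can submit the list; this is the point
    // where the driver must crawl the DCL and patch in GPU addresses.
    immediateContext->ExecuteCommandList(commandList, FALSE);

    commandList->Release();
    deferredContext->Release();
}
```

However many cores record in parallel, every command list still funnels through that one ExecuteCommandList call on the single immediate context, which is the address-patching bottleneck the answer describes.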
Regarding drivers:
Sort of. We're clearly interested in doing a major feature-rich driver every year, and that is on-track for 2016 as well.
So the Radeon Settings / CCC dichotomy is there to stay for a while longer, ah well.
In regards to TressFX (and PureHair in Rise of the Tomb Raider, ~10% perf hit on all GPUs):
NVIDIA had success with TressFX because we designed the effect to run well on any GPU. It's really that simple. They were successful because we let them be. We believe that's how it should be done for gamers: improve performance for yourself, don't cripple performance for the other guy. The lesson we learned is that actual customers see value in that approach.
I wonder if they see enough value to buy an AMD GPU over NV?
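For context, the enum below looks like the device-context options from Microsoft's MultithreadedRendering11 SDK sample, which exposes exactly the single-threaded and multi-threaded DCL configurations the answer above is criticizing: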
```cpp
enum DEVICECONTEXT_TYPE
{
    DEVICECONTEXT_IMMEDIATE,              // Traditional rendering, one thread, immediate device context
    DEVICECONTEXT_ST_DEFERRED_PER_SCENE,  // One thread, multiple deferred device contexts, one per scene
    DEVICECONTEXT_MT_DEFERRED_PER_SCENE,  // Multiple threads, one per scene, each with one deferred device context
    DEVICECONTEXT_ST_DEFERRED_PER_CHUNK,  // One thread, multiple deferred device contexts, one per physical processor
    DEVICECONTEXT_MT_DEFERRED_PER_CHUNK,  // Multiple threads, one per physical processor, each with one deferred device context
};
```
In regards to TressFX (and PureHair in Rise of the Tomb Raider, ~10% perf hit on all GPUs).
I wonder if they see enough value to buy an AMD GPU over NV?
I've seen people on NeoGAF state they buy Nvidia specifically because Nvidia sabotages AMD performance and they want full compatibility. Never underestimate the consumer's willingness to side with unethical practices.
The product schedule for Fiji Gemini had initially been aligned with consumer HMD availability, which had been scheduled for Q415 back in June. Due to some delays in overall VR ecosystem readiness, HMDs are now expected to be available to consumers by early Q216. To ensure the optimal VR experience, we're adjusting the Fiji Gemini launch schedule to better align with the market.
Working samples of Fiji Gemini have shipped to a variety of B2B customers in Q415, and initial customer reaction has been very positive.
In regards to TressFX (and PureHair in Rise of the Tomb Raider, ~10% perf hit on all GPUs).
I wonder if they see enough value to buy an AMD GPU over NV?
Barring other incentives, I'll probably be buying an nVidia GPU despite my pretty much exclusive track record of AMD GPUs since Geforce 2. I've spent close to 5 figures on AMD GPUs over the years (obviously not just for gaming), but if for gaming an nVidia GPU gives 10 % lower FPS on average at the same price point but doesn't suffer random -50% GW related drops, that's a pretty compelling argument.
if for gaming an nVidia GPU gives 10 % lower FPS on average at the same price point but doesn't suffer random -50% GW related drops, that's a pretty compelling argument.
What do you reckon your upgrade cycle will be, as you switch over? To put things into perspective, would you be using Kepler cards right now?
Barring other incentives, I'll probably be buying an nVidia GPU despite my pretty much exclusive track record of AMD GPUs since Geforce 2. I've spent close to 5 figures on AMD GPUs over the years (obviously not just for gaming), but if for gaming an nVidia GPU gives 10 % lower FPS on average at the same price point but doesn't suffer random -50% GW related drops, that's a pretty compelling argument.
We haven't seen anything related to GW on DX12, and if the locked-down Nvidia demo is a small example of what is coming on Vulkan, well, things are going to be interesting :sneaky:
No planned official support for GCN 1.0 in the AMDGPU driver (Linux).
Open source work for it has already started.
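For anyone wanting to experiment once that work lands in a kernel: a hedged sketch of what the opt-in looks like, assuming the amdgpu.si_support / radeon.si_support module parameters that eventually went upstream (not something AMD committed to in this thread):

```
# Kernel boot parameters (e.g. appended to GRUB_CMDLINE_LINUX_DEFAULT):
# tell amdgpu to claim GCN 1.0 / Southern Islands cards...
amdgpu.si_support=1
# ...and tell the old radeon driver to leave them alone
radeon.si_support=0
```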
