Ashes of The Singularity DX11 vs DX12 - R9 Nano


ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
So in short for DX11 vs DX12:

* DX12 favors multi core CPUs
* DX12 improves performance more for AMD CPUs than Intel CPUs
* DX12 will improve CPU efficiency in general, making it less likely that CPU performance will become the bottleneck
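The last bullet is essentially a claim about where draw-call submission cost goes: under DX11 the driver funnels submission through one render thread, while DX12 lets the engine record command lists on several cores. A toy back-of-envelope model of that claim (the numbers and the perfect scaling are invented for illustration; this is not real API code):

```python
def cpu_frame_time_ms(draw_calls, cost_per_call_us, submit_threads):
    # Total CPU-side submission cost for one frame, divided across the
    # threads that can record command lists in parallel (idealized:
    # perfect scaling, no synchronization overhead).
    return draw_calls * cost_per_call_us / 1000 / submit_threads

dx11 = cpu_frame_time_ms(10_000, 5, submit_threads=1)  # one render thread
dx12 = cpu_frame_time_ms(10_000, 5, submit_threads=4)  # four cores recording
print(dx11, dx12)  # 50.0 vs 12.5 ms of CPU time per frame
```

Under that idealized model, extra cores directly shrink CPU-side frame time, which is why slower many-core CPUs are the ones expected to gain the most.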

The AOTS benchmark shows the opposite.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,143
136
The AOTS benchmark shows the opposite.

Repeating lies is easier than accepting those results I guess. And the list of miracles that were supposed to turn Vishera into a gaming powerhouse (and failed) keeps growing.
 

dark zero

Platinum Member
Jun 2, 2015
2,655
140
106
I am quite aware of what an extrapolation is. All this hype about DX12, both for how it is magically going to transform AMD CPUs and GPUs, is not really extrapolation, more like wishful thinking.
Remember that AMD has 8 compute cores that were working as a GPU... Maybe DX12 is using those cores as coprocessors, similar to a Xeon Phi, with lower performance and only to help the CPU cores.
They could also use HSA with DX12 to complement it.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
The results in the OP, in turn, show something different from the AOTS benchmark. But I think we need more DX12 benchmarks to know for sure.

A YouTube video from a hardcore AMD shill in the Red Team Plus program versus a professional review site. I sure hope that wasn't the entire documentation for the comment you made up there.

Take a wild guess at what Fable shows.

Repeating lies is easier than accepting those results I guess. And the list of miracles that were supposed to turn Vishera into a gaming powerhouse (and failed) keeps growing.

Yep. We still wait for all the miracles to unfold. Until then we have to make do with a few compression tests when it suits, and such ;)
 

MiddleOfTheRoad

Golden Member
Aug 6, 2014
1,123
5
0
A YouTube video from a hardcore AMD shill in the Red Team Plus program versus a professional review site. I sure hope that wasn't the entire documentation for the comment you made up there.

Take a wild guess at what Fable shows.

Does the phrase "The pot calling the kettle black" mean anything to you?

Hint: a post from a hardcore Intel Shill lecturing people about bias when he himself is probably the worst offender of cherrypicking benchmarks.... to serve his narrative.

Just saying dude.... When the most biased person on this forum starts complaining about bias.... It really doesn't get any better than that.

I haven't had a lot of experience with DirectX 12 -- but the more efficient code is probably a similar experience to what I saw going from DirectX 11 to Linux, where an FX-8350 has trouble keeping pace with most i3s under DirectX 11 (at least for gaming), yet can run with an i7-3770K under Ubuntu. I'd guess the better-optimized code in DirectX 12 is simply unlocking the underutilized capability of the Bulldozer-derived architecture (as Linux has done for years). Had DirectX 12 arrived in 2012 instead, AMD's current standing might have been dramatically different.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
I'm a bit behind on my ironing. The pile of wrinkled shirts keeps getting bigger.
 

Ventanni

Golden Member
Jul 25, 2011
1,432
142
106
Remember that AMD has 8 compute cores that were working as a GPU... Maybe DX12 is using those cores as coprocessors, similar to a Xeon Phi, with lower performance and only to help the CPU cores.
They could also use HSA with DX12 to complement it.

I don't really see compute cores being used to process driver overhead.
 

TheELF

Diamond Member
Dec 22, 2012
4,029
753
126
I haven't had a lot of experience with DirectX 12 -- but the more efficient code is probably a similar experience to what I saw going from DirectX 11 to Linux, where an FX-8350 has trouble keeping pace with most i3s under DirectX 11 (at least for gaming), yet can run with an i7-3770K under Ubuntu.
Oh yeah? In which games?

And secondly, how did you reach the conclusion that the code is more efficient? Linux has a smaller user base, a smaller programmer base, a smaller shared-library base, a smaller everything. How does all of that add up to Linux having more efficient code?
Maybe Linux just can't cope with modern wide cores... just like h.264 or 7zip on Windows.
 
Aug 11, 2008
10,451
642
126
We have been continually hearing since Bulldozer was announced how the "next great thing" was going to transform it. First it was Win 8 and the new scheduler. How did that work out? A few percent gain at best, and Intel CPUs gained as much or more than AMD. Then it was: oh, just wait till the software catches up. What happened? Quad-core Skylake has narrowed the gap, and the i7 has even passed FX in the few benchmarks where AMD had a lead at one point. And then it was: just wait for the console games. They are based on 8 cores, and AMD will suddenly catch up. Still waiting. FX is less awful in a few games than it used to be, but still trails badly. Now the new magic potion for AMD is DX12. Well, let's see if that turns out any better than all the previous panaceas.
 

dark zero

Platinum Member
Jun 2, 2015
2,655
140
106
Well... AMD is screwed by default, so we don't worry about them.
If they try to improve, they are likely to fail.

Only Intel is improving in everything, to the point that in iGPUs at least, Skylake's Iris Pro might catch the GTX 950 (since Broadwell's Iris Pro managed to reach R7 270X levels) way sooner than expected.

Maybe the console makers will give up, since they haven't launched any godlike game while the PC has, with insane sales...

That could also be used to unify all the current tech and use only the best of each. And AMD, and likely NVIDIA, are out of that equation.
 

Raftina

Member
Jun 25, 2015
39
0
0
I'd guess the better optimized code in DirectX 12 is simply unlocking the underutilized capability of the Bulldozer derived architecture (as Linux has done for years).
A Bulldozer CPU needs double the core count and 50% higher power draw to get the same performance as an Ivy Bridge based CPU. That is not efficient. That is horribly inefficient. You can see the inefficiency reflected in market shares:

http://www.forbes.com/sites/rogerkay/2014/11/25/intel-and-amd-the-juggernaut-vs-the-squid/

Desktop
Intel: 82%
AMD: 18%

Laptop
Intel: 90%
AMD: 10%

x86 server
Intel: 98%
AMD: 2%

AMD's market share in the x86 server space--a Linux dominated market (http://www.theinquirer.net/inquirer...cloud-market-share-at-expense-of-windows)--is about an order of magnitude lower than its desktop market share precisely because the server market is the one that cares the most about efficiency, and Bulldozer failed horribly at that.
 

dark zero

Platinum Member
Jun 2, 2015
2,655
140
106
A Bulldozer CPU needs double the core count and 50% higher power draw to get the same performance as an Ivy Bridge based CPU. That is not efficient. That is horribly inefficient. You can see the inefficiency reflected in market shares:

http://www.forbes.com/sites/rogerkay/2014/11/25/intel-and-amd-the-juggernaut-vs-the-squid/

Desktop
Intel: 82%
AMD: 18%

Laptop
Intel: 90%
AMD: 10%

x86 server
Intel: 98%
AMD: 2%

AMD's market share in the x86 server space--a Linux dominated market (http://www.theinquirer.net/inquirer...cloud-market-share-at-expense-of-windows)--is about an order of magnitude lower than its desktop market share precisely because the server market is the one that cares the most about efficiency, and Bulldozer failed horribly at that.
Err... isn't that TOO optimistic?
10% in laptops?
And 18% in desktops?

In dGPUs they are at 9% now!

So... if we update the numbers... we might see AMD practically disappear from this planet for good.

Definitely, Zen won't change the game; rather, it means x86 has met its limits. Intel may be thinking of delivering something really brutal along with MS and Linux in order to change the game in their favor... ARM crushes and dominates the phone market and took a good part of the lifespan of the tablet market.
 
Aug 11, 2008
10,451
642
126
Pooh-pooh the results all you want. If we see enough real-world AMD users benefiting from DX12 like that, hey, good for AMD users. If we don't, well, too bad guys.

Also, I would like to point out that the graphic you (ShintaiDK) posted showed no minimum fps values and does not include a Kaveri APU.

True, but if DX12 utilizes "moar cores" so well, do you really think a 4 core Kaveri APU, based on the same basic architecture as Vishera, will be faster than an 8 core FX8350?
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,583
731
126
We have been continually hearing since Bulldozer was announced how the "next great thing" was going to transform it. First it was Win 8 and the new scheduler. How did that work out? A few percent gain at best, and Intel CPUs gained as much or more than AMD. Then it was: oh, just wait till the software catches up. What happened? Quad-core Skylake has narrowed the gap, and the i7 has even passed FX in the few benchmarks where AMD had a lead at one point. And then it was: just wait for the console games. They are based on 8 cores, and AMD will suddenly catch up. Still waiting. FX is less awful in a few games than it used to be, but still trails badly. Now the new magic potion for AMD is DX12. Well, let's see if that turns out any better than all the previous panaceas.

The problem is that the 4-module / 8-thread Bulldozer CPUs like the FX8350 were released a long time ago. Now people compare those old AMD Bulldozer CPUs with the latest Intel CPUs, which is not fair.

It would be more interesting to compare the older AMD Bulldozer CPUs with the older Intel CPUs that were available when those AMD CPUs were released, but do that comparison using DX12 and the latest SW and games. Then we would get a better picture of how competitive AMD's Bulldozer architecture would have been if SW had been available at its release to make use of its full potential.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
FX is less awful in a few games than it used to be, but still trails badly.

FX performance/$ in the latest games is exceptional, and with DX12 it will be even better.

2015 games

[Image links: gamegpu.ru per-CPU benchmark charts for The Witcher 3: Wild Hunt - Hearts of Stone, Tom Clancy's Rainbow Six Siege beta, Call of Duty: Black Ops III beta, Total War: Arena, and Armored Warfare]
 

DrMrLordX

Lifer
Apr 27, 2000
23,197
13,286
136
True, but if DX12 utilizes "moar cores" so well, do you really think a 4 core Kaveri APU, based on the same basic architecture as Vishera, will be faster than an 8 core FX8350?

The OP linked a video showing the 7870k vs. the 4690k. Two 4t processors, head to head. Naturally the 4690k wins in both DX11 and DX12, but the AMD chip gains more ground going to DX12. ShintaiDK responds, as a refutation of the video author's credibility, with a benchmark suite that has no minimum framerates, a different GPU, and neither of the CPUs used in the linked vid.

Sorry, the data's mostly unrelated. There's no control for valid comparison.

That video might still be complete crap, but the only way to refute it is to show that the methods used were invalid, or to reproduce the same comparison and show statistically-significant differences in results. I don't know if DX12 really favors 6-8 core CPUs over 2-4 core CPUs, and judging by the results we see on i3s I'm beginning to think that that may not necessarily be the case. If there's anything that's gonna make a 7870k beat an 8370 in Ashes of the Singularity, it's going to be superior architecture and higher clockspeed. AotS seems to do just fine on 2c/4t and 2m/4t chips.
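Reproducing the comparison and checking for statistically significant differences could be as simple as rerunning each API several times and computing a two-sample t statistic. A minimal sketch of that check (the FPS figures are invented for illustration; a |t| far above ~2 at this sample size means the gap is very unlikely to be run-to-run noise):

```python
import statistics as st
from math import sqrt

def welch_t(a, b):
    # Welch's two-sample t statistic: difference of the sample means
    # divided by the combined standard error of the two samples.
    se = sqrt(st.variance(a) / len(a) + st.variance(b) / len(b))
    return (st.mean(a) - st.mean(b)) / se

# Hypothetical average FPS over five runs of the same scene, same hardware:
dx11_runs = [41.2, 40.8, 41.5, 40.9, 41.1]
dx12_runs = [46.3, 46.9, 45.8, 46.5, 46.1]
print(welch_t(dx12_runs, dx11_runs))  # ≈ 23.5: far outside noise
```

Most single-source benchmark videos report one run per configuration, which is exactly why they can't settle an argument like this one.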

Also, if you read the video commentary, you'll notice that the author of the video claims that the studio will be introducing GPGPU acceleration for GCN iGPUs (at least; hopefully they'll support Gen8/Gen9 iGPUs as well) in the AotS beta. If that's true, I would expect Kaveri APUs to smoke any FX processor. And no, I would not expect them to use the iGPU to handle dGPU draw calls . . .

The problem is that the 4-module / 8-thread Bulldozer CPUs like the FX8350 were released a long time ago. Now people compare those old AMD Bulldozer CPUs with the latest Intel CPUs, which is not fair.

It would be more interesting to compare the older AMD Bulldozer CPUs with the older Intel CPUs that were available when those AMD CPUs were released, but do that comparison using DX12 and the latest SW and games. Then we would get a better picture of how competitive AMD's Bulldozer architecture would have been if SW had been available at its release to make use of its full potential.

Eh. You're trying to gloss over the fact that AMD has had problems releasing new products by focusing on technology from years ago. People compare Vishera to Skylake because AMD has (generally) not released anything faster than Vishera. Kaveri technically has more power under the hood, but to date only a tiny number of software developers have bothered to avail themselves of that power.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
True, but if DX12 utilizes "moar cores" so well, do you really think a 4 core Kaveri APU, based on the same basic architecture as Vishera, will be faster than an 8 core FX8350?

Not to mention the little detail that AMD's DX12 driver only scales to 4 cores. Even AMD doesn't really believe in "moar cores".
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
FX performance/$ in the latest games is exceptional, and with DX12 it will be even better.

No, the exceptional value is to get an i3 6100. Assuming you don't already sit on a previous i3.

And then you don't have to cherry-pick games to get around the fact that FX performs like a turd overall.

We already saw the great FX performance you dream about in AOTS and Fable. How was AOTS again? That's right: an i3 beating 6 and 8 FX cores in an AMD-sponsored game.

[Image: ashes-r9390x.png]


Just wait... wait and then wait some more. What's the next made-up illusion that will save the FX CPUs?
 

MiddleOfTheRoad

Golden Member
Aug 6, 2014
1,123
5
0
Oh yeah? In which games?

And secondly, how did you reach the conclusion that the code is more efficient? Linux has a smaller user base, a smaller programmer base, a smaller shared-library base, a smaller everything. How does all of that add up to Linux having more efficient code?
Maybe Linux just can't cope with modern wide cores... just like h.264 or 7zip on Windows.

It is well known that Linux is just a more compact, more efficient OS than Windows. The 64-bit version of Linux generally only needs 5-7 GB of hard drive space for the entire OS versus about 20+ GB for a 64-bit Windows install. I know from personal experience that many apps do run faster on Bulldozer/Vishera under Linux -- a great example is BOINC / World Community Grid.

As for Linux, many old computers that are saddled with no upgrade path from Windows XP due to hardware limitations can still install a modern Linux distro to stay current on security features. Again, that's because Linux is just a slimmer OS overall.

As for performance, the FX-8350 outran the i7 3770K in about half of the benchmarks under Linux in this review: http://www.phoronix.com/scan.php?page=article&item=amd_fx8350_visherabdver2&num=6
 

MiddleOfTheRoad

Golden Member
Aug 6, 2014
1,123
5
0
No, the exceptional value is to get an i3 6100. Assuming you don't already sit on a previous i3.

Just wait... wait and then wait some more. What's the next made-up illusion that will save the FX CPUs?

Again, your "point of view" is totally collapsing. You keep cherry-picking a single game benchmark -- AtenRa just posted 5 that are remarkably close. Any analytical person would immediately toss your benchmark as an outlier -- and average the 5 that AtenRa posted instead.

BTW, I don't recall cross-shopping an i3 6100 back in 2012 when I purchased an FX 8320 -- because back then, an i3 6100 was CPU sperm. Not a particularly relevant comparison.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Again, your "point of view" is totally collapsing. You keep cherry-picking a single game benchmark -- AtenRa just posted 5 that are remarkably close. Any analytical person would immediately toss your benchmark as an outlier -- and average the 5 that AtenRa posted instead.

BTW, I don't recall cross-shopping an i3 6100 back in 2012 when I purchased an FX 8320 -- because back then, an i3 6100 was CPU sperm. Not a particularly relevant comparison.

Right, I cherry-pick because we all know how great the FX CPU is. I mean, let's look at the market share. Everyone is just buying them left and right. And consider your Linux argument, with servers being broadly Linux. We can just see AMD's huge market share there. Oh wait...

[Image links: gamegpu.ru per-CPU benchmark charts for ARK: Survival Evolved, Star Wars Battlefront beta, Batman: Arkham Knight, Sid Meier's Civilization: Beyond Earth, Project CARS, Total War: Attila, and Dota 2 Reborn]
But I am sure you already know this. Just like everyone else already knew. Hence they avoid buying the FX CPUs.
 

TheELF

Diamond Member
Dec 22, 2012
4,029
753
126
It is well known that Linux is just a more compact, more efficient OS than Windows.
It is more compact, you are right there. It is more compact because it lacks everything: no graphics API, no .NET Framework, no Visual C++, nothing that can improve performance. If you only need straight-up x86 code then sure, I guess it might be a bit faster if it runs less background stuff.

As for the Linux benches... does anybody even know what these programs are for and what they do?