Poor Ryzen Performance in Business Applications

slashy16

Member
Mar 24, 2017
150
57
71
We are currently looking to replace 18 workstations that run a combination of Lightroom, SolidWorks, and Revit, and possibly Premiere Pro. The systems are currently all running Xeon E3-1225 v3s (Haswell quad-cores). My first thought was the TR 1920X, Ryzen 1800X, or SKL-X. It's a pain in the ass finding reliable real-world benchmarks for these applications. I went through Puget Systems' articles last week and noted that AMD chips aren't doing well in the applications we will be using. What is the cause of the poor performance? Application optimization, or something else?


https://www.pugetsystems.com/all_articles.php







 

PhonakV30

Senior member
Oct 26, 2009
970
332
136


Hmm, they need to make it better!

SolidWorks is all about single-threaded performance, except for rendering. This link

This link shows strong Ryzen performance: V-Ray RT

That Lightroom link is all about performance between the CC and 2015 versions, and some tests show weak results from the new version, like this one
where even the Ryzen 1800X is on par with Intel's 7900X ==> Link
So don't expect too much from Ryzen when single-threaded performance, clock speed, or AVX-512 is a high priority.
 
Last edited:

TheELF

Diamond Member
Dec 22, 2012
3,033
305
126
Do they use the Intel Compiler?
Everything and anything uses the Intel compiler; AMD uses it for all of its drivers.

Ryzen only has high IPC, which means it needs software that issues a lot of instructions per cycle to run at (close to) full potential.
Intel has spent huge amounts of money on research to make its CPUs run anything well, even software that only issues a few instructions per cycle.
 

coercitiv

Diamond Member
Jan 24, 2014
3,761
3,576
136
My first thought was the TR 1920X, Ryzen 1800X, or SKL-X.
No, they weren't; your post history on this forum says otherwise.

Your opinion on TR.
I wanted TR and was ready to buy one now that all of the stability issues have been worked out in ESXi and CAD, but Intel released SKL-X and CFL, and there is literally no reason to choose Ryzen anymore unless you are on some sort of budget.

Mark my words: if AMD releases a multicore CPU that's within a few percentage points of Intel, I would spend money on it in a heartbeat. But 15-20 percent less performance per core? No way.

The only AMD product that's interesting is EPYC. I purchase 20-30 ESXi hosts a year, and the thought of 32-core single-socket 2U servers with enough lanes for 6-8 PCIe cards makes me giddy.
Your opinion on 1800X.
There are very few niche cases where an 8-core Ryzen will beat an 8700K by enough of a margin to justify sacrificing all of that ST performance.
It's great that an 1800X might slightly edge out an 8700K in these niche tasks, but how many users really want that benefit at the expense of hobbling every other task they do? I was considering a Ryzen 1800X for my home workstation (vSphere\HYPV) and was almost willing to put up with all of Ryzen's issues, but then the 8700K came out.

AMD has had 8-core processors on the market for years, under $100 no less, and no one wanted them. Ryzen in its current form is just a repeat of this.
So let's clear the trolling halo off this thread and discuss the only two options you were truly considering: Skylake-X versus Coffee Lake 8700K.
 

slashy16

Member
Mar 24, 2017
150
57
71
Hmm, they need to make it better!

SolidWorks is all about single-threaded performance, except for rendering. This link

This link shows strong Ryzen performance: V-Ray RT

That Lightroom link is all about performance between the CC and 2015 versions, and some tests show weak results from the new version, like this one
where even the Ryzen 1800X is on par with Intel's 7900X ==> Link
So don't expect too much from Ryzen when single-threaded performance, clock speed, or AVX-512 is a high priority.
I'm looking for detail on the specific applications I listed. We are reusing our existing Quadro P2000 cards, not using 3x Titan cards.
 

MajinCry

Platinum Member
Jul 28, 2015
2,486
555
136
Everything and anything uses the Intel compiler; AMD uses it for all of its drivers.

Ryzen only has high IPC, which means it needs software that issues a lot of instructions per cycle to run at (close to) full potential.
Intel has spent huge amounts of money on research to make its CPUs run anything well, even software that only issues a few instructions per cycle.
To the contrary, most games use some other compiler (probably Microsoft's). I gave the Intel Compiler Patcher a run-through, and it only found three games that use it: Rome: Total War (the original), Morrowind, and some third-party .dll in Conan Exiles.

AMD does not use it in all their drivers. It's only used in a couple:

amdhwdecoder_32.dll
amdmftdecoder_32.dll
amf-mft-decvp9-decoder32.dll
amf-mft-mjped-decoder32.dll

It's used in a couple of .dlls for Mudbox 2017, and is not used in 3DS Max 2013, fwiw.

And about that low-key advertising in your last sentence: the Intel compiler is optimized for various instruction sets, but when it detects a non-Intel CPUID, it makes the code take a much older SSE instruction branch. What happens if you patch the CPUID check? Lo and behold, AMD and VIA CPUs get an uplift in performance, because the compiler is no longer deliberately gimping those CPUs.

Here's what Agner had to say about it: http://www.agner.org/optimize/blog/read.php?i=49

And here's the Intel Compiler Patcher: http://www.majorgeeks.com/files/details/intel_compiler_patcher.html


So do we know whether the programs the OP is talking about use Intel's compiler? If so, we ought to have the sites re-test after applying the patch.
 

PhonakV30

Senior member
Oct 26, 2009
970
332
136
Looking for detail on specific applications I listed. We are reusing our existing Quadro P2000 cards and not using 3x Titan cards.
It doesn't matter; Ryzen's performance is the same. Like I said, don't expect more from Ryzen where Intel has the upper hand. Also, compared to your old Intel CPUs, the performance is decent, so I don't see an issue.
 

The Stilt

Golden Member
Dec 5, 2015
1,709
3,057
106
To the contrary, most games use some other compiler (probably Microsoft's). I gave the Intel Compiler Patcher a run-through, and it only found three games that use it: Rome: Total War (the original), Morrowind, and some third-party .dll in Conan Exiles.

AMD does not use it in all their drivers. It's only used in a couple:

amdhwdecoder_32.dll
amdmftdecoder_32.dll
amf-mft-decvp9-decoder32.dll
amf-mft-mjped-decoder32.dll

It's used in a couple of .dlls for Mudbox 2017, and is not used in 3DS Max 2013, fwiw.

And about that low-key advertising in your last sentence: the Intel compiler is optimized for various instruction sets, but when it detects a non-Intel CPUID, it makes the code take a much older SSE instruction branch. What happens if you patch the CPUID check? Lo and behold, AMD and VIA CPUs get an uplift in performance, because the compiler is no longer deliberately gimping those CPUs.

Here's what Agner had to say about it: http://www.agner.org/optimize/blog/read.php?i=49

And here's the Intel Compiler Patcher: http://www.majorgeeks.com/files/details/intel_compiler_patcher.html


So do we know whether the programs the OP is talking about use Intel's compiler? If so, we ought to have the sites re-test after applying the patch.
ICL hasn't been gimping non-Intel CPUs for years.
ICL also happens to be the fastest compiler for Ryzen (on average, GCC 7.1 / MSVC 2017 / ICL 2017).

If some application uses ICL from < 2011 era, then that's another story entirely.
 
  • Like
Reactions: KompuKare

MajinCry

Platinum Member
Jul 28, 2015
2,486
555
136
ICL hasn't been gimping non-Intel CPUs for years.
ICL also happens to be the fastest compiler for Ryzen (on average, GCC 7.1 / MSVC 2017 / ICL 2017).

If some application uses ICL from < 2011 era, then that's another story entirely.
Do we have any proof that the Intel compiler doesn't do any [redacted] CPUID checking these days?

A great way to tell would be to have a recent program we know is compiled with the Intel compiler, and have someone change the CPUID of their VIA CPU.




No profanity allowed in tech.


esquared
Anandtech Forum Director
 
Last edited by a moderator:

UsandThem

Elite Member
Super Moderator
May 4, 2000
13,053
3,710
146
So let's clear the trolling halo off this thread and discuss the only two options you were truly considering: Skylake-X versus Coffee Lake 8700K.
BURN! lol :)
And he doesn't even address those posts... almost like they never even happened! :rolleyes: But yeah, after reading the OP, I knew this thread was only for the benefit of cherry-picking a few results.
 

slashy16

Member
Mar 24, 2017
150
57
71
No they weren't, your post history on this forum says otherwise.

Your opinion on TR.

Your opinion on 1800X.

So let's clear the trolling halo off this thread and discuss the only two options you were truly considering: Skylake-X versus Coffee Lake 8700K.
My opinions on Ryzen/TR differ between personal use/personal work and my professional career. I'm not purchasing these machines for myself, and games will not be run on them whatsoever. These processors are for professional workstations. I have had a TR machine at work since launch to test our simulation department's custom applications. I would be ineffective as a systems engineer if I didn't test new hardware. I have a low opinion of AMD because of what they did to ATI, but that doesn't mean I can't be excited about things like TR, EPYC, and Ryzen Mobile.

I'm genuinely interested in finding out if there are fixes, plugins, or filters being worked on to address the performance in these applications.

Regards,
 

Topweasel

Diamond Member
Oct 19, 2000
5,326
1,524
136
Do we have any proof that the Intel compiler doesn't do any [redacted] CPUID checking these days?

A great way to tell would be to have a recent program we know is compiled with the Intel compiler, and have someone change the CPUID of their VIA CPU.
Wasn't there a thing about SPECint where it benchmarks much higher than it performs when compiled with Intel's compiler? Like, a process that means nothing to actual CPU performance is needlessly uplifted, but it allows Intel to advertise a much larger number than anyone else could ever get on it. I know it was a big deal with EPYC, because AMD posted their comparisons with an asterisk after rerunning the tests on an Intel system with a different compiler (GCC, I think).
 

MajinCry

Platinum Member
Jul 28, 2015
2,486
555
136
Wasn't there a thing about SPECint where it benchmarks much higher than it performs when compiled with Intel's compiler? Like, a process that means nothing to actual CPU performance is needlessly uplifted, but it allows Intel to advertise a much larger number than anyone else could ever get on it. I know it was a big deal with EPYC, because AMD posted their comparisons with an asterisk after rerunning the tests on an Intel system with a different compiler (GCC, I think).
Intel's compiler does have better performance (~40%) over its competitors, so that may very well have contributed. I tried to see if someone has run SPECint with a VIA CPU (VIA allows you to modify the CPUID string), but I couldn't find anything.
 

coercitiv

Diamond Member
Jan 24, 2014
3,761
3,576
136
My opinions on Ryzen/TR differ between personal use/personal work and my professional career. I'm not purchasing these machines for myself, and games will not be run on them whatsoever. These processors are for professional workstations.
Your other posts were clearly written with professional applications in mind as well, and the context of the discussion was clearly emphasizing various workloads, other than gaming.

List of things I've done in the past 8 hours where your TR will be unable to keep up with my 6600K

Complex Office Excel
Smartplant
AutoCAD
Bentley MS
Photoshop
Acrobat

Things I could have used TR for
VMware Workstation capturing new SCCM images
When it comes to mainstream Ryzen, you simply discarded the entire lineup and compared it to Bulldozer-based chips.
AMD has had 8-core processors on the market for years, under $100 no less, and no one wanted them. Ryzen in its current form is just a repeat of this.
I don't know how strongly you can separate your personal and professional thought process when it comes to evaluating CPU performance, but I would have a lot of issues with someone recommending a certain solution for Photoshop and AutoCAD and a wildly different one for Lightroom and Revit.
 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
20,320
8,015
136
The 1950X is in only one of the benchmarks you listed. As others have said, based on your previous posting history, this appears to be a troll thread, and I will lock it unless there is some logic and straightforwardness added.

Edit: If you look here, the 1950X blows the 7900X away for the same price, and that is just one slide. It rules the entire review (except gaming).

https://www.anandtech.com/show/11697/the-amd-ryzen-threadripper-1950x-and-1920x-review/8
 
Last edited:

slashy16

Member
Mar 24, 2017
150
57
71
When it comes to mainstream Ryzen, you simply discarded the entire lineup and compared it to Bulldozer-based chips.
Actually, I was replying to a ridiculous thread about 8 cores being mainstream; we have already had 8-core CPUs, and nothing has shifted toward utilizing more threads in the home market. In home use, anything more than 4c/8t is mostly a gimmick. - personal opinion

There is a difference between personal hobby use of professional apps and actual production work in a business environment. I lightly edit raw photos I've taken on my Canon 5D Mark III and create small objects with AutoCAD, then export to a 3D printer for fun.

That's very much different from using those applications to edit or render 4K images, spin 3D drawings containing thousands of moving parts, and then render to video.

The point of this thread was to figure out if the poor performance of Ryzen in these tasks would at some point be addressed, or if this is what it will always be. If this is the best it can do, then I need to look at the possibility of 8-10 core Intel workstations with 2-4 TR boxes in our comms rooms running batch jobs to do the renders (TR's strength).
 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
20,320
8,015
136
Actually, I was replying to a ridiculous thread about 8 cores being mainstream; we have already had 8-core CPUs, and nothing has shifted toward utilizing more threads in the home market. In home use, anything more than 4c/8t is mostly a gimmick. - personal opinion

There is a difference between personal hobby use of professional apps and actual production work in a business environment. I lightly edit raw photos I've taken on my Canon 5D Mark III and create small objects with AutoCAD, then export to a 3D printer for fun.

That's very much different from using those applications to edit or render 4K images, spin 3D drawings containing thousands of moving parts, and then render to video.

The point of this thread was to figure out if the poor performance of Ryzen in these tasks would at some point be addressed, or if this is what it will always be. If this is the best it can do, then I need to look at the possibility of 8-10 core Intel workstations with 2-4 TR boxes in our comms rooms running batch jobs to do the renders (TR's strength).
This is why this is a troll thread. It does NOT have poor performance, per my link. You cherry-picked the worst benchmarks you could find, and only ONE of them even has a 1950X benchmark.
 

slashy16

Member
Mar 24, 2017
150
57
71
This is why this is a troll thread. It does NOT have poor performance, per my link. You cherry-picked the worst benchmarks you could find, and only ONE of them even has a 1950X benchmark.
Please look at the benchmarks on the site as well as what I linked. In SolidWorks, for example, in tasks that look like they only use 2-6 threads, Ryzen is 25-35% behind. That seems excessive, and is what I would deem poor performance. It makes up ground in the render tests. This brings me back to the whole point of this thread: whether we can expect improvements. That's why I am leaning towards 8700K machines right now, with some TR boxes for remote rendering.

I have a thread asking questions related to the performance of CPUs I am considering. I'm not here to make you feel good about your CPU purchase.
Your feelings about me or my past posts have nothing to do with this thread and are darn near personal attacks.
 

The Stilt

Golden Member
Dec 5, 2015
1,709
3,057
106
Do we have any proof that the Intel compiler doesn't do any [redacted] CPUID checking these days?

A great way to tell would be to have a recent program we know is compiled with the Intel compiler, and have someone change the CPUID of their VIA CPU.
It does check the CPUID regardless of which compiler settings are used (common arch, additional Intel µarch tune, i.e. Qax, or Intel-µarch-only tune, i.e. Qx); that part is integral.

Based on my experience on Ryzen and on Intel Haswell and newer, there is no clear difference between the common option (/arch:CORE-AVX2) and the Intel-specific additional (/QaxCORE-AVX2) option.
The advantage of the latter is that it is auto-dispatched, meaning you can use the same binary on a P4 and on Skylake-X, whereas the common binary will only run on CPUs with AVX2 and FMA3 support. The Intel-only (Qx) option won't run on non-Intel CPUs without patching.
 

UsandThem

Elite Member
Super Moderator
May 4, 2000
13,053
3,710
146
I have a thread asking questions related to the performance of CPUs I am considering. I'm not here to make you feel good about your CPU purchase.
Your feelings about me or my past posts have nothing to do with this thread and are darn near personal attacks.
Leave him alone Markfw, you bully! ;)

I have a sneaking suspicion this one's getting locked (as it should be).
 

MajinCry

Platinum Member
Jul 28, 2015
2,486
555
136
It does check the CPUID regardless of which compiler settings are used (common arch, additional Intel µarch tune, i.e. Qax, or Intel-µarch-only tune, i.e. Qx); that part is integral.

Based on my experience on Ryzen and on Intel Haswell and newer, there is no clear difference between the common option (/arch:CORE-AVX2) and the Intel-specific additional (/QaxCORE-AVX2) option.
The advantage of the latter is that it is auto-dispatched, meaning you can use the same binary on a P4 and on Skylake-X, whereas the common binary will only run on CPUs with AVX2 and FMA3 support. The Intel-only (Qx) option won't run on non-Intel CPUs without patching.
Ya ya, but with VIA we'd be able to tell whether the CPUID check is the same as in all of Intel's previous compilers, i.e. whether gimped code is still fed to non-Intel CPUs.
 

The Stilt

Golden Member
Dec 5, 2015
1,709
3,057
106
Ya ya, but with VIA we'd be able to tell whether the CPUID check is the same as in all of Intel's previous compilers, i.e. whether gimped code is still fed to non-Intel CPUs.
The exact same thing can be achieved by patching the compiled binaries.
You need to change three instructions (cmp to test); that's all.

No need to take my word for it; you can try it yourself (trials for Intel Compiler 2018 are available) ;)
 

USER8000

Golden Member
Jun 23, 2012
1,504
708
136
This is why this is a troll thread. It does NOT have poor performance, per my link. You cherry-picked the worst benchmarks you could find, and only ONE of them even has a 1950X benchmark.
Even for Lightroom and DxO Optics Pro, there are reviews from websites like Hardware.fr which show Ryzen and Threadripper as very competitive at their price points:

http://www.hardware.fr/articles/970-11/traitement-photos-lightroom-dxo-optics-pro.html

http://www.hardware.fr/articles/967-12/traitement-photos-lightroom-dxo-optics-pro.html

Look at the compilation and video encoding scores too.
 
