Do high-end users use AMD instead of Intel?


TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
What. Seriously. I read that three times sober and I'm still lost.

Well, there's your problem...

What is there not to understand?
AMD went with a server design for the FX line, where all you really need is integer work; it's "fetch this data from databaseX, please" all day long. So they went with eight separate integer cores, each able to execute several instructions per clock cycle. But the floating-point units are a whole mess: there are only four of them to begin with. They can be split up into eight half-units, but a shared unit has to get the data from the first thread in one cycle, the data from the second thread in the next cycle, and only start computing on the third cycle. Whatever you do, floating-point calculations will be subpar.
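
If you want to see the effect on your own machine, here's a rough probe (a sketch only, not a rigorous benchmark; the thread counts and loop bodies are arbitrary picks of mine): time the same number of threads doing pure integer work versus pure floating-point work. On a Bulldozer-style FX, the floating-point timings should scale noticeably worse past four threads, because pairs of integer cores share one FPU.

/* Sketch: integer vs floating-point scaling across threads.
 * Build: gcc -O2 -pthread fpu_probe.c -o fpu_probe */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ITERS 200000000L

static void *int_work(void *arg) {
    (void)arg;
    volatile long acc = 1;
    for (long i = 0; i < ITERS; i++) acc = acc * 3 + 1;            /* integer ALU work */
    return NULL;
}

static void *fp_work(void *arg) {
    (void)arg;
    volatile double acc = 1.0;
    for (long i = 0; i < ITERS; i++) acc = acc * 1.0000001 + 0.5;  /* FPU work */
    return NULL;
}

static double run(void *(*fn)(void *), int n) {
    pthread_t t[8];
    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (int i = 0; i < n; i++) pthread_create(&t[i], NULL, fn, NULL);
    for (int i = 0; i < n; i++) pthread_join(t[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &b);
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void) {
    int counts[] = {1, 4, 8};
    for (int i = 0; i < 3; i++)
        printf("%d threads  int: %.2fs  fp: %.2fs\n",
               counts[i], run(int_work, counts[i]), run(fp_work, counts[i]));
    return 0;
}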

So of course the computing industry now has to accommodate this stupid design if it wants to be fair...


Troll logic 101!
 

DrMrLordX

Lifer
Apr 27, 2000
22,939
13,024
136
Awesome help, guys!

I decided to go with the i5-4690K since it seems to be the best for my build. My last question is this: I plan on playing and working at 5760x1080 resolution. When it comes to work, I want to be able to open as many browser tabs and as many programs as I can. I won't always be gaming at 5760x1080 when I'm multi-tasking. I would have one screen with a game I'm playing (Witcher 3), the second screen for overclocking and monitoring, while my last screen will be used for internet surfing, music, movies, and documents.

Would an i5 be enough, or do I need an i7?

In general, the i7 will give you a little more oomph, that is, if we're talking about the 4790K here. Higher base clock, higher turbo clock, and HT are worth it if you're going to keep that chip for a while. Of course, you're overclocking, so maybe only the HT will be a factor.

Whether you'll be able to run a lot of software simultaneously and drive multiple monitors while doing so has a lot to do with your storage, memory, and video card choices. Be sure to use an SSD for speed, make sure you are using at least DDR3-2400 (if not faster), and consult the Video forum when it comes to multi-monitor support.
 

TeknoBug

Platinum Member
Oct 2, 2013
2,084
31
91
QuickSync cannot provide the same quality AT THE SAME BITRATE as the classic, software-only x264 path.
If you raise the bitrate you can get excellent-quality video, just bigger in size. For the consumer who is transcoding the latest episode to watch on his TV, just to delete it afterwards, this doesn't really matter.

For the "serious" collector who needs to have every movie (ever made) on his drive, the size difference will become noticeable.

And yes, I do use it all the time for screen capture with OBS.
You can tell the difference in quality? I switched from AMD APP and Nvidia CUDA to QuickSync to take a load off the GPUs when recording, and I see no difference. I use Mirillis Action and Bandicam; maybe OBS is the problem?
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Which is exactly what I just said. Thanks for repeating it.

No, it isn't... You made it seem like it was a 50/50 split, and you made it seem like where the FX wins, it wins by a mile. It's not a 50/50 split, and when the FX wins, it's by a tiny amount. So i5 victories are frequent and by large amounts, while FX victories are few and, most of the time, by infinitesimal amounts. That isn't what you said. Now you know; you're welcome.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
QuickSync cannot provide the same quality AT THE SAME BITRATE as the classic, software-only x264 path.
If you raise the bitrate you can get excellent-quality video, just bigger in size. For the consumer who is transcoding the latest episode to watch on his TV, just to delete it afterwards, this doesn't really matter.

For the "serious" collector who needs to have every movie (ever made) on his drive, the size difference will become noticeable.

And yes, I do use it all the time for screen capture with OBS.

Bitrate bloat using QuickSync has become less of an issue with the latest releases of Handbrake. Make sure you are also using the latest drivers; Intel has been continuing to improve the quality of QuickSync's output.
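
For anyone who wants to check this themselves, here's a minimal A/B sketch (assuming an ffmpeg build with both libx264 and the h264_qsv QuickSync encoder on the PATH; "in.mkv" is a placeholder input): encode the same source at the same average bitrate through both paths, then compare the outputs by eye.

/* Hypothetical A/B encode: same source, same average bitrate,
 * software x264 vs QuickSync. Assumes ffmpeg with libx264 and QSV.
 * Build: gcc ab_encode.c -o ab_encode */
#include <stdlib.h>

int main(void) {
    /* software x264 at 1500 kbit/s */
    system("ffmpeg -y -i in.mkv -c:v libx264 -b:v 1500k -c:a copy x264.mkv");
    /* QuickSync hardware encode at the same 1500 kbit/s */
    system("ffmpeg -y -i in.mkv -c:v h264_qsv -b:v 1500k -c:a copy qsv.mkv");
    return 0;
}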
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
You can tell the difference in quality? I switched from AMD APP and Nvidia CUDA to QuickSync to take a load off the GPUs when recording, and I see no difference. I use Mirillis Action and Bandicam; maybe OBS is the problem?

I can tell the difference. I convert movies once in a while, and with x264 I can use 1500 kbit and it looks good (on a small monitor); with QuickSync I have to go higher for the same result. At 1500 you see big fat blocks.

Obviously, if you use a program that records with very high bandwidth (lots of GBs being written) from the get-go, you will never notice any difference.

And as I said from the beginning, for the consumer there is no difference that he or she will notice; consumers have no idea how big a file should be. All they see is the final result, and that is ultra-fast conversion with great quality.


No, OBS is not the problem. I record at 6000 kbit variable (with a 15000 max) and the quality is great.
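
For reference, those recording settings map onto the usual rate-control knobs; here's a sketch of the equivalent offline encode (libx264 shown as a stand-in, and the buffer size is an arbitrary pick of mine):

/* Sketch: variable bitrate with a hard ceiling, mirroring the
 * "6000 kbit variable with 15000 max" figures above. Assumes ffmpeg
 * with libx264 on the PATH; file names are placeholders.
 * Build: gcc vbr.c -o vbr */
#include <stdlib.h>

int main(void) {
    system("ffmpeg -y -i capture.mkv -c:v libx264 "
           "-b:v 6000k -maxrate 15000k -bufsize 30000k "
           "-c:a copy capped.mkv");
    return 0;
}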
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
Bitrate bloat using QuickSync has become less of an issue with the latest releases of Handbrake. Make sure you are also using the latest drivers; Intel has been continuing to improve the quality of QuickSync's output.
It's not about me; I am very satisfied with QuickSync.
It's about the reviews of QuickSync: all of them made it look as if QuickSync was subpar, and thus you get comments like the one from atten-ra.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
27,269
16,120
136
It seriously depends on the workload. As I have mentioned previously, in heavy multi-threaded workloads an octo-core FX will mop the floor with an i5: scientific computing, rendering, etc. The FX will indeed get owned by an i5 in anything single-threaded.

It just comes down to whether the software can push all eight threads. Same thing for the i7 versus the i5: only software that can use 8+ threads will demonstrate any performance advantage of the i7 over the i5.

Cinebench is rendering, correct? Here:
http://anandtech.com/bench/product/1289?vs=1261
An FX-9590 gets killed by an i5-4690K. So how is the FX faster in multi-threaded work? Is this "it wins the one bench I can find" garbage?

Please
 

brandonb

Diamond Member
Oct 17, 2006
3,731
2
0
I know the OP has probably long since left this thread, but I generally avoid AMD for one main reason: their motherboards are usually very unstable. I've never had one, either prebuilt (Gateway computers) or self-built, that stood the test of time. Friends of mine have blown out motherboards within a year or two quite frequently. I've seen it happen with Intel, but it's a lot rarer. Also, my workplace bought AMD (for cost purposes), something like 40 machines (this was years ago now). Within a year, I believe only 25% survived. They had to stop buying them because the IT crew had to keep repairing them (getting replacement parts, filling out warranty tickets) and it became unmanageable.

The AMD APU has its niches; it would be great for HTPCs and such. But personally I do not want to deal with rebuilding a machine that often due to faulty motherboards.

I don't know what it is. I've never seen a multi-year successful AMD setup. They should not license out to cheap manufacturers; I believe it does more harm than good to their company image, even though they do make a few $$$ at it. It hurts them in the long run.

(I know I'll get bashed for this post, but it's just my experience, so be nice to me!)
 

TeknoBug

Platinum Member
Oct 2, 2013
2,084
31
91
Yes, Cinebench is Intel-biased, which is why I usually ignore it; however, even on benchmarks that aren't Cinebench, the FX doesn't win by a mile.
 

MiddleOfTheRoad

Golden Member
Aug 6, 2014
1,123
5
0
No, it isn't... You made it seem like it was a 50/50 split, and you made it seem like where the FX wins, it wins by a mile.

In scientific integer computing -- it does win by a mile.
Twice the threads = twice the workload.

Even processing on slower AMD cores, an i5 can't keep up with the workload of an FX octocore (4 projects vs 8 projects simultaneously).
But there are limits; no FX can touch the Haswell i7s, which is why I now run two i7-4790Ks on WCG.

But i7 > FX octo-core > i5 on integer computing, by a pretty substantial margin.
 
Last edited:

MiddleOfTheRoad

Golden Member
Aug 6, 2014
1,123
5
0
Cinebench is rendering, correct? Here:
http://anandtech.com/bench/product/1289?vs=1261
An FX-9590 gets killed by an i5-4690K. So how is the FX faster in multi-threaded work? Is this "it wins the one bench I can find" garbage?

Please

Cinebench, like many benchmarks, is complete bullshit.

This is one area where AMD and Nvidia generally agree:
http://www.cnet.com/news/amd-quits-benchmark-group-implying-intel-bias/

Nobody that I know in the film industry actually uses Cinebench. The vast majority run Maya, Adobe, Avid or Apple software.

It's totally shocking to me (heavy sarcasm) that Cinebench, software compiled with ICC (the Intel C++ Compiler), runs poorly on AMD CPUs. Give me a break, dude. PURE 100% BIAS. How about some PhysX benchmarks on Radeons while you're at it?
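
For what it's worth, the complaint has a concrete mechanism: ICC-built binaries have historically selected code paths based on the CPUID vendor string rather than the advertised feature flags. Here's a minimal probe of that string (GCC/Clang on x86 only; just an illustration of what the dispatcher keys on):

/* Read the CPUID vendor string, the value runtime dispatchers have
 * historically keyed on ("GenuineIntel" vs "AuthenticAMD").
 * Build: gcc -O2 vendor.c -o vendor  (x86 with GCC/Clang only) */
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx)) return 1;
    memcpy(vendor,     &ebx, 4);   /* vendor string is packed into  */
    memcpy(vendor + 4, &edx, 4);   /* EBX, EDX, ECX, in that order  */
    memcpy(vendor + 8, &ecx, 4);
    printf("CPU vendor: %s\n", vendor);
    return 0;
}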

An accurate benchmark would be one that isn't optimized for any specific manufacturer's architecture... so Cinebench should be thrown out the window on those grounds alone.

Linux is probably cleaner for unbiased benchmarks (more generic compilers), and there the FX-8350 outpaced the i7-3770K in most tests:
http://www.phoronix.com/scan.php?page=article&item=amd_fx8350_visherabdver2&num=4

BTW, one of Hollywood's biggest render farms is 100% AMD Opteron-powered. They mostly run Maya (like the rest of Hollywood). AMD CPUs are beasts at rendering when they aren't artificially gimped by Intel compilers...
http://www.rendersolve.com/
 
Last edited:

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Cinebench like many benchmarks is complete bullshit.
This is one area that generally AMD and Nvidia agree:

Dear AMD,
Using Cinebench to show the performance of your chips is complete BS.

[Attached image: AMD slide "Slide 12 - 15W Optimized"]
 
Last edited:

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
An accurate benchmark would be one that isn't optimized for any specific manufacturer's architecture... so Cinebench should be thrown out the window on those grounds alone.

Lol you serious?
99% of software out there is compiled with the Intel compiler
(not to mention that 99% of software uses far fewer than 8 threads).

If you want benches that are representative of everyday use, then they have to be compiled with the most common compiler.

Saying that you have to use a public-domain compiler that is so crappy it has no optimizations at all... well, that's just sad.
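
To make the compiler point concrete either way, here's a trivial experiment anyone can run (hypothetical file name; any small hot loop will do): build the identical source with different compilers or flag sets and watch the "benchmark" number move while the hardware stays the same.

/* loop.c: the same source gives different "benchmark" numbers
 * depending on compiler and flags, with zero hardware change.
 * Try:  gcc -O0 loop.c -o loop && ./loop
 *       gcc -O2 -march=native loop.c -o loop && ./loop
 * (and icc, if you have it) */
#include <stdio.h>
#include <time.h>

int main(void) {
    double acc = 0.0;
    clock_t start = clock();
    for (long i = 1; i <= 50000000L; i++)
        acc += 1.0 / (double)i;   /* printed below so it can't be optimized away */
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("sum=%.6f  time=%.2fs\n", acc, secs);
    return 0;
}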
 

MiddleOfTheRoad

Golden Member
Aug 6, 2014
1,123
5
0
Saying that you have to use a public-domain compiler that is so crappy it has no optimizations at all... well, that's just sad.

It would be a far more accurate reflection of the actual hardware performance being benchmarked.

It's also why AMD CPUs are just as fast as their contemporary Intel rivals under Linux (an FX-8350 is just as fast as an i7-3770K in Linux, but artificially gimped to roughly i3 performance under most Windows apps). IMO, that's all down to the Intel compiler; the software is clearly holding back the hardware. The erratic optimizations are why I mostly dumped Windows and moved to Linux, and I run about 70% Intel stuff right now. Although I wouldn't advise it for people who have Radeons; their Linux drivers are still disappointing. Nvidia is definitely a great way to roll under Linux.
 
Last edited:

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
It would be a far more accurate reflection of the actual hardware performance being benchmarked.

Which is exactly what the benchmarks do.

You just don't like the results.

It's up to AMD to make CPUs that run software well; it's not up to software devs to make their software run well on niche CPUs.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Yeah, because AMD has a brilliant marketing team.

Doesn't change the fact that Cinebench is a wildly inconsistent and misleading benchmark.

So then you're saying AMD is misleading consumers by showcasing their Cinebench performance.
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
I normally only buy AMD for APUs, to build nice, quiet HTPCs capable of handling most tasks well. I recently bought an AMD 8-core FX-8300 CPU, as I was able to get it dirt cheap (100 bucks), along with a Gigabyte motherboard for $17.00 after rebate (which already came). I know this isn't the norm price-wise in most parts of the world, but in this price range they have Intel beat handily, IMO. When spending more than $150 on a dedicated CPU I automatically switch to Intel, as AMD's top-end 8-core CPUs really are not much better than the entry-level 8-core chips (both overclock within 300-400 MHz of each other).

Intel all the way after this price point.
 

Azuma Hazuki

Golden Member
Jun 18, 2012
1,532
866
131
This is one reason I tend to favor AMD when the deals are good :wub:

My specific use case is massive Gentoo compiles. While I'd still rather have a 4790K than a 9590, on a budget the 83xx series is just fine for me.