Question: Performance hit going from 8700K to 3900X with same GPU


viivo

Diamond Member
May 4, 2002
This relates strictly to gaming performance, not productivity and other tasks.

Most of my gaming time is spent playing an MMO - FFXIV - which isn't exactly a demanding game, so I’m quite surprised that my framerate has become unplayable. With an 8700k and the same 5700 XT I was able to maintain 100FPS nearly 100% of the time, but since switching to the 3900X my FPS maxes out in the mid-70s to low 80s with frequent drops and spikes.

Windows 10 x64 clean install, all latest drivers and BIOS. CPU and GPU are properly cooled with third-party solutions and neither is entering throttling. FreeSync is not an option due to my monitor’s nauseating amount of flashing and flickering.

I doubt this is normal. Anyone know a potential cause?

Specs:
3900x
ASUS TUF X570
32GB C16 3200 @ 3200 / fclk 1600
5700 XT, 2560x1440 @ 100Hz
OS and games on separate NVMe drives
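
If it matters, here's a rough sketch of how I could quantify the drops and spikes from a per-frame log on each chip, assuming a PresentMon-style CSV with an MsBetweenPresents column (the file name and column name are placeholders to adjust to whatever the capture tool actually writes):

Code:
import csv
import statistics

# Rough sketch: summarize a per-frame log to compare the 8700K and 3900X runs.
# Assumes a PresentMon-style CSV with an "MsBetweenPresents" column; adjust
# the column name to match the actual log header.
def summarize(path, column="MsBetweenPresents"):
    with open(path, newline="") as f:
        frame_times_ms = [float(row[column]) for row in csv.DictReader(f)]
    fps = sorted(1000.0 / ft for ft in frame_times_ms if ft > 0)
    return {
        "avg_fps": statistics.mean(fps),
        "1%_low_fps": fps[int(len(fps) * 0.01)],  # rough 1st-percentile FPS
        "worst_frame_ms": max(frame_times_ms),
    }

print(summarize("ffxiv_3900x_run.csv"))  # placeholder file name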
 

myocardia

Diamond Member
Jun 21, 2003
Yeah, I've always found it strange that console ports to PC prefer Intel/NVIDIA when the base game was developed for AMD/AMD hardware. Maybe the different ports of game engines per system are much different than I have always thought.
I may be completely wrong here, but I'm guessing that's because when they do the port to PC, they realize there are considerably more Nvidia cards out there, and until fairly recently there were a lot more Intel CPUs being used for gaming. I'm sure in another year or two they may well stop designing the ports to perform better on Intel CPUs (to the extent that's possible), given how popular Ryzen has been, especially in the last year. They likely won't stop optimizing the ports for Nvidia GPUs, I'm guessing, unless something drastic happens in the GPU space. I am not at all an expert in this type of thing, BTW.
 

Hitman928

Diamond Member
Apr 15, 2012
5,244
7,793
136
myocardia said:
I may be completely wrong here, but I'm guessing that's because when they do the port to PC, they realize there are considerably more Nvidia cards out there, and until fairly recently there were a lot more Intel CPUs being used for gaming. I'm sure in another year or two they may well stop designing the ports to perform better on Intel CPUs (to the extent that's possible), given how popular Ryzen has been, especially in the last year. They likely won't stop optimizing the ports for Nvidia GPUs, I'm guessing, unless something drastic happens in the GPU space. I am not at all an expert in this type of thing, BTW.

The Nvidia optimizations happen more because they give studios a lot more resources to make them happen than AMD does. Even if AMD got to 50+% market share, that probably wouldn't change too much unless AMD starts to provide the same.
 

VirtualLarry

No Lifer
Aug 25, 2001
Hitman928 said:
The Nvidia optimizations happen more because they give studios a lot more resources to make them happen than AMD does. Even if AMD got to 50+% market share, that probably wouldn't change too much unless AMD starts to provide the same.
Or maybe it's just a property of the overall higher-level engine? Like Unreal Engine: maybe the console version is optimized for the consoles, and the PC version is optimized more for Nvidia cards. Kind of counter-intuitive, but it's possible that the engine-level optimization work is done in proportion to the market share of hardware on each platform, regardless of how similar the hardware may be between platforms versus other vendors' GPUs.
 

DrMrLordX

Lifer
Apr 27, 2000
Yeah, I've always found it strange that console ports to PC prefer Intel/NVIDIA when the base game was developed for AMD/AMD hardware. Maybe the different ports of game engines per system are much different than I have always thought.

FFXIV is a Gameworks title on PC.
 

viivo

Diamond Member
May 4, 2002
Unfortunately, one of the G.Skill DIMMs was defective; the other worked fine. Too bad, because I would rather have kept it than this cheap, fragile-feeling Patriot.

A follow-up question: when comparing two CPUs, are there things other than voltage, boost clocks, and the number of cores that can maintain boost simultaneously to look for in order to determine the "better" CPU? I know any differences are marginal and likely would not be noticeable in everyday use, but as I need only one, I'd like to keep the better chip no matter how slight the advantage.

For example, would comparing average clocks across all cores in HWiNFO64 at the end of a day be enough to start? Can the memory controller vary wildly? I'd rather keep the one that better handles memory overclocking than the one with slightly higher clock speeds, if it comes down to that.
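
Something like this is what I have in mind: let HWiNFO64 log to CSV for a day on each chip, then average each core's clock column. A rough sketch (the column names like "Core 0 Clock [MHz]" are guesses, so the matching would need to fit whatever the log's header actually says):

Code:
import csv
from collections import defaultdict

# Sketch: average the per-core clock columns out of a HWiNFO64 CSV log.
# Column names are assumptions; check the log's actual header row and
# adjust the matching below.
def average_core_clocks(path):
    totals = defaultdict(float)
    counts = defaultdict(int)
    with open(path, newline="", encoding="utf-8", errors="replace") as f:
        for row in csv.DictReader(f):
            for name, value in row.items():
                if name and "Core" in name and "Clock" in name:
                    try:
                        totals[name] += float(value)
                        counts[name] += 1
                    except (TypeError, ValueError):
                        pass  # skip blank or non-numeric samples
    return {name: totals[name] / counts[name] for name in totals}

for core, mhz in sorted(average_core_clocks("hwinfo_chip_a.csv").items()):
    print(f"{core}: {mhz:.0f} MHz")  # placeholder file name; run once per chip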
 

DrMrLordX

Lifer
Apr 27, 2000
The memory controller can vary, but usually not by that much. I haven't seen any good information on how much they vary with Matisse. The bigger variable seems to be how high you can get the IF (Infinity Fabric) to run. Some Matisse CPUs will hit 1900 MHz IF and some won't. The XT release allegedly allows higher IF speeds.
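
For what it's worth, the arithmetic behind that: in 1:1 (coupled) mode, FCLK matches the memory clock, which is half the DDR4 data rate, so DDR4-3200 pairs with FCLK 1600 and DDR4-3800 needs FCLK 1900. A quick sketch of that relationship (the 1900 MHz ceiling is just the commonly quoted figure, not a guarantee for any given chip):

Code:
# Sketch of the 1:1 (coupled-mode) arithmetic for Matisse:
# FCLK = UCLK = MEMCLK = DDR4 data rate / 2. If the chip can't hold that
# FCLK, you either back the memory off or run decoupled and eat the latency.
def fclk_for_1to1(ddr4_rate_mts):
    return ddr4_rate_mts // 2

MAX_STABLE_FCLK = 1900  # assumed ceiling; varies from sample to sample

for rate in (3200, 3600, 3800, 4000):
    needed = fclk_for_1to1(rate)
    verdict = "1:1 OK" if needed <= MAX_STABLE_FCLK else "needs decoupling"
    print(f"DDR4-{rate}: FCLK {needed} MHz -> {verdict}")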

The other variable is in FIT limits. Some Matisse CPUs will allow higher voltages as part of the FIT tables than others. Basically, that just limits how many volts the CPU will allow based on current, power draw, and temperature. It's more complicated than that, but I am not an expert on FIT tables.