Discussion: Core2Quad performance in more recentish games vs Bloomfield

loki1944

Member
Apr 23, 2020
99
35
51
So, I'm testing my Core2Quad Q9650@4.1GHz (FSB@1825MHz) with 16GB of DDR3 RAM@1800MHz to see how it fares in various games as an ultra-budget assessment; I've been doing the same with my Bloomfield and Gulftown systems (i7 920@3.8/960@4.3/980X@4.5). The Bloomfield/Gulftown CPUs smash through everything extremely well, often coming close to my other, newer systems (Ryzen 2600X/i7 6850K@4.5/i7 6700/i5 6600K@4.1/i7 7800X@4.8) when all are paired with a 980Ti.

I'm wondering if anyone knows why, in so many of the more recent games, the Core2Quad's FPS sits at ~21-27 regardless of settings, while in others the FPS is pretty decent? For example, Shadow of Mordor and Strange Brigade on ultra settings get 90+ FPS @1080p with the Q9650 paired with a 1060 6GB; Destiny 2 (Medium), Tannenberg (Medium), Vermintide (High), and Just Cause 3 (Very High) play decently well at 52, 37, 36, and 37 FPS, respectively. But if I try to run Ghost Recon Wildlands, The Division 2, or Hitman 2016, I can't break that 21-27 FPS regardless of settings.

I understand the C2Q is old, but it does have 4C/4T and my i5 6600K stomps it here. Is it an L2/L3 cache thing, an instruction set thing, or what? I'm just a bit fascinated by this and was wondering if someone knew why exactly.
 
Last edited:

teejee

Senior member
Jul 4, 2013
361
199
116
A lot of extensions came after Core, so one or some of those could be the cause. SSE4.1/4.2 or AVX, maybe?
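For the curious, this is roughly the kind of runtime check that decides which path a game takes (or whether it launches at all); just a minimal sketch using GCC/Clang builtins, not any particular game's code:

```cpp
// Minimal sketch (GCC/Clang builtins): the kind of runtime ISA check a game or
// engine uses to pick a code path. A Q9650 (Penryn) reports SSSE3 and SSE4.1 but
// not SSE4.2 or AVX, so anything gated on the latter two falls back or refuses to run.
#include <cstdio>

int main() {
    __builtin_cpu_init();  // populate the feature flags before querying them
    std::printf("SSSE3:  %d\n", __builtin_cpu_supports("ssse3"));
    std::printf("SSE4.1: %d\n", __builtin_cpu_supports("sse4.1"));
    std::printf("SSE4.2: %d\n", __builtin_cpu_supports("sse4.2"));
    std::printf("AVX:    %d\n", __builtin_cpu_supports("avx"));
    return 0;
}
```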
 

SPBHM

Diamond Member
Sep 12, 2012
5,056
409
126
It can only be the poor memory/FSB performance; the instruction support on Nehalem vs. the 45nm C2Q is not that different.

Core 2 Quad was two Core 2 Duos sharing the Pentium 4-style FSB (at much higher clocks) to connect to the northbridge; at some point that has to be a major bottleneck.
 
  • Like
Reactions: loki1944

loki1944

Member
Apr 23, 2020
99
35
51
It can only be the poor memory/FSB performance; the instruction support on Nehalem vs. the 45nm C2Q is not that different.

Core 2 Quad was two Core 2 Duos sharing the Pentium 4-style FSB (at much higher clocks) to connect to the northbridge; at some point that has to be a major bottleneck.

Ah ok, that makes sense, though games like Shadow of Mordor, GTA V, Strange Brigade, and Doom 2016 seem to account for this in some fashion, because performance is still pretty good.
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,092
1,065
136
Good god, a Core2Quad will easily bottleneck even a GTX 460. Though your C2Q is impressive @ 4.1GHz. I have a Q6600 that ran daily @ 3.6GHz. They are ancient and far behind the times today. My only fond memory is that I had 8GB of DDR2 1000MHz memory back in 2008.
 
  • Like
Reactions: lightmanek

loki1944

Member
Apr 23, 2020
99
35
51
Good god, a Core2Quad will easily bottleneck even a GTX 460. Though your C2Q is impressive @ 4.1GHz. I have a Q6600 that ran daily @ 3.6GHz. They are ancient and far behind the times today. My only fond memory is that I had 8GB of DDR2 1000MHz memory back in 2008.

Surprisingly, in quite a few games performance is quite good @1080p... but when it's bad, it's real bad. It can play For Honor on high at great FPS (80s average) relatively smoothly, but even on low, Wildlands is a mess. The more open-world games with detailed environments hit it hardest. Mad Max is the exception, but while large in size, it is relatively sparse environment/NPC-wise.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,226
9,990
126
Are we all forgetting that the Core2Quad does NOT have an IMC, unlike the AMD Phenom / Phenom II, Nehalem, and Sandy Bridge, to name a few more-or-less contemporary CPUs?

Core2Quads were two dual-cores MCM'ed together over the FSB protocol used since the Pentium 4, with the memory controller in the northbridge chipset, along with PCI-E. Two fairly significant bottlenecks.

Edit: Don't get me wrong, they were a pretty large step forward in performance at the time. But they can't hold a candle to, or compete with, modern CPUs/platforms that use IMCs and direct PCI-E links between the CPU and the GPU.
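If anyone wants to put a rough number on that extra hop, a dependent pointer-chase is the classic way to expose raw memory latency. Standalone sketch below, nothing game-specific; the array size is only picked to spill well past any Core 2's L2, and the exact figures will obviously vary by platform:

```cpp
// Rough sketch: dependent pointer-chase to expose raw memory latency. Every load
// depends on the previous one, so the loop time is roughly one memory round trip
// per hop -- through the L2 if the array fits, through the FSB/northbridge if not.
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const std::size_t N = std::size_t{1} << 23;   // ~8M slots (~64 MB), far past any Core 2 L2
    std::vector<std::size_t> order(N);
    std::iota(order.begin(), order.end(), std::size_t{0});
    std::shuffle(order.begin(), order.end(), std::mt19937_64{42});

    // Link the slots into one big random cycle so each hop lands somewhere unpredictable.
    std::vector<std::size_t> next(N);
    for (std::size_t i = 0; i + 1 < N; ++i) next[order[i]] = order[i + 1];
    next[order[N - 1]] = order[0];

    const std::size_t hops = 20000000;
    std::size_t idx = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < hops; ++i)
        idx = next[idx];                           // serial chain: no overlap possible
    auto t1 = std::chrono::steady_clock::now();

    double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / hops;
    std::printf("~%.1f ns per dependent load (final idx %zu)\n", ns, idx);
    return 0;
}
```

Running it side by side on an FSB box and on anything with an IMC makes the difference pretty tangible.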
 

BigDaveX

Senior member
Jun 12, 2014
440
216
116
Intel more than likely made a bunch of compromises in Core 2's design that weren't readily obvious at the time, but which we're really seeing now. If memory serves, a lot of the chip's performance enhancing features only work with 32-bit code, possibly due to it originally being designed as a mobile-only processor and/or Intel having worked out that we'd be on Nehalem or Sandy Bridge by the time 64-bit code really became the standard.

Would be interesting to see a first-gen Phenom benched against a Core 2 with modern benchmarks. It wouldn't surprise me if it turned out that K10 really was the better (or at least more future-proof) design compared to Conroe all along, but got screwed over by AMD's 65nm process, along with blunders like the TLB bug.
 
  • Like
Reactions: loki1944

NTMBK

Lifer
Nov 14, 2011
10,208
4,940
136
Intel more than likely made a bunch of compromises in Core 2's design that weren't readily obvious at the time, but which we're really seeing now. If memory serves, a lot of the chip's performance enhancing features only work with 32-bit code, possibly due to it originally being designed as a mobile-only processor and/or Intel having worked out that we'd be on Nehalem or Sandy Bridge by the time 64-bit code really became the standard.

Would be interesting to see a first-gen Phenom benched against a Core 2 with modern benchmarks. It wouldn't surprise me if it turned out that K10 really was the better (or at least more future-proof) design compared to Conroe all along, but got screwed over by AMD's 65nm process, along with blunders like the TLB bug.

Phenom was also stuck with a tiny L3 cache. I think that would be a big limiting factor today. Phenom II had triple the L3 (along with working TLB!), and would probably hold up much better.
 

BigDaveX

Senior member
Jun 12, 2014
440
216
116
Phenom was also stuck with a tiny L3 cache. I think that would be a big limiting factor today. Phenom II had triple the L3 (along with working TLB!), and would probably hold up much better.
True, but Phenom I has an integrated memory controller, which Conroe lacks. I don't doubt that a Phenom I would lose to Penryn, which had a larger L2 cache and various other improvements, but I think a contest with Conroe would be a lot more interesting.
 
  • Like
Reactions: loki1944

loki1944

Member
Apr 23, 2020
99
35
51
The other thing I'm noticing so far is some interesting bottlenecking between my GTX 780 3GB and my GTX 1060 6GB; the 780 is performing as well as or better than the 1060 6GB in most cases.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,691
136
Are we all forgetting that the Core2Quad does NOT have an IMC, unlike the AMD Phenom / Phenom II, Nehalem, and Sandy Bridge, to name a few more-or-less contemporary CPUs?

Core2Quads were two dual-cores MCM'ed together over the FSB protocol used since the Pentium 4, with the memory controller in the northbridge chipset, along with PCI-E. Two fairly significant bottlenecks.

Edit: Don't get me wrong, they were a pretty large step forward in performance at the time. But they can't hold a candle to, or compete with, modern CPUs/platforms that use IMCs and direct PCI-E links between the CPU and the GPU.

The C2D/Qs use a rather large amount of on-chip cache (for their time) to hide their memory latency, and then rely on prefetchers to put the most-used data into that cache. Coupled with the MCM/FSB design, it really shows today.

I'm still amazed my 3600 has 32MB of L3 cache. That's more than my first PC had for main memory.
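Rough illustration of how much those prefetchers are covering for: the same summation done in sequential order (prefetcher-friendly) and then in shuffled order (which defeats them). Just a sketch, with the array size picked arbitrarily to spill past any Core 2 L2:

```cpp
// Rough sketch: identical work, two access orders. Sequential order lets the
// hardware prefetchers stream data ahead of the loop; shuffled order defeats
// them and every miss pays the full round trip to memory.
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

static double time_sum(const std::vector<int>& data, const std::vector<std::uint32_t>& idx) {
    auto t0 = std::chrono::steady_clock::now();
    long long sum = 0;
    for (std::uint32_t i : idx) sum += data[i];
    auto t1 = std::chrono::steady_clock::now();
    std::printf("(sum=%lld) ", sum);               // keep the result live
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    const std::size_t N = std::size_t{1} << 25;    // ~128 MB of ints, far beyond any L2
    std::vector<int> data(N, 1);
    std::vector<std::uint32_t> idx(N);
    std::iota(idx.begin(), idx.end(), 0u);

    std::printf("sequential: %.1f ms\n", time_sum(data, idx));
    std::shuffle(idx.begin(), idx.end(), std::mt19937{7});
    std::printf("shuffled:   %.1f ms\n", time_sum(data, idx));
    return 0;
}
```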
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,717
7,014
136
As others have indicated, I suspect the dual dual-core design of C2Q processors and the lack of an IMC really kill it in some games.

Tons of L2 cache then is not necessarily a ton of L2 cache now, and if a game's thread has to communicate with another core on the opposite die, the data has to go all the way out over a (now) very slow FSB and back again.

I suspect some games are programmed in such a way that inter-core communication is not a limiting factor, while in other games it is.
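If anyone wants to poke at that theory, confining the game to one die is easy to try; here's a minimal Windows sketch (Task Manager's "Set affinity" does the same thing without any code). Whether it helps of course depends on whether the game actually needed all four cores in the first place:

```cpp
// Sketch: confine the current process (and any game launched from it) to logical
// cores 0 and 1 so its threads share one Core 2 die and one L2, with no cross-die
// trips over the FSB. The 0/1-on-one-die mapping is an assumption -- it matches the
// usual Kentsfield/Yorkfield enumeration, but verify it on your own board.
#include <windows.h>
#include <cstdio>

int main() {
    DWORD_PTR mask = 0x3;  // bit 0 and bit 1 -> logical CPUs 0 and 1
    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        std::printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    std::printf("Affinity set to cores 0-1; a child process launched from here inherits it.\n");
    return 0;
}
```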
 

Hi-Fi Man

Senior member
Oct 19, 2013
601
120
106
Honestly, I'm not sure it's the lack of an IMC. If we look at K10, its performance in games is, I believe, similar to Core 2's, if not slightly slower. K10 also lacks SSSE3 and SSE4.1, which has prevented some more recent games from running at all because they take advantage of SSSE3 and sometimes SSE4.1. Since K10 lacks these instructions, the fallback has usually been SSE2, which I think somewhat negates the IMC benefits. In addition, the cache (especially the L3) is relatively slow on K10. Core 2 also has its massive cache (still large today if we look at it per core), and it's the same latency as K10's much smaller L2.

Someone ought to do some head-to-head game benchmarks with K10 and see. The last time I tested, Core 2 was still ahead, but that was at least 5 years ago.
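To make the fallback concrete, here's an illustrative side-by-side (not from any actual engine) of the same per-byte select: the single SSE4.1 instruction Penryn gets versus the SSE2 sequence a K10 would be handed:

```cpp
// Illustrative only (not from any real engine): the same per-byte select done with
// one SSE4.1 instruction and with the three-instruction SSE2 sequence a K10 (no
// SSSE3/SSE4.1) would be handed. Mask bytes must be 0x00 or 0xFF for both to agree.
// Build with -msse4.1 on a CPU that has it.
#include <emmintrin.h>   // SSE2
#include <smmintrin.h>   // SSE4.1
#include <cstdint>
#include <cstdio>
#include <cstring>

static __m128i select_sse41(__m128i a, __m128i b, __m128i mask) {
    return _mm_blendv_epi8(a, b, mask);  // byte from b where the mask byte's MSB is set, else from a
}

static __m128i select_sse2(__m128i a, __m128i b, __m128i mask) {
    return _mm_or_si128(_mm_and_si128(mask, b), _mm_andnot_si128(mask, a));
}

int main() {
    alignas(16) std::uint8_t av[16], bv[16], mv[16], r1[16], r2[16];
    for (int i = 0; i < 16; ++i) {
        av[i] = std::uint8_t(i);
        bv[i] = std::uint8_t(100 + i);
        mv[i] = (i & 1) ? 0xFF : 0x00;
    }

    __m128i a = _mm_load_si128(reinterpret_cast<const __m128i*>(av));
    __m128i b = _mm_load_si128(reinterpret_cast<const __m128i*>(bv));
    __m128i m = _mm_load_si128(reinterpret_cast<const __m128i*>(mv));

    _mm_store_si128(reinterpret_cast<__m128i*>(r1), select_sse41(a, b, m));
    _mm_store_si128(reinterpret_cast<__m128i*>(r2), select_sse2(a, b, m));
    std::printf("results match: %s\n", std::memcmp(r1, r2, 16) == 0 ? "yes" : "no");
    return 0;
}
```

In a real engine this choice is usually made once at startup through function pointers, so a K10 spends the whole game on the slower path, or, if the developer didn't bother writing one, fails to launch.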
 

loki1944

Member
Apr 23, 2020
99
35
51
Honestly, I'm not sure it's the lack of an IMC. If we look at K10, its performance in games is, I believe, similar to Core 2's, if not slightly slower. K10 also lacks SSSE3 and SSE4.1, which has prevented some more recent games from running at all because they take advantage of SSSE3 and sometimes SSE4.1. Since K10 lacks these instructions, the fallback has usually been SSE2, which I think somewhat negates the IMC benefits. In addition, the cache (especially the L3) is relatively slow on K10. Core 2 also has its massive cache (still large today if we look at it per core), and it's the same latency as K10's much smaller L2.

Someone ought to do some head-to-head game benchmarks with K10 and see. The last time I tested, Core 2 was still ahead, but that was at least 5 years ago.

I would accept your challenge, but my wife might assassinate me if I buy any more computer hardware this year.
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
I did briefly have a pair of systems at home from this generation: a Phenom II X6 1090T and a C2Q Q8400. In day-to-day tasks there really wasn't much of a difference. In most games, though, the Phenom was noticeably faster, even though both had the same video cards. Now, I don't know if swapping the Q8400 for a Q9550 would have made any difference; the Q9550 was both faster and had three times as much cache.
 
  • Like
Reactions: loki1944

loki1944

Member
Apr 23, 2020
99
35
51
I did briefly have a pair of systems at home from this generation: a Phenom II X6 1090T and a C2Q Q8400. In day-to-day tasks there really wasn't much of a difference. In most games, though, the Phenom was noticeably faster, even though both had the same video cards. Now, I don't know if swapping the Q8400 for a Q9550 would have made any difference; the Q9550 was both faster and had three times as much cache.

I did some testing with and without the O/C (3GHz vs 4.1GHz) with a 1050Ti on the Q9650, and the difference in performance has mostly been very minor in games so far; I haven't compared something more relevant to its release era, like Crysis 1, but I plan to. GPU pairing seems like a mixed bag/wild card in terms of performance; i.e. in some cases a 1060 6GB provides a clear FPS increase, but in others even a GTX 780 outperforms it. I do have a Q6600 as well, so I might try that one next to see how big the difference is. I also strongly recommend no one ever buy an XFX 790i SLI board for 775; it is a nightmare.
 
Last edited:

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
28,274
19,921
146
Honestly I'm not sure it's the lack of IMC. If we look at K10, the performance is similar I believe to Core 2 in games if not slightly slower. K10 also lacks SSSE3 and SSE4.1 which has prevented some more recent games from even running at all because they take advantage of SSSE3 and sometimes SSE4.1. Since K10 lacks these instructions the fallback has usually been SSE2 which I think somewhat negates the IMC benefits. In addition, the cache (especially L3) is relatively slow on K10. Core 2 also has this massive cache (it's still large today if we look at it per core) and it's the same latency as K10's L2 which is much smaller.

Someone ought to do some game benchmarks head to head with K10 and see. Last time I tested, Core 2 was still ahead but that was at least 5 years ago.
You can get most of those games to run by using the Intel emulator: https://software.intel.com/en-us/articles/intel-software-development-emulator
 

loki1944

Member
Apr 23, 2020
99
35
51
So, from my foxhole, this is what I'm seeing so far performance-wise:

Older Games:

[Benchmark charts attached: Crysis Assault, Clear Sky, CoP DX10, CoP DX11, Far Cry 2, Shogun 2]

And for newer games:

[Benchmark charts attached: Shadow of Mordor, GTA V, Hitman 2016, Shadow of War]
 
Last edited:

loki1944

Member
Apr 23, 2020
99
35
51
Newer Games continued:
[Benchmark charts attached: Strange Brigade, Road Redemption, 3K Total War, TW3, Vermintide 2, Borderlands 3 DX11]
 
Last edited:
  • Like
Reactions: DAPUNISHER

loki1944

Member
Apr 23, 2020
99
35
51
As far as open-world games go: Shadow of Mordor and Shadow of War seem to scale pretty well; GTA V and The Witcher 3 are so-so; Borderlands 3 is OK; and Hitman 2016 (and Hitman 2) is really almost unplayable, with aggressive hitching.

All the games that hit a wall pretty much do so in the same manner regardless of GPU on the Q9650: some stuttering and FPS hovering around the mid to lower 30s. In the case of Borderlands 3, while playable, note that the driver-unsupported GTX 580 is barely behind the GTX 1050Ti in DX11.

Do graphics drivers include CPU optimizations as well, or is that on the game developer for the most part?
 
Last edited:

Hi-Fi Man

Senior member
Oct 19, 2013
601
120
106
Please fix the scaling on those graphs; in particular, Vermintide 2 and Hitman 2 are very misleading. Some of these graphs can lead the viewer to get the wrong idea when in fact the performance is the same (within the margin of error).

The CPU is bottlenecking pretty hard in some newer games, but performance is still above consoles. If possible, a frame time graph would be nice.
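If the runs can dump per-frame times to a file, even a tiny tool like the sketch below turns them into the numbers that matter (average plus 1% / 0.1% lows). The single-column input format is an assumption; a real FRAPS/PresentMon CSV needs the frame-time column pulled out first:

```cpp
// Sketch: read one frame time in ms per line, print average FPS and 1% / 0.1% lows.
// Single-column input is an assumption; a real FRAPS/PresentMon CSV needs the
// frame-time column extracted first.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <fstream>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s frametimes.txt\n", argv[0]); return 1; }
    std::ifstream in(argv[1]);
    std::vector<double> ms;
    for (double v; in >> v;) ms.push_back(v);
    if (ms.empty()) { std::fprintf(stderr, "no samples\n"); return 1; }

    double total = 0;
    for (double v : ms) total += v;
    std::sort(ms.begin(), ms.end());                 // ascending: slowest frames at the back

    auto low_fps = [&](double frac) {
        // "1% low" style metric: average FPS over the slowest `frac` of all frames.
        std::size_t n = std::max<std::size_t>(1, static_cast<std::size_t>(ms.size() * frac));
        double worst = 0;
        for (std::size_t i = ms.size() - n; i < ms.size(); ++i) worst += ms[i];
        return 1000.0 * n / worst;
    };

    std::printf("average FPS: %.1f\n", 1000.0 * ms.size() / total);
    std::printf("1%% low:     %.1f\n", low_fps(0.01));
    std::printf("0.1%% low:   %.1f\n", low_fps(0.001));
    return 0;
}
```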