Crysis CPU performance: E6850 vs QX6850 vs QX9650 vs Phenom X4

harpoon84

Golden Member
Jul 16, 2006
1,084
0
0
OK, good news first - Phenom overclocks to 3GHz on the B2 stepping - yay! :)

Now to the bad news - Performance is poor, 5 - 10% slower than C2D/C2Q. :(

http://news.expreview.com/2007...9/1193590532d6599.html
http://www.expreview.com/img/news/071028/score.png

Specs:
C2D/C2Q using P35 mobo/ 8800GTX / WinXP / Forceware 169.01
Phenom X4 using RD790 mobo/ 8800GTX / WinXP / Forceware 169.01

I have to say this is pretty disappointing, having a heavily overclocked Phenom fail to match a stock C2D/C2Q. Considering Penryns are routinely overclocking to 4GHz+ on air... things ain't looking too pretty for AMD.

Also interesting (and somewhat disappointing) is that Penryn shows almost NO performance gain over Conroe/Kentsfield. Quad core is also showing negligible gains over dual core, but apparently the Crysis SP demo is not multithreaded?
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
Originally posted by: harpoon84
Also interesting (and somewhat disappointing) is that Penryn shows almost NO performance gain over Conroe/Kentsfield.

That doesn't surprise me, it's just a die shrink, but with SSE4 thrown in for all of those video people.

Quad core is also showing negligible gains over dual core, but apparently the Crysis SP demo is not multithreaded?

Seeing who's releasing it, I wasn't really holding my breath about it being quad-enabled. Maybe the retail version will be, after the 5th patch.;)
 

harpoon84

Golden Member
Jul 16, 2006
1,084
0
0
Originally posted by: myocardia
That doesn't surprise me, it's just a die shrink, but with SSE4 thrown in for all of those video people.

In all other games Penryn has shown a ~5% gain, sometimes even 10% - 20%, so yeah, I was expecting SOME improvement!

Seeing who's releasing it, I wasn't really holding my breath about it being quad-enabled. Maybe the retail version will be, after the 5th patch.;)

LOL! I just find it odd that after all the hype about Crysis and multithreading, and all the hoopla about QC, it ends up having almost no performance benefit whatsoever.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I think it's possible the lack of improvement is because the version tested might not be the final one yet. I thought I heard something like that, but correct me if I'm wrong.

Since UT3 gets decent improvement from single to dual to quad, I can't see why a more advanced game like Crysis can't....
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
Well, I think one of the reasons is probably nothing more than the game being GPU-limited at in-game settings plus AA. I mean, I couldn't read that article, since it was in Chinese, but that's what a lot of sites have been doing lately, including this one.

edit: Of course, this site started it, but a lot of other sites have started doing the same thing.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
A 10% performance lag for Phenom in this game means much more than 10% elsewhere. As anyone knows, barely any application scales close to 100% with a clock speed increase. This puts Phenom in the E6600 range (if Crysis doesn't benefit from quad core), or a Q6600. And the ~20% advantage over Opteron in games suggested by the first preliminary review is basically misleading, because that Opteron was dual socket and loses performance in games. Phenom is looking very unimpressive.

This is why the much talked about 3.33GHz Yorkfield/Wolfdale is nowhere to be seen and the clock speed is 10% off. Power consumption also indicates Intel might be able to release a higher clocked part that actually lives up to the 130W TDP.
 

bradley

Diamond Member
Jan 9, 2000
3,671
2
81
You can tell all of that from some half-baked demo? Where are the authentic benchmarks, like 3dmark06 for instance? The bus and memory playing field isn't even completely level in these tests. Who's to say these results are even genuine?
 

manowar821

Diamond Member
Mar 1, 2007
6,063
0
0
Okay, you have to take into account the Phenom is going to be far cheaper than that QX9650, and Phenom will be available in factory clocked versions of 2.8-3.0 and higher, very early 2008.

Also, the FPS difference between the two chips isn't that big, and imagine a stock 3.0ghz x4 running at 3.6ghz... For much cheaper than the QX9650. Yeah.
 

harpoon84

Golden Member
Jul 16, 2006
1,084
0
0
Originally posted by: manowar821
Okay, you have to take into account the Phenom is going to be far cheaper than that QX9650, and Phenom will be available in factory clocked versions of 2.8-3.0 and higher, very early 2008.

Also, the FPS difference between the two chips isn't that big, and imagine a stock 3.0ghz x4 running at 3.6ghz... For much cheaper than the QX9650. Yeah.

You're kidding right?

There are also far cheaper Intel quads, such as the $266 Q6600, which routinely overclocks to 3.5GHz+ - what is your point?

The upcoming Q9450 will most likely hit 4GHz too, mobo permitting.

I'd also like to know where you heard Phenom was being released at 2.8 - 3GHz 'and higher' in 'very early 2008'. So you're saying we'll see 3GHz+ Phenoms by Jan 2008? Err... I don't think so! It took them over 1.5V to hit 3GHz on the B2 stepping, so unless you are talking a miracle stepping not dissimilar to the one that got Charlie from TheINQ 'dancing in the aisles' it just ain't gonna happen. :p

As for a stock 3GHz X4 overclocking to 3.6GHz, hey we can always dream. ;)
 

Lonyo

Lifer
Aug 10, 2002
21,939
6
81
The Crysis demo does work with dual cores.
When I ran it, it had my CPU at about 75% use (X2 4200+), with about 75% load on each core (neither was maxed out, oddly enough).
Definitely 2 threads.
 

harpoon84

Golden Member
Jul 16, 2006
1,084
0
0
Originally posted by: Lonyo
The Crysis demo does work with dual cores.
When I ran it, it had my CPU at about 75% use (X2 4200+) with about 75% load on each core (neither was maxxed out oddly enough).
Definitely 2 threads.

Yet on QC systems only 1 core is (almost) fully loaded while the rest sit virtually idle at ~5% usage. Doesn't smack of great core scaling to me! Hopefully it's just the demo though.
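The load patterns described above can be sketched as a toy check: count how many cores a game actually keeps busy. The sample figures are the ones reported in this thread (Lonyo's ~75% per core on a dual core, and the quad-core pattern described just above), not new measurements, and the 50% threshold is an arbitrary assumption:

```python
# Toy check: given average per-core load samples, estimate how many
# worker threads a game keeps busy. Sample numbers are from this thread.

def busy_cores(per_core_load, threshold=50.0):
    """Count cores whose average load exceeds the threshold (percent)."""
    return sum(1 for load in per_core_load if load > threshold)

dual_core_sample = [75.0, 75.0]           # X2 4200+ while running the demo
quad_core_sample = [95.0, 5.0, 5.0, 5.0]  # the QC pattern described above

print(busy_cores(dual_core_sample))  # 2 -> at least two active threads
print(busy_cores(quad_core_sample))  # 1 -> effectively single-threaded
```

On the quad-core sample this flags only one busy core, which is exactly why the demo looks dual-threaded at best.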
 

bradley

Diamond Member
Jan 9, 2000
3,671
2
81
The E6850 *is* a dual core, and yet the difference between the quads is negligible.
 

zach0624

Senior member
Jul 13, 2007
535
0
0
Originally posted by: bradley
You can tell all of that from some half-baked demo? Where are the authentic benchmarks, like 3dmark06 for instance? The bus and memory playing field isn't even completely level in these tests. Who's to say these results are even genuine?

I have to agree with bradley here. We are looking at one demo for a game that isn't released yet, with no 3DMark/Super Pi numbers, and while not quite as noticeable, it would have been better if they had leveled out the memory across the platforms.
 

BitByBit

Senior member
Jan 2, 2005
473
2
81
Originally posted by: IntelUser2000
10% performance lag for Phenom in this game means much more than 10% elsewhere. As anybody would know, barely any applications scale close to 100% performance with clock speed increase.

How does performance scaling mean that a 10% lag here translates to much more than 10% elsewhere? Both Intel and AMD are subject to the same scaling, and a 10% lag at frequency X will usually mean 10% at frequency Y.

This puts Phenom in the E6600 range(if Crysis doesn't benefit from quad core), or a Q6600. From the first preliminary review that indicated that 20% performance advantage over Opteron in games is basically misleading information because Opteron is dual socket and loses performance in games, Phenom is looking very unimpressive.

Firstly, your reasoning is a little off. CPU performance scaling does not mean that a processor performing 10% slower than another at 3.0GHz is actually performing like the latter would at 2.4GHz. 3 * 0.9 = 2.7. Secondly, a 3.0GHz K8 can keep up with an E6600 most of the time, so you also implied no improvement per clock.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
How does performance scaling mean a 10% lag here means much more than 10% elsewhere? Both Intel and AMD are subject to such scaling, and a 10% lag at frequency X will usually mean 10% at Y.

Because Crysis will be the most demanding game around at the time of its release, and will therefore be more GPU bound than anything else.

Firstly, your reasoning is a little off. CPU performance scaling does not mean that a processor performing 10% slower than another at 3.0GHz is actually performing like the latter would at 2.4GHz. 3 * 0.9 = 2.7. Secondly, a 3.0GHz K8 can keep up with an E6600 most of the time, so you also implied no improvement per clock.

I never implied a 10% performance difference would equal 3.0 x 0.9 = 2.7 (to nitpick, it's actually 3.0/1.1 = 2.73).

Did you miss my sentence that said: "As anybody would know, barely any applications scale close to 100% performance with clock speed increase."

So at least in Crysis, if we assume its scaling rate with clock speed is 60%, then the clock speed equivalent of that 10% gap is a 16.67% difference, implying a 3.0GHz Phenom is like a 2.57GHz Core 2.

Looking here: http://www.anandtech.com/cpuch...owdoc.aspx?i=3038&p=15

The average scaling of games with a 100% clock speed increase (comparing the E6850 and E6550) ranges from 13% to 88%, with most games standing at around 50%.
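The clock-equivalence argument above can be written out explicitly. This is just a sketch of the arithmetic with the thread's own assumed numbers (10% observed gap, 60% scaling rate), not a claim about actual Phenom performance:

```python
# Sketch of the scaling argument: if a game converts only a fraction of a
# clock speed increase into extra FPS, an observed performance gap implies
# a proportionally larger effective clock gap.

def equivalent_clock_gap(perf_gap, scaling_rate):
    """Clock speed deficit implied by a performance gap, given how much
    of a clock increase the application actually turns into performance."""
    return perf_gap / scaling_rate

gap = equivalent_clock_gap(0.10, 0.60)  # 10% slower, 60% scaling
phenom_equiv_ghz = 3.0 / (1 + gap)      # Core 2 clock a 3.0GHz Phenom matches

print(f"effective clock gap: {gap:.2%}")       # 16.67%
print(f"Core 2 equivalent: {phenom_equiv_ghz:.2f} GHz")  # 2.57 GHz
```

That reproduces the 16.67% / 2.57GHz figures quoted above; with 100% scaling the same 10% gap would instead give 3.0/1.1 = 2.73GHz, matching the earlier nitpick.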

Secondly, a 3.0GHz K8 can keep up with an E6600 most of the time, so you also implied no improvement per clock.

Yeah, it sounds very unlikely, but it doesn't seem like Phenom will be outperforming by more than 10% per clock either.

In the preliminary benchmarks, Anandtech compared a single socket Barcelona to a dual socket Opteron and assumed the differences are what Phenom brings over the X2. But benchmarks like these:
http://www.extremetech.com/art.../0,1697,1813755,00.asp
http://www.anandtech.com/cpuch...howdoc.aspx?i=3092&p=5

show that dual socket setups lose performance compared to a single socket, meaning the difference between Phenom and X2 isn't as great as shown.
 

n7

Elite Member
Jan 4, 2004
21,303
4
81
I can't get that site to load, but I am in no way surprised that Phenom loses to the current quads or even Penryn.

AMD is pretty much fucked when it comes to winning CPU/GPU performance wars right now.

It looks like for the next while they'll be a nice mid-range general user option.

For us enthusiasts, AMD = no go for the next while is my guess.
 

BitByBit

Senior member
Jan 2, 2005
473
2
81
Originally posted by: IntelUser2000

Because Crysis is currently the most demanding game there will be at the time of its release and therefore will be more GPU bound than anything else.

Gaming benchmarks aren't really the most reliable method of predicting performance. GPU bound or not, 10% (7% if you compare the average scores) here won't necessarily mean much more than 10% elsewhere.

I never implied 10% performance difference would equal to 3.0x0.9=2.7(to nitpick its actually 3.0/1.1=2.73).

You implied scaling meant that the 3.0GHz Phenom actually performed like an E6600.

Did you miss my sentence that said: "As anybody would know, barely any applications scale close to 100% performance with clock speed increase."

So at least in Crysis, if we take an assumption that its scaling rate with clock speed is 60%, then the clock speed equivalent will be 16.67% difference, implying Phenom 3.0 is like a 2.57GHz Core 2.

Again, gaming benchmarks aren't reliable enough to draw accurate conclusions about performance. Attempting to extrapolate the performance of a processor at different frequencies by guessing its 'scaling rate' is not going to produce accurate results either. And where on earth did you get 16.67% from?


On the preliminary benchmarks Anandtech compared single socket Barcelona to a dual socket Opteron and assumed differences are what Phenom bring to X2. But benchmarks like these:
http://www.extremetech.com/art.../0,1697,1813755,00.asp
http://www.anandtech.com/cpuch...howdoc.aspx?i=3092&p=5

show that dual sockets lose performance over a single socket, meaning the differences between Phenom and X2 isn't great as shown.

What I find puzzling is the huge disparity between desktop and server performance. In these results, Barcelona beats Xeon clock-for-clock in nearly every benchmark.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
Originally posted by: BitByBit
Originally posted by: IntelUser2000

Because Crysis is currently the most demanding game there will be at the time of its release and therefore will be more GPU bound than anything else.

Gaming benchmarks aren't really the most reliable method of predicting performance. GPU bound or not, 10% (7% if you compare the average scores) here won't necessarily mean much more than 10% elsewhere.

I never implied 10% performance difference would equal to 3.0x0.9=2.7(to nitpick its actually 3.0/1.1=2.73).

You implied scaling meant that the 3.0GHz Phenom actually performed like an E6600.

Did you miss my sentence that said: "As anybody would know, barely any applications scale close to 100% performance with clock speed increase."

So at least in Crysis, if we take an assumption that its scaling rate with clock speed is 60%, then the clock speed equivalent will be 16.67% difference, implying Phenom 3.0 is like a 2.57GHz Core 2.

Again, gaming benchmarks aren't reliable enough to draw accurate conclusions about performance. Attempting to extrapolate the performance of a processor at different frequencies by guessing its 'scaling rate' is not going to produce accurate results either. And where on earth did you get 16.67% from?


On the preliminary benchmarks Anandtech compared single socket Barcelona to a dual socket Opteron and assumed differences are what Phenom bring to X2. But benchmarks like these:
http://www.extremetech.com/art.../0,1697,1813755,00.asp
http://www.anandtech.com/cpuch...howdoc.aspx?i=3092&p=5

show that dual sockets lose performance over a single socket, meaning the differences between Phenom and X2 isn't great as shown.

What I find puzzling is the huge disparity between desktop and server performance. In these results, Barcelona beats Xeon clock-for-clock in nearly every benchmark.

Nothing puzzling about it. The Xeon platform was set up with FB-DIMMs @ 667 CL5, which means many benches suffer horribly. Running DDR2-800 or higher with 1T and moderately aggressive timings = massive performance boost, especially if you compare something like DDR3-1600 to FB-DIMM 667. When the new Xeons hit with improved bus/memory/chipset, you'll probably see a 1:1 performance ratio in server apps. On the desktop, however, Phenom will be *lucky* to ever see 1:1 vs. Intel, unless something huge changes.
 

BitByBit

Senior member
Jan 2, 2005
473
2
81
Do you really think Intel would implement a memory technology that caused performance to suffer horribly? A quick Google search reveals FB-DIMMs aren't quite the performance killer they have been portrayed as.

"Looking over the results from our tests performed, FB-DIMM memory is certainly capable of being the Speedy Gonzales of system memory. In fact, in many of the RAMspeed benchmarks, the FB-DIMM DDR2-533 was pretty much performing the same as non-ECC non-Registered DDR2-667. "

Source

The Barcelona system was also running CL5 DDR2-667, the only difference being the use of FB DIMMs.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
The AMD processors, with their integrated memory controller, are much less sensitive to memory performance than Intel's. Run a C2D at 3GHz with 5-5-5-15 2T DDR2-667, then run the same processor at the same clock with 4-4-4-12 DDR2-800 1T, and you'll see a 5-15% performance delta, depending on the app.
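The timing comparison above can be put in absolute terms: CAS latency in cycles only means something once you divide by the memory clock (half the DDR data rate). A quick back-of-envelope sketch using the two configurations mentioned:

```python
# Convert CAS latency from clock cycles to nanoseconds for DDR memory.
# The DDR data rate is twice the actual memory clock.

def cas_ns(cas_cycles, ddr_rate_mt_s):
    """CAS latency in nanoseconds for a DDR module."""
    clock_mhz = ddr_rate_mt_s / 2          # e.g. DDR2-800 -> 400MHz clock
    return cas_cycles / clock_mhz * 1000   # cycles / MHz -> ns

print(f"DDR2-667 CL5: {cas_ns(5, 667):.1f} ns")  # ~15.0 ns
print(f"DDR2-800 CL4: {cas_ns(4, 800):.1f} ns")  # 10.0 ns
```

So the 4-4-4-12 DDR2-800 setup has roughly a third lower first-word latency than 5-5-5-15 DDR2-667, which is where much of that 5-15% delta on Core 2 comes from.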
 

lopri

Elite Member
Jul 27, 2002
13,209
594
126
Originally posted by: Arkaign
The AMD processors, with their integrated memory controller, are much less sensitive to memory performance than Intel's. Run a C2D at 3GHz with 5-5-5-15 2T DDR2-667, then run the same processor at the same clock with 4-4-4-12 DDR2-800 1T, and you'll see a 5-15% performance delta, depending on the app.
That is way too broad a generalization. I tend to think of the L2 cache of Core 2 as a 'scaling' IMC on the A64. The bigger the L2, the smaller the impact of memory on the Core 2 platform. I.e., if you compare the impact of memory on overall performance, it's not likely that a Q9650 with its gargantuan L2 will see the kind of boost that an E2140 will get from a better memory configuration.