CPU bound in Mass Effect, E8400 @ 3.6GHz


AzN

Banned
Originally posted by: taltamir
http://www.xbitlabs.com/articl...on-hd4800-games_7.html

Same review... different game...

Obviously in most games the video card is the main limitation (which, btw, does not mean the CPU isn't being taxed).
But in some games the CPU begins to be the limiting factor.

I already agreed that it is not really CPU bound, but rather a balanced case where both are limiting factors, as it alternates between 100% CPU with lower GPU and 100% GPU with lower CPU...

But "any dual core is plenty for any game" is long since not true.


Also... anything over 60FPS is MONITOR bound in my book. So it's pointless to test further.

All GPUs are like that though. You made it seem like the 4850 was something special that needed a faster CPU than X video card. Fillrate and CPU feed each other. There is a point where the benefit starts diminishing and leveling off.

A modestly clocked dual core is plenty for any game currently. Quad cores are not even in the equation in the majority of games out today, and even if a game is quad-optimized like the Unreal engine, the difference isn't much.

BTW, that Lost Planet is just a bad benchmark. It doesn't mean it's CPU bound, especially with DX10 and some stupid shadow setting that tanks cards.
 

chizow

Diamond Member
Originally posted by: Azn
Yup. Where it's not GPU bound at such a low resolution. Last-gen parts, as in the 8800 Ultra, which performs very similar to the 4850?
No, whatever card they tested at 1920 is clearly GPU bottlenecked; that was the point of the test: to show there was still a difference between the AMD and Intel CPUs even in a GPU-bottlenecked situation. I think FPS being cut in half from the previous CPU bottlenecking test would've demonstrated that pretty clearly. You linked some bench that doesn't even substantiate what you said, let alone have anything to do with my comment about being CPU bottlenecked with current high-end solutions.

But we are not talking about a high-end part but a mainstream 4850 card. Don't even bother linking me to 2-year-old games on a GTX 280 @ 1920x1200, because we are talking about the 4850's level of performance, where it becomes more limited than a GTX 280 at that resolution.
You might be talking about a mainstream card, but I clearly was not. I referred to high-end and multi-GPU solutions, which does impact mainstream cards when they run in multi-GPU configs. In UE3 games there is clearly CPU bottlenecking even at 1920 with slower CPUs, and consistent scaling with faster CPUs up to 4GHz. I wouldn't bother linking you any benchmarks anyway, due to the high probability you would misinterpret them.
 

taltamir

Lifer
Originally posted by: Azn
Originally posted by: taltamir
http://www.xbitlabs.com/articl...on-hd4800-games_7.html

Same review... different game...

Obviously in most games the video card is the main limitation (which, btw, does not mean the CPU isn't being taxed).
But in some games the CPU begins to be the limiting factor.

I already agreed that it is not really CPU bound, but rather a balanced case where both are limiting factors, as it alternates between 100% CPU with lower GPU and 100% GPU with lower CPU...

But "any dual core is plenty for any game" is long since not true.


Also... anything over 60FPS is MONITOR bound in my book. So it's pointless to test further.

All GPUs are like that though. You made it seem like the 4850 was something special that needed a faster CPU than X video card. Fillrate and CPU feed each other. There is a point where the benefit starts diminishing and leveling off.

A modestly clocked dual core is plenty for any game currently. Quad cores are not even in the equation in the majority of games out today, and even if a game is quad-optimized like the Unreal engine, the difference isn't much.

Oh... what I meant is that modern video cards... the 8800 series and above, and the 4850 and above, need more than simply a "dual core"... moreover... I meant to show that even a "lowly" 4850 is having CPU power trouble with an overclocked E8400 in Mass Effect specifically... a CPU which is supposedly ubergodly for gaming and not even close to being maxed out...

I also wish to find out if there are more games out there like Mass Effect where CPU power is such an issue, or if it is an exception to the rule.

Mass Effect could use a 3+GHz quad core.

Any other game we know of that can benefit from that much CPU power?
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Akabeth
Originally posted by: chizow

I could link about a dozen reviews done at 3GHz on today's high-end parts and CF/SLI solutions, even at 1920, that show CPU bottlenecking, but frankly, they're quite abundant. It's going to take some time to get used to, but a 3GHz C2D isn't enough to guarantee you're not CPU bottlenecked anymore with the current high-end and multi-GPU solutions out there now.

I'd say Skulltrail is not so useless anymore, eh?

This bottlenecking issue is covered quite a bit on the XtremeSystems forums.

I don't think Skulltrail is the answer, not for gaming anyways. From the reviews I've seen it's considerably slower than a fast quad-core setup, probably due to the use of FB-DIMMs. Games don't make use of the extra CPU or more than 4 cores, so you're generally better off with faster cores rather than more cores. I don't think we'll really see improvement until Nehalem, which will bring an integrated memory controller, higher clock speeds and more cache. I think clock for clock it was 20% faster than Core 2. The other avenue would be for devs to make better use of multithreading, but they've been very slow on that front.
 

taltamir

Lifer
Originally posted by: chizow
Originally posted by: Akabeth
Originally posted by: chizow

I could link about a dozen reviews done at 3GHz on today's high-end parts and CF/SLI solutions, even at 1920, that show CPU bottlenecking, but frankly, they're quite abundant. It's going to take some time to get used to, but a 3GHz C2D isn't enough to guarantee you're not CPU bottlenecked anymore with the current high-end and multi-GPU solutions out there now.

I'd say Skulltrail is not so useless anymore, eh?

This bottlenecking issue is covered quite a bit on the XtremeSystems forums.

I don't think Skulltrail is the answer, not for gaming anyways. From the reviews I've seen it's considerably slower than a fast quad-core setup, probably due to the use of FB-DIMMs. Games don't make use of the extra CPU or more than 4 cores, so you're generally better off with faster cores rather than more cores. I don't think we'll really see improvement until Nehalem, which will bring an integrated memory controller, higher clock speeds and more cache. I think clock for clock it was 20% faster than Core 2. The other avenue would be for devs to make better use of multithreading, but they've been very slow on that front.

QFT on all counts.

Although technically, if devs start writing better multithreaded apps, then they could scale to Skulltrail as well and make it useful...
 

AzN

Banned
Originally posted by: chizow
Originally posted by: Azn
Yup. Where it's not GPU bound at such a low resolution. Last-gen parts, as in the 8800 Ultra, which performs very similar to the 4850?
No, whatever card they tested at 1920 is clearly GPU bottlenecked; that was the point of the test: to show there was still a difference between the AMD and Intel CPUs even in a GPU-bottlenecked situation. I think FPS being cut in half from the previous CPU bottlenecking test would've demonstrated that pretty clearly. You linked some bench that doesn't even substantiate what you said, let alone have anything to do with my comment about being CPU bottlenecked with current high-end solutions.

I was talking about the 1024x768 resolution, which isn't really GPU bound. Get your quotes straight before replying with insults. :roll:


You might be talking about a mainstream card, but I clearly was not. I referred to high-end and multi-GPU solutions, which does impact mainstream cards when they run in multi-GPU configs. In UE3 games there is clearly CPU bottlenecking even at 1920 with slower CPUs, and consistent scaling with faster CPUs up to 4GHz. I wouldn't bother linking you any benchmarks anyway, due to the high probability you would misinterpret them.

Why are you even talking about some X card? This thread is obviously about the 4850 bottlenecking @ 1920 resolution, not some GTX 50000 with a trillion fillrate and a billion more bandwidth than the 4850. You bring up something out of context and put it into the equation like it's supposed to mean something. :roll:
 

chizow

Diamond Member
Originally posted by: Azn
I was talking about the 1024x768 resolution, which isn't really GPU bound. Get your quotes straight before replying with insults. :roll:
LMAO, get your quotes straight? You linked a bench that you apparently think was done at 1024, except it's not. It just shows there is no benefit from a faster CPU when you're already GPU bottlenecked, which is obvious. Now tell me, how does that have anything to do with the incorrect and poor advice you give here?:

Considering it's based on the same Unreal 3 engine, it's not as CPU intensive as you think it is.

If you have a Core 2 Duo you are fine as long as you have more L2 cache. Quad cores help some.

Absolutely minimal difference from clock speed even @ 1024x768 resolution.

http://www.anandtech.com/video/showdoc.aspx?i=3127&p=6

Why are you even talking about some X card? This thread is obviously about the 4850 bottlenecking @ 1920 resolution, not some GTX 50000 with a trillion fillrate and a billion more bandwidth than the 4850. You bring up something out of context and put it into the equation like it's supposed to mean something. :laugh:
Because my point was that current high-end and multi-GPU configs can and will benefit from a faster CPU and will be bottlenecked by a slower CPU. To which you replied, "Considering it's based on the same Unreal 3 engine, it's not as CPU intensive as you think it is." This is where I'd normally link comparisons of a 3GHz vs. 4GHz CPU that clearly show scaling differences, but there's really no point with you, is there? :)

 

MarcVenice

Moderator Emeritus
Wait, you guys are saying that a high-end CPU like an E8400 (the only thing faster I know of is the E8500) isn't fast enough for current-gen video cards? People have to OC their E8400s to 3.6-4.0GHz to take full advantage of their HD4850s or HD4870s? Seriously, you've got to be kidding me?
 

SSChevy2001

Senior member
They are kidding you. Currently a 3.0GHz C2D is fine for an HD4850/4870, even a GTX 260 and GTX 280, but when you start using SLI and Crossfire you're getting very close to being CPU bound. A 3.2GHz C2Q seems to have no problems with high-end SLI and Crossfire setups.

Honestly though, who really cares if you're CPU bound at 100FPS? Worst case, in the future Nvidia and AMD will offload some extra work to the GPU (PhysX and Havok). Currently, with some overclocking, most CPUs should be fine.
 

taltamir

Lifer
76
Wrong, SSChevy, I am not kidding. I am CPU and GPU limited at under 60FPS with a 3.6GHz OCed E8400 and a 4850... at 1920x1200.
I said already, at 100FPS you are MONITOR bound, not CPU or GPU bound... I am clearly talking about under-60FPS scenarios here. If it is over 60FPS, it is monitor bound, period.


Originally posted by: MarcVenice
Wait, you guys are saying that a high-end CPU like an E8400 (the only thing faster I know of is the E8500) isn't fast enough for current-gen video cards? People have to OC their E8400s to 3.6-4.0GHz to take full advantage of their HD4850s or HD4870s? Seriously, you've got to be kidding me?

Dead serious. An OCed E8400 is no longer enough to get you to 60+ FPS IN SOME GAMES!
Not all games... but some.
 

Munky

Diamond Member
Originally posted by: taltamir
Dead serious. An OCed E8400 is no longer enough to get you to 60+ FPS IN SOME GAMES!
Not all games... but some.

How are you coming to this conclusion, seriously? Are you assuming that just because a game uses 100% of a single- or dual-core CPU, it must be CPU-limited? I already explained why this isn't true. There are only two reasonable ways to validate your point: test a faster CPU and show that it makes a noticeable difference, or test a faster video card and show that it performs no better than the 4850.
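To make those two tests concrete, here's a minimal Python sketch; the FPS numbers in the example call are invented for illustration, not from any real benchmark run.

# Classify the bottleneck from two controlled swaps: change only the CPU,
# then only the GPU, and compare the relative FPS gains (Munky's two tests).
def classify_bottleneck(fps_slow_cpu, fps_fast_cpu,
                        fps_slow_gpu, fps_fast_gpu,
                        threshold=0.05):
    cpu_gain = (fps_fast_cpu - fps_slow_cpu) / fps_slow_cpu
    gpu_gain = (fps_fast_gpu - fps_slow_gpu) / fps_slow_gpu
    if cpu_gain > threshold and gpu_gain <= threshold:
        return "CPU bound"
    if gpu_gain > threshold and cpu_gain <= threshold:
        return "GPU bound"
    if cpu_gain > threshold and gpu_gain > threshold:
        return "mixed / alternating"
    return "neither (frame cap or monitor bound?)"

# Hypothetical numbers: a faster CPU helps (+20%), a faster card doesn't (+2.5%).
print(classify_bottleneck(40, 48, 40, 41))  # -> CPU bound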
 

Triton67

Member
"CPU limit" is when your own (max-OC'd GPU) graphics card stops showing gains at the current res... it may be 12x8, 16x10, or 19x12, but for modern GPUs, a stronger CPU always gives higher FPS. Now, when the res is high and the GPU is working its ass off, the CPU doesn't need to be so strong anymore; you need to tweak graphics settings to relax the GPU some. Increasing CPU speed doesn't gain anything here (whereas if it was a modern GPU and a 1.8GHz C2D, a 3GHz OC would give 50% more FPS).

It's always a balance: FEED-(CPU)->INPUT-(GPU)
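That balance can be written down as a toy model: assume, simplistically, that each frame costs max(CPU time, GPU time) and that CPU time scales ideally with clock. The per-frame millisecond costs below are invented numbers, purely to show the shape of the curve, not measurements.

def fps(cpu_ms, gpu_ms, refresh_hz=60):
    frame_ms = max(cpu_ms, gpu_ms)              # the slower stage sets the pace
    return min(1000.0 / frame_ms, refresh_hz)   # the monitor caps what you see

base_cpu_ms = 20.0   # hypothetical per-frame CPU cost at 1.8GHz
gpu_ms_low  = 8.0    # low res: GPU barely working
gpu_ms_high = 25.0   # high res: GPU working its ass off

for clock in (1.8, 3.0):
    cpu_ms = base_cpu_ms * 1.8 / clock          # ideal clock scaling
    print(f"{clock}GHz: low-res {fps(cpu_ms, gpu_ms_low, 999):.0f}fps, "
          f"high-res {fps(cpu_ms, gpu_ms_high, 999):.0f}fps")
# 1.8GHz: low-res 50fps, high-res 40fps  <- CPU bound at both resolutions
# 3.0GHz: low-res 83fps, high-res 40fps  <- ~67% more FPS at low res, zero gain at high res
# And with the default refresh_hz=60, that 83fps would read 60: monitor bound, as taltamir says.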
 

deadseasquirrel

Golden Member
Originally posted by: Triton67
if it was a modern GPU and a 1.8GHz C2D, a 3GHz OC would give 50% more FPS

Are you saying that, with a modern GPU such as a 48xx-series or GTX 2xx-series card, you will see a 50% increase in FPS in games with a 3GHz CPU vs. a 1.8GHz CPU?

I can't imagine that to be the case except in some extreme example such as Supreme Commander at 800x600 resolution.
 

SSChevy2001

Senior member
Okay, after more testing, taltamir is right: it's CPU bound. I tested by using the Set Affinity option and turning only cores 1 and 2 on. With only 2 cores @ 3.45GHz I was CPU limited up to 1080p, though at 1080p it was borderline, about a 1FPS fluctuation. Running with 3 or 4 cores, no problems.

My tests were done with an overclocked 8800GTS @ 821MHz core.

Oh well, I knew the quad cores would be useful someday, just not this soon.
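For anyone who wants to repeat that Set Affinity test from a script instead of Task Manager, here's a minimal sketch using Python's psutil package; the process name "MassEffect.exe" is a placeholder guess, so substitute whatever the game's process is actually called.

import psutil

def pin_to_cores(process_name, n_cores):
    # Find the game's process by name and restrict it to the first n_cores CPUs.
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            proc.cpu_affinity(list(range(n_cores)))  # e.g. [0, 1] simulates a dual core
            print(f"{process_name} pinned to {n_cores} core(s)")
            return
    print(f"{process_name} not found")

pin_to_cores("MassEffect.exe", 2)  # note the FPS, then re-run with 4 to compare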
 

AzN

Banned
Originally posted by: chizow
Originally posted by: Azn
I was talking about the 1024x768 resolution, which isn't really GPU bound. Get your quotes straight before replying with insults. :roll:
LMAO, get your quotes straight? You linked a bench that you apparently think was done at 1024, except it's not. It just shows there is no benefit from a faster CPU when you're already GPU bottlenecked, which is obvious. Now tell me, how does that have anything to do with the incorrect and poor advice you give here?:

Considering it's based on the same Unreal 3 engine, it's not as CPU intensive as you think it is.

If you have a Core 2 Duo you are fine as long as you have more L2 cache. Quad cores help some.

Absolutely minimal difference from clock speed even @ 1024x768 resolution.

http://www.anandtech.com/video/showdoc.aspx?i=3127&p=6

What resolution is the Anandtech bench showing a CPU bottleneck at? You must be cross-eyed if you can't read the 1024x768 resolution that's on top of every graph in those benchmarks. :laugh:

I guess you feel stupid right about now for arguing that ROPs were king and that the G92 wasn't bandwidth starved, and for needing to put in your uneducated Nvidia fanboi 2 cents' worth. :brokenheart: :laugh:

Why are you even talking about some X card? This thread is obviously about the 4850 bottlenecking @ 1920 resolution, not some GTX 50000 with a trillion fillrate and a billion more bandwidth than the 4850. You bring up something out of context and put it into the equation like it's supposed to mean something. :laugh:
Because my point was that current high-end and multi-GPU configs can and will benefit from a faster CPU and will be bottlenecked by a slower CPU. To which you replied, "Considering it's based on the same Unreal 3 engine, it's not as CPU intensive as you think it is." This is where I'd normally link comparisons of a 3GHz vs. 4GHz CPU that clearly show scaling differences, but there's really no point with you, is there? :)

Which has nothing to do with the 4850, which hits a GPU limit at a certain resolution, unlike multi-GPU setups or a faster GPU that has a higher GPU limit than the 4850. :laugh:
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: SSChevy2001
Okay, after more testing, taltamir is right: it's CPU bound. I tested by using the Set Affinity option and turning only cores 1 and 2 on. With only 2 cores @ 3.45GHz I was CPU limited up to 1080p, though at 1080p it was borderline, about a 1FPS fluctuation. Running with 3 or 4 cores, no problems.

My tests were done with an overclocked 8800GTS @ 821MHz core.

Oh well, I knew the quad cores would be useful someday, just not this soon.

That's not what CPU bound is.
 

SSChevy2001

Senior member
Jul 9, 2008
774
0
0
@Azn

CPU bound implies that upgrading the CPU or optimizing code will improve overall performance. Running Mass Effect on only 2 cores caused an FPS loss even at 1680x1050, where 3 and 4 cores gave higher FPS. I would call that CPU bound for a C2D, but maybe I'm wrong.

Edit
What I do find strange is that review sites still show FPS gains with newer cards in this game, with only a 3.0GHz E8400. So it looks like a little of both, CPU limited and GPU limited at times.
 

myocardia

Diamond Member
Originally posted by: taltamir
In the meantime, please post any other game that you know of that would be CPU bound even at 1920x1200, high settings, and an OCed E8400...

Since this never got a response, I'll bite. The answer is M$'s FSX (1st place) and Supreme Commander (2nd place), and according to you and SSChevy, it's looking like Mass Effect takes a distant third place.
 

AzN

Banned
Originally posted by: SSChevy2001
@Azn

CPU bound implies that upgrading the CPU or optimizing code will improve overall performance. Running Mass Effect on only 2 cores caused an FPS loss even at 1680x1050, where 3 and 4 cores gave higher FPS. I would call that CPU bound for a C2D, but maybe I'm wrong.

Edit
What I do find strange is that review sites still show FPS gains with newer cards in this game, with only a 3.0GHz E8400. So it looks like a little of both, CPU limited and GPU limited at times.

CPU-bound is when you'd see no improvement from faster cards.

What you are talking about is improving performance with a faster CPU.
 

BFG10K

Lifer
Originally posted by: Azn

CPU-bound is when you'd see no improvement from faster cards.
Or when you see an improvement in performance from a faster CPU and/or more CPU cores without changing the video card.
 

chizow

Diamond Member
Originally posted by: Azn
Considering it's based on the same Unreal 3 engine, it's not as CPU intensive as you think it is.

If you have a Core 2 Duo you are fine as long as you have more L2 cache. Quad cores help some.

Absolutely minimal difference from clock speed even @ 1024x768 resolution.

http://www.anandtech.com/video/showdoc.aspx?i=3127&p=6

What resolution is the Anandtech bench showing a CPU bottleneck at? You must be cross-eyed if you can't read the 1024x768 resolution that's on top of every graph in those benchmarks. :laugh:
OK, let's walk through this step by step, since you're proving what I already know: that you can't read benchmarks.

The first set of benches, titled "Clock for Clock Comparison", shows a set of 3 graphs done at 3GHz @ 1024x768 with an AMD and an Intel CPU. There is no other CPU at a different clock speed in those 3 graphs that would substantiate your claim of "Absolutely minimal difference from clock speed even @ 1024x768 resolution." If they actually compared a faster/slower CPU in the first 3 graphs you might have a point, but as usual, you do not.

The second set of benches, titled "GPU Bound CPU Comparison", shows the same 3 maps with 3 CPUs: the same two from the previous test @ 3GHz, along with a slower Intel C2D at 2.33GHz. The caption in each graph says 1024x768, but the test clearly wasn't done at that resolution, as the text says, "We then cranked up the resolution to 1920 x 1200, and increased the world detail slider up to 5 to give us a more realistic situation for this clock speed comparison." Furthermore, it's quite obvious the test itself was done at 1920 and shows a GPU bottleneck, as the scores for the 3GHz CPUs drop by ~70-120FPS. Now, how exactly would you come to the conclusion that all tests were done at 1024x768 after seeing frame rates drop in half with the same hardware? LMAO.

Just admit you were wrong and move on; you only reinforce my belief that you can't read benchmarks when you go on like this. In this case it's blatantly obvious though. The saddest part of it is that this is a bench you linked to prove your point... LMAO.

I guess you feel stupid right about now for arguing that ROPs were king and that the G92 wasn't bandwidth starved, and for needing to put in your uneducated Nvidia fanboi 2 cents' worth. :brokenheart: :laugh:
Mmmhmm. I'm confident in my assessment and find it still holds true with the GTX 280. I'd love to rehash the topic with you, but I'm not really up for playing Tech Jargon Keno with you today. :)

Which has nothing to do with the 4850, which hits a GPU limit at a certain resolution, unlike multi-GPU setups or a faster GPU that has a higher GPU limit than the 4850. :laugh:
And? This thread isn't just about the 4850; it's about CPU bottlenecking in games, particularly Mass Effect, which is indeed CPU intensive. I already acknowledged the OP probably wasn't completely CPU bottlenecked with a single 4850, since he wasn't hitting the frame cap and couldn't use Very High textures, but that would certainly change once you start looking at a 2nd 4850 in CF or other multi-GPU solutions, and even a 4870/260 or the GTX 280 with single cards.
 

AzN

Banned
Originally posted by: chizow
Originally posted by: Azn
Considering it's based on the same Unreal 3 engine, it's not as CPU intensive as you think it is.

If you have a Core 2 Duo you are fine as long as you have more L2 cache. Quad cores help some.

Absolutely minimal difference from clock speed even @ 1024x768 resolution.

http://www.anandtech.com/video/showdoc.aspx?i=3127&p=6

What resolution is the Anandtech bench showing a CPU bottleneck at? You must be cross-eyed if you can't read the 1024x768 resolution that's on top of every graph in those benchmarks. :laugh:
OK, let's walk through this step by step, since you're proving what I already know: that you can't read benchmarks.

The first set of benches, titled "Clock for Clock Comparison", shows a set of 3 graphs done at 3GHz @ 1024x768 with an AMD and an Intel CPU. There is no other CPU at a different clock speed in those 3 graphs that would substantiate your claim of "Absolutely minimal difference from clock speed even @ 1024x768 resolution." If they actually compared a faster/slower CPU in the first 3 graphs you might have a point, but as usual, you do not.

The second set of benches, titled "GPU Bound CPU Comparison", shows the same 3 maps with 3 CPUs: the same two from the previous test @ 3GHz, along with a slower Intel C2D at 2.33GHz. The caption in each graph says 1024x768, but the test clearly wasn't done at that resolution, as the text says, "We then cranked up the resolution to 1920 x 1200, and increased the world detail slider up to 5 to give us a more realistic situation for this clock speed comparison." Furthermore, it's quite obvious the test itself was done at 1920 and shows a GPU bottleneck, as the scores for the 3GHz CPUs drop by ~70-120FPS. Now, how exactly would you come to the conclusion that all tests were done at 1024x768 after seeing frame rates drop in half with the same hardware? LMAO.

Just admit you were wrong and move on; you only reinforce my belief that you can't read benchmarks when you go on like this. In this case it's blatantly obvious though. The saddest part of it is that this is a bench you linked to prove your point... LMAO.

So you want to blame Anandtech's article on me now? Excuse me for not reading the article, but the graph clearly says 1024x768 resolution. :roll:

UT3 is clearly less stressful on the video card than Mass Effect. If an 8800GTX can get 90FPS in UT3 and a 9800GTX only gets 50FPS, obviously the CPU could be a factor in UT3, but less so in Mass Effect. The graph shows the CPU isn't bottlenecking at 1920x1200: the Core 2 Duo @ 3.0GHz is getting a whole 1-3 FPS more than at 2.33GHz. :roll:

I guess you feel stupid right about now for arguing that ROPs were king and that the G92 wasn't bandwidth starved, and for needing to put in your uneducated Nvidia fanboi 2 cents' worth. :brokenheart: :laugh:
Mmmhmm. I'm confident in my assessment and find it still holds true with the GTX 280. I'd love to rehash the topic with you, but I'm not really up for playing Tech Jargon Keno with you today. :)

Because it's already been proven that your ROPs aren't the limiting factor. :laugh:

Look at the 4870 with 16 ROPs breathing down the neck of the GTX 260 with 28 ROPs. :brokenheart: :(

Just admit you were wrong and move on; no need to bring out an old thread just to prove you were wrong. :D


Which has nothing to do with the 4850, which hits a GPU limit at a certain resolution, unlike multi-GPU setups or a faster GPU that has a higher GPU limit than the 4850. :laugh:
And? This thread isn't just about the 4850; it's about CPU bottlenecking in games, particularly Mass Effect, which is indeed CPU intensive. I already acknowledged the OP probably wasn't completely CPU bottlenecked with a single 4850, since he wasn't hitting the frame cap and couldn't use Very High textures, but that would certainly change once you start looking at a 2nd 4850 in CF or other multi-GPU solutions, and even a 4870/260 or the GTX 280 with single cards.

First off, this thread is obviously about the 4850. The limitations of the 4850 and a GTX 280, or of multi-GPU configurations, are different. Are you going to bring in some card twice as fast to prove that it's CPU limited @ 1920x1200 resolution?

Second, you have brought no proof that the 4850 is bottlenecked by the CPU in Mass Effect @ 1920x1200. :disgust:
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: BFG10K
Originally posted by: Azn

CPU-bound is when you'd see no improvement from faster cards.
Or when you see an improvement in performance from a faster CPU and/or more CPU cores without changing the video card.

Yes, this is what I meant when I said bottlenecking. The "absolutely no improvement" case would be "eXtreme(TM) bottlenecking".


I really wish I could do some tests to put some more proof on the table... but the problem is that RivaTuner does not work with the 4850 as of yet.

So watching an FPS meter in game in windowed mode that updates every frame, while also watching the Windows Vista CPU utilization meter and the CCC GPU utilization meter, and just judging how smooth the gameplay feels... well, it is very inaccurate. I want hard numbers, which I will get in order to either prove or disprove my initial theory (notice I don't care about "winning" against other people in "arguments"... I am here to learn and teach; right now we are in the "sharing a theory" stage because the tools to verify it are lacking).

I was going to mod RivaTuner's cfg file to make it work (like I did with the 8800GTS 512 before RivaTuner was updated for it)... but from reading about it, that would not work here; there is work that needs to be done by the RivaTuner team first.

EDIT: ...although, if I am getting less than 60FPS, 100% CPU, and under 80% GPU utilization on a 3.6GHz C2D, and others are getting 70% utilization on a 3.6GHz C2Q (40% more power)... well, I don't know about you, but this shows me that there is huge room for improvement in CPU power. Also, why would it stop at 70% if it is doing those "call aheads" you described... except for maybe scaling inefficiency from 2 to 4 cores... it needs further testing...


EDIT:
Just tested!

I kept my settings the same... the ones that alternated between 80-100% CPU and 70-100% GPU...
And I just lowered the resolution to 720x480 (windowed mode like before)...

1. The game is NOT smooth; it has the same jitter as before! (Which, btw, is identical to the jitter with High instead of Very High textures; I think it was just a placebo for me to play at High.)
2. CPU is steady at 90-100% use.
3. GPU is at 30% use or less.

FPS jumps between 20 and 120; usually I see a "6#"... again, it is moving far too fast for me to be too precise here, but it definitely dips into the 20s, just like before (before, it jumped between 20 and 60).

If this is not a clear-cut case of being CPU bound, I don't know what is... the CPU is obviously the cause of my "jitters", at 1920x1200 and at 720x480.

I am now going to check the wattage impact of SpeedStep, since it supposedly lowers performance by 3% or so... and I am going to try to push my OC to 4GHz (which I thought was pointless before)... or trade up to a quad core and OC it past 3.5GHz...
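Since RivaTuner can't log the 4850 yet, here's a rough stand-in for those hard numbers: a minimal Python sketch (assuming the psutil package is installed) that samples overall CPU utilization to a CSV you can line up against an in-game FPS log afterwards. GPU utilization would still have to come from CCC or another vendor tool.

import csv
import time
import psutil

with open("cpu_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "cpu_percent"])
    start = time.time()
    for _ in range(120):                        # ~60 seconds of samples
        pct = psutil.cpu_percent(interval=0.5)  # blocks for the 0.5s interval
        writer.writerow([round(time.time() - start, 1), pct])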
 

SSChevy2001

Senior member
Jul 9, 2008
774
0
0
A quad core is your best bet. I was getting CPU bound even with 3 cores @ 3.0GHz. So unless you're overclocking that C2D over 4.5GHz, it's still going to be holding back your video card.