Dual core CPU utilization test - X2 4400+

Page 3 - AnandTech Forums

orangat

Golden Member
Jun 7, 2004
1,579
0
0
Markbnj, why don't you benchmark to compare fps before and after the Nvidia upgrade? Is there a change in fps?
 

Markbnj

Elite Member, Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
Because others have benchmarked fps using these drivers already. My purpose in installing them was to check the effect on cpu core utilization. I never checked the fps of bf2 before I installed them, but I have checked it with them in. At 1600 x 1200, max draw distance, and all settings high I am getting 40-45 fps, which seems pretty good for my card. If I uninstall the drivers I will see if the fps decreases and by how much.
 

ocforums

Junior Member
Sep 25, 2005
20
0
0
Originally posted by: Markbnj
Well, I didn't say that FPS didn't matter as much. What I said was that it isn't the only thing that matters. Frames per second measures how fast the system can construct and render a frame of video. In a single threaded loop architecture (most current games) anything that slows down that loop will slow down the frames per second. But it's not quite that simple. Windows uses multiple threads in the 3d graphics and sound subsystems, and in the I/O layer, as examples. So you can be chugging along rendering 75 fps and have network updates lag, causing sprites to jump around on screen. You've still got high performance graphics, but you don't have an overall high performance game at that point. There are lots of other potential examples. These conditions will become more common as more games are designed for concurrent processing.

On the cpu question, think about it like this: at 50% utilization a given core is executing instructions just about as fast as it possibly can, because there are no bottlenecks; its utilization is way below max. Now imagine a unit of work. You can either hand the whole unit to a single processor, or divide it in half and hand each half to a separate processor. If you do the latter then that unit of work will complete in roughly half the time. I think where you are getting confused is in thinking that 50% core utilization equates to 50% speed. It doesn't. The cpu is still chewing through instructions as fast as it can. It's just that it could chew through twice as many in the same amount of time, if the rest of the system could keep up.
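The split-in-half idea in the quoted paragraph can be sketched in a few lines of Python (an editorial illustration, nothing from BF2 or the drivers): one unit of work, summing a range, is divided between two workers and the recombined result matches the single-worker answer.

```python
# Illustrative sketch only: divide one unit of work (summing 0..n-1)
# in half, hand each half to a separate worker, then recombine.
# (CPython threads share one core because of the GIL; a real speedup
# would need processes, but the work-splitting logic is identical.)
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def split_sum(n, workers=2):
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs any remainder
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))
```

Each worker completes its half independently, which is why the whole unit finishes in roughly half the time when the halves really do run on separate cores.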


I am a bit lost with this debate. Is it being said the dual core can process a thread twice as fast as a single core? That's just silly and proven totally incorrect. As a matter of fact, only because AMD has set the two cores side by side and removed bus sharing have you finally got a dual-core CPU even close to a single core in game performance.

SLI 7800 GTX 535/1335
FX-57 @ 3051
14270 3DMark03

SLI 7800 GTX 535/1335
X2 @ 3051
14011 3DMark03
(Extreamoverclockers.com reference for above systems)

In a 3DMark03 bench test of the above systems, with hardware pretty much equal, the FX will score about 200 points more than the X2. This shows the systems' gaming ability to be almost equal. It has also been shown that if you encode a video and play Battlefield 2 at the same time, the loss in FPS is as close to zero as you can get when using the X2, and the FX falls on its face totally.

And with any single-threaded app Windows cannot split the work unit up. What happens in a dual-core environment is that the threads are load-balanced as equally as possible, meaning threads are sent to whichever processor is free to execute them. Some threads take a tad longer to execute than others, and that's why you can never get an exact 50% load balance. You also have other apps that require processor time, which will be scheduled along with the game's threads, making the load almost never a perfect 50% to each core.
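That "whichever processor is free" behavior can be shown with a toy simulation (an editorial illustration; the real Windows scheduler is far more sophisticated): threads of uneven length go to whichever core frees up first, so each core's share of the work ends up close to, but never exactly, 50%.

```python
# Toy simulation of greedy load balancing across cores: each "thread"
# (a duration) is assigned to the core that frees up first.
import heapq

def balance_load(durations, cores=2):
    busy = [0.0] * cores                  # total busy time per core
    free_at = [(0.0, c) for c in range(cores)]
    heapq.heapify(free_at)
    for d in durations:
        t, c = heapq.heappop(free_at)     # core that frees up first
        busy[c] += d
        heapq.heappush(free_at, (t + d, c))
    return busy
```

With `balance_load([3, 1, 4, 1, 5, 9, 2, 6])` the 31 units of work split 17/14 between the two cores: nearly even, never perfectly so, just as the post describes.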

No matter what happens in the load balance, all you can ever have per thread is the speed of one single core. I.e., a dual core at 2.5GHz means you can never have more than that 2.5GHz as a max speed per thread, and when you add memory and cache sharing, that's where you get a bit of overhead, causing the dual core to be slightly slower than a single core in a game.

Additionally, it is in the thread scheduling that you get the "smoothness" everyone is mentioning. And that is easy to explain: because you have two cores to process threads, you remove the wait time when you are multitasking.

Can a dual core ever be twice as fast as a single core? Not in a single-threaded app. And in a multithreaded app, expect up to 70% as a max speed increase, but never expect a doubling of speed from a dual core, and you'll very seldom see even that 70% max. Usually 20% or so is what you get in real life with a multithreaded app.

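The 70%-max / 20%-typical ceiling the poster describes lines up with Amdahl's law (an editorial framing, not something from the thread): with a fraction p of the work parallelizable across n cores, the speedup is 1 / ((1 - p) + p / n), so a true doubling would need p = 1.0, which real apps never reach.

```python
def speedup(p, n=2):
    """Amdahl's law: the serial fraction (1 - p) never gets faster."""
    return 1.0 / ((1.0 - p) + p / n)

# On two cores: a heavily threaded app (p = 0.8) gains about 67%,
# a lightly threaded one (p = 0.3) only about 18%, and only a
# perfectly parallel app (p = 1.0) would actually double.
```

The p = 0.3 case lands at roughly an 18% gain, close to the "usually 20% or so" figure quoted above.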
 

imported_michaelpatrick33

Platinum Member
Jun 19, 2004
2,364
0
0
Originally posted by: orangat
ElTorrente, I never said X2s don't have any worthwhile advantages. You're making a strawman argument again.

Apart from games, X2s are great CPUs, and at speeds above the 4200+ they start to become faster than single cores.


?
What exactly does that mean?
You were shown benchmarks showing that the X2s are equal or better gaming CPUs than their single-core counterparts. Every single benchmark I have ever seen shows a dual core at the same clock speed matching a single core. Yes, the PR ratings are higher on a dual core because of its ability to do much more than a single core overall, but that doesn't change its single-core ability. (And 1 or 2 fps don't count as lower performance for single or dual core in benchmarks.)

Explain how two 2.0GHz cores with 512KB cache each (X2 3800+) are going to perform slower than an equivalent single-core 3200+?
 

Markbnj

Elite Member, Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
[Is it being said the dual core can process a thread twice as fast as a single core ?]

Of course not. But if you have two threads each doing a large chunk of the work, then the dual core will be superior. This part of the debate started because perfmon shows BF2 heavily utilizing both cores with the beta nVidia drivers.

[Can a dual core ever be twice as fast as a single core? Not in a single-threaded app. And in a multithreaded app, expect up to 70% as a max speed increase, but never expect a doubling of speed from a dual core, and you'll very seldom see even that 70% max. Usually 20% or so is what you get in real life with a multithreaded app.]

I don't disagree with this, but then, show me a single-threaded app running under Windows that anybody cares about? On my system at home, after boot, 2 out of 39 processes are running one thread. The other 37 run from 2 to 60 threads.
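A minimal way to see the same effect inside a single process (an editorial illustration, not how the thread counts above were measured) is to start a few background workers and count the threads with Python's standard library:

```python
# Illustration: even a "simple" program becomes multithreaded as soon
# as it uses background workers (audio mixing, networking, I/O, ...).
import threading
import time

def worker(stop):
    # Stand-in for a background task that idles until told to quit.
    while not stop.is_set():
        time.sleep(0.01)

stop = threading.Event()
workers = [threading.Thread(target=worker, args=(stop,)) for _ in range(3)]
for t in workers:
    t.start()

# Main thread plus three workers: already a multithreaded process.
count = threading.active_count()

stop.set()
for t in workers:
    t.join()
```

On a dual core, any of those threads can be scheduled onto the second core whenever the first is busy, which is exactly why single-threaded apps are the rare case.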

As I said in the beginning of this thread, I invite others to run their own tests and report the results. Running BF2 with the nVidia release drivers showed both cores being utilized, but one much more than the other, and with some interesting symmetry. Running BF2 with the nVidia beta drivers showed both cores heavily utilized, and very little symmetry. So something is taking advantage of the second core in the BF2 process.
 

imported_michaelpatrick33

Platinum Member
Jun 19, 2004
2,364
0
0
Originally posted by: ocforums
.............

Can a dual core ever be twice as fast as a single core? Not in a single-threaded app. And in a multithreaded app, expect up to 70% as a max speed increase, but never expect a doubling of speed from a dual core, and you'll very seldom see even that 70% max. Usually 20% or so is what you get in real life with a multithreaded app.

Actually, I have seen many instances in Photoshop of 90%-plus increases in performance, so 70% is most definitely not the max. Doubling your performance on dual core doesn't happen (except once in a great while) in multithreaded applications, but there are instances of more than 70% increases. I know Duvie has looked into this more than I have.
 

orangat

Golden Member
Jun 7, 2004
1,579
0
0
Originally posted by: Markbnj
.............
As I said in the beginning of this thread, I invite others to run their own tests and report the results. Running BF2 with the nVidia release drivers showed both cores being utilized, but one much more than the other, and with some interesting symmetry. Running BF2 with the nVidia beta drivers showed both cores heavily utilized, and very little symmetry. So something is taking advantage of the second core in the BF2 process.

That something might be Nvidia offloading more of the triangle setup to the CPU.
 

Markbnj

Elite Member, Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
Yep, could be. But that should free up GPU cycles to do more textures, shaders, and lighting. Hard to say exactly, but something is benefiting.