I posted this on some other forum:
My point is that the FX-57 isn't really that great. It's the fastest single-core CPU, but dual core is the better buy. You may only get 100 fps on a dual core compared to 105 fps on an FX-57, but with the potential of another core sitting there.... imagine if it gets used.
Come to think of it, dual core is already being used. Since you're going for an nVidia card: the current 81.* series drivers offload some of the driver overhead onto the second core. The result is much better FPS in CPU-limited situations. I have a link around here somewhere.
http://www.hardwareoc.at/Geforce-7-Treibervergleich-7.htm
There. See the big spikes in the graphs with some of the 81.* series? That's dual core for ya. Some of them aren't getting dual-core action since these drivers are all still in beta, but you can see the potential right there. On an X2 4800+, single core lands in the 70s while dual core lands in the 110s.
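To make the offloading idea concrete: as I understand it, the pattern is a worker thread that drains a queue of work handed off by the main thread, and the OS is then free to schedule that worker on the second core. This is a rough Python sketch of that producer-consumer pattern, purely illustrative and not the actual driver code (the "driver overhead" here is a stand-in computation):

```python
import queue
import threading

work = queue.Queue()
results = []

def worker():
    # Drain work items until a None sentinel arrives. The OS can
    # schedule this thread on the second core while the main
    # thread keeps producing.
    while True:
        item = work.get()
        if item is None:
            break
        results.append(item * 2)  # stand-in for "driver overhead"

t = threading.Thread(target=worker)
t.start()

for frame in range(5):
    work.put(frame)  # main thread hands off work and moves on

work.put(None)  # sentinel: tell the worker to stop
t.join()
print(sorted(results))  # [0, 2, 4, 6, 8]
```

The main thread never waits on any individual work item, which is the whole point of the offload: in a CPU-limited game, the expensive part moves off the critical path.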
Now, I may not be 100% correct here, so feel free to correct me.
This is the response I've received. I just want to put this up for open debate.
Currently no game, not even Doom 3, supports dual core. The Unreal Engine or the Doom 3 engine will use one of the two cores and not tap into the second. The operating system internals have to run on the same core, or latency between cores causes performance problems.
I experienced this first hand using dual-core technology. Great for media, lousy for gamers, unless you want to go in style and run FRAPS and a game together.
Graphics cards don't have any real programming to go from one core to another. You can trick a computer into loading drivers onto other cores, but since nothing on that core is working against the program, what that leaves you with is corruption, nothing direct to apply it to, and a lot of crossover latency.
There is a trick to those graphs that is really dishonest to consumers. If you would like, I can explain that to you, though now is not the time, Artemis.
As it stands, Unreal Engine 3.0 has no support for dual-core technology, because you can't take an entire program and split it between two cores unless you have a dual program; in other words, a program in which one .EXE actually triggers a second, third, or fourth EXE to look for dual-core technology and apply sections of those programs directly to different cores, with algorithms written that would allow the programs to cross over between cores and share memory to support one another.
Currently no such programs exist.
In a computer science course, the professor gave me a simple assignment. He said:
"I want you to calculate powers of 16 in the hexadecimal system. Take the class with the main and apply it to the first core, then write your functions as separate class files. These files are to be accessed by the second core of the system."
The result:
The program fails because it searches the core of the processor and then main memory. Core 1 loads the file with the main; core 2 loads the files that are not part of the main. However, for the program to work, the cores need direct access to each other. This is not the case: one core is computing the class with the main, the second core is computing the rest, and neither can account for the other's timing. Cores work independently and do not trade what they are doing between each other.
Even marking the scripts for public access causes them to fail, because the entire program must be local to the same core, and the processor has no way of keeping tabs on both cores at once. It's like having a dual processor inside one processor: two cores running independent programs with nothing binding them.
We are currently learning how to program across several processors, but even simple programs have problems running when split into different cores. The future of programming is...
I have a game: when you launch one EXE, four separate programs run, two on one core and two on the other, and a fifth program runs alongside each pair, binding the central program launched on both.
I have a lot to say about this.... There are advantages and disadvantages. I've had the dual-core experience.... and it's really different. It's not the utopia some people believe it to be.
So my question is:
How does this dual core optimization work, exactly? Is it really just a graph trick?
I want the real technicalities here. Can drivers really use two cores?