Cogman
Lifer
- Sep 19, 2000
I think it's rather simple how Bloomfield vs Thuban will end up.
Look at Xeon 5500 vs Opteron 24xx: http://anandtech.com/show/2774/5
There's no situation where the 6-core Opteron can beat the 4-core Xeon 5500. Since the apps tested there are heavily multi-threaded, that should be representative of how Thuban vs. Bloomfield will play out in heavily multi-threaded apps: the i7 will hold a slight advantage.
Everything else besides games: Core i7 way faster.
Games: The i7 will probably retain a lead similar to what it had over the 4-core Phenom IIs. Games do run faster on 4 cores than on 2, but it's not a big change because of the way games distribute threads to the CPU. They do it in what I'd call a "dumb" way, because 1 core is still heavily utilized while the others aren't (see the breakdown and the sketch after it):
1 core: AI+Drivers+Physics+Misc
2 core(big improvement):
Core 0 - Drivers and Physics
Core 1 - AI + Misc
4 core(small improvement):
Core 0 - Drivers
Core 1 - Physics
Core 2 - AI
Core 3 - Misc
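In code, that kind of split usually ends up looking something like this rough sketch (the subsystem functions are just placeholders, not any real engine's API):

```cpp
// Rough sketch of the "one thread per subsystem" split described above.
// All function names here are made up for illustration.
#include <atomic>
#include <chrono>
#include <thread>
#include <vector>

void update_physics()   { /* step the physics sim */ }
void update_ai()        { /* run AI decisions */ }
void update_misc()      { /* audio, input, etc. */ }
void submit_rendering() { /* push work to the driver/GPU */ }

int main() {
    std::atomic<bool> running{true};

    // Each subsystem is pinned to its own dedicated thread,
    // regardless of how much work it actually has per frame.
    std::vector<std::thread> workers;
    workers.emplace_back([&] { while (running) update_physics(); });
    workers.emplace_back([&] { while (running) update_ai(); });
    workers.emplace_back([&] { while (running) update_misc(); });
    workers.emplace_back([&] { while (running) submit_rendering(); });

    // Pretend the game runs for a bit, then shut down.
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    running = false;
    for (auto& t : workers) t.join();
}
```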
What can they do with 6 cores? There's not much they can do above 4, easily.
About the "dumb" way games do multi-threading: what you underestimate is how much processing time each of those items actually takes. The dumb way to do things is the way you just advocated. AI doesn't take a significant portion of processing time (unless you throw pathfinding into the mix, in which case it might). I don't know what you mean by drivers.
A large portion of game time is spent in the game loop itself, and a significant portion of that is dedicated to rendering, something that is pretty hard to break up into different threads (draw commands need to be sent to the GPU in order so that it knows how to handle things like transparency, or what to exclude from rendering).
Good threading isn't saying "I'll put this portion of the game on this CPU, and this portion of the game on that CPU". It is saying "OK, where are we spending the most time in this application, and can it somehow be processed concurrently instead of serially". It doesn't even take into account the number of CPUs available, just whether the task can be broken up into concurrently running chunks. A good parallel piece of software will use a threadpool to get the job done rather than worry about how it is going to divide up each CPU.
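As a rough sketch of what I mean (not any particular engine's code; std::async just stands in for a real thread pool here): the work gets cut into chunks and handed to however many workers the hardware has, rather than being assigned to named cores.

```cpp
// Minimal sketch of chunking one task across a pool of workers.
#include <cstddef>
#include <cstdio>
#include <future>
#include <thread>
#include <vector>

// Some divisible unit of work: summing a slice of a big array.
long long sum_range(const std::vector<int>& data, std::size_t begin, std::size_t end) {
    long long total = 0;
    for (std::size_t i = begin; i < end; ++i) total += data[i];
    return total;
}

int main() {
    std::vector<int> data(1000000, 1);

    // Size the worker count to the hardware, not to a hand-picked core layout.
    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 1;
    std::size_t chunk = data.size() / workers;

    std::vector<std::future<long long>> results;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = (w + 1 == workers) ? data.size() : begin + chunk;
        // Hand the chunk off; which core actually runs it is not our problem.
        results.push_back(std::async(std::launch::async, sum_range,
                                     std::cref(data), begin, end));
    }

    long long total = 0;
    for (auto& r : results) total += r.get();
    std::printf("total = %lld\n", total);  // prints 1000000
}
```

On a dual core this runs two chunks at a time, on a six core it runs six; the code doesn't care how many cores there are.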
Back on topic. It has been a while since the multicore paradigm shift. I pretty much always try to think "how can this task be split up into multiple threads?" whenever I'm working on something. It will be interesting to see if more applications start to really take advantage of the ever-increasing core count.
