Originally posted by: Sahakiel
Originally posted by: iwantanewcomputer
mamisano, does the distance from the processor to the memory make a difference? I know the traces have to be longer, but why would it make it impossible when you are using an integrated memory controller instead of a separate one?
Anyway, if they could integrate the memory controller and get a performance boost like the K8, they would scrap BTX in a second in favor of it.
P.S. Keep in mind that Intel is screwed for at least the next year in the high end, but this isn't where most sales are, and AMD can only supply at most 30% of the market until Fab 36 comes online at 65nm in 2006.
Uh... Do you even know WHY integrating a memory controller helps performance? It cuts down on latency, which has been the BIG problem with memory for the past fifty years (Yes, that's fifty as in 50). If you integrate the memory controller and then shove the memory far away, the longer path hands that latency right back, and about the only thing you've accomplished is cutting out the ability to upgrade. We don't live in a world of instantaneous signaling.
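To put some rough numbers on the latency argument, here's a back-of-envelope sketch. Everything below is my own illustration, not from the post: the 3GHz clock, 45ns DRAM access, 25ns chipset hop, and 7ps/mm FR4 propagation figure are only era-typical guesses.

    // Rough illustration: what a memory access costs in CPU cycles,
    // what cutting out the chipset hop saves, and what longer traces add back.
    #include <cstdio>

    int main() {
        const double cpu_ghz         = 3.0;   // assumed CPU clock
        const double dram_ns         = 45.0;  // assumed DRAM array + DIMM access time
        const double chipset_hop_ns  = 25.0;  // assumed FSB + northbridge round trip
        const double trace_ns_per_cm = 0.07;  // ~7 ps/mm signal propagation in FR4

        const double off_die = dram_ns + chipset_hop_ns;       // controller in the northbridge
        const double on_die  = dram_ns;                        // controller on the CPU die
        const double extra   = 2.0 * 10.0 * trace_ns_per_cm;   // 10 cm farther, round trip

        std::printf("off-die controller: %.0f ns = %.0f CPU cycles\n", off_die, off_die * cpu_ghz);
        std::printf("on-die controller : %.0f ns = %.0f CPU cycles\n", on_die,  on_die  * cpu_ghz);
        std::printf("10 cm more trace  : +%.1f ns (%.0f cycles) of flight time\n",
                    extra, extra * cpu_ghz);
        return 0;
    }

Even with generous assumptions, one trip to memory costs well over a hundred cycles at 3GHz, which is why shaving tens of nanoseconds off the path is worth integrating the controller in the first place.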
The primary problem with integration is lack of flexibility. You won't be able to produce one core that can keep up with evolving technologies and market demands. We've been hearing about highly integrated platforms and internet access/word processing in every household for years, yet the market seems unwilling to ditch low-cost flexibility. Consumers seem much more inclined to spend $30 here and $20 there every 1-2 years than $200 in one go every 4-5 years.
Also, anyone bashing Intel for not meeting frequency targets should take a look at the AMD camp and start listing the numerous revised roadmaps that scaled back frequency targets. The G5 is having similar problems; it doesn't look like Apple will hit the 3GHz promised some time back. 90nm caught the entire semiconductor industry with its collective pants down. By the time it became apparent that the problem lay with the process technology and not with any one company's engineering, everyone had already sunk years of time and money into it. A complete about-face would've meant literally billions of dollars down the drain with nothing to show for it.
Now everyone is talking about multi-core and added functionality. Moore's Law still marches on, but it looks like it's time to find a use for all those transistors other than ever-larger caches. That may actually be a good thing, seeing as how latencies are killing performance at high clock speeds, and shrinking a cache to keep its access latency down just trades slow hits for more misses.
If you think multi-core will kill gaming, then it's time to open your eyes. GPUs (or VPUs) have been offloading work from the CPU for years. That's technically multi-processing, and I don't see anyone complaining.
Beyond graphics, which is embarrassingly parallel, it's at most a two-second exercise to find something else in a game that can easily run across multiple processors: physics. If that doesn't float your boat, try AI or animation. For games, I'm waiting on physics and animation. When a human running around in the rain looks like a human running around in the rain, then you can say we don't need more processors. Heck, we're not even correctly animating cars driving around, and cars are an easier problem. There are plenty more areas in games that can benefit from parallel execution aside from the three I mentioned.
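To make the physics/AI split concrete, here's a minimal sketch of the shape of it. This is my own illustration, not from the post: the World layout, the step functions, and the thread count are all made up.

    // One frame's physics and AI run on separate cores, then join before rendering.
    #include <cstdio>
    #include <functional>
    #include <thread>
    #include <vector>

    struct World {
        std::vector<float> positions;   // toy stand-ins for real game state
        std::vector<int>   ai_states;
    };

    void step_physics(World& w, float dt) {
        for (float& p : w.positions) p += 1.0f * dt;   // pretend integration step
    }

    void step_ai(World& w) {
        for (int& s : w.ai_states) s = (s + 1) % 4;    // pretend decision making
    }

    int main() {
        World w{std::vector<float>(10000, 0.0f), std::vector<int>(1000, 0)};
        const float dt = 1.0f / 60.0f;

        for (int frame = 0; frame < 3; ++frame) {
            // Physics and AI touch disjoint data here, so one frame's work can be
            // split across two cores and joined before rendering.
            std::thread physics(step_physics, std::ref(w), dt);
            std::thread ai(step_ai, std::ref(w));
            physics.join();
            ai.join();
            // ... render from the updated world, hand the frame off to the GPU ...
        }
        std::printf("pos[0]=%.3f  ai[0]=%d\n", w.positions[0], w.ai_states[0]);
        return 0;
    }

A real engine has far more shared state between those systems, so the hard part is the synchronization, not spawning the threads, but that's exactly the kind of work extra cores are good for.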