Did you just call Ruby a he?
Yes. I think enough people pointed out that Ruby isn't a he though.
Wouldn't PCIe 3.0 double that max bandwidth to 36GB/s? Then what if this processor could handle quad-channel RAM at 8.5GB/s x 4 = 34GB/s? Then QPI's the new bottleneck, right?
I haven't seen a benchmark that showed GPUs taking advantage of PCI Express 2.0, and that would be even more true with 3.0. Video cards have so much onboard memory, which is also very fast, that they don't care much about PCI Express link speeds. Remember, we are also theorizing that there would be greater than 25GB/s of communication happening between the CPU and GPU. I don't think that happens.
Besides, the PCI Express controller will be right on the X58 chipset, and won't have to go through QPI. If the PCI Express controller was on the CPU and had to go through QPI to get to the video card I could understand, but it's not.
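For reference, here's a minimal Python sketch of the peak numbers being thrown around. The PCIe figures follow directly from lane rate and encoding overhead; the DDR3-1066 channels and the 6.4 GT/s QPI link are assumptions matching a typical Nehalem/X58 setup, and everything here is a theoretical maximum, not measured throughput:

```python
# Rough peak-bandwidth math for the links in this thread.
# Assumed configurations: PCIe x16, quad-channel DDR3-1066, QPI at 6.4 GT/s.

def pcie_gb_per_s(lanes, gt_per_s, encoding):
    # Each lane carries gt_per_s billion 1-bit transfers per second;
    # line encoding (8b/10b for gen 1/2, 128b/130b for gen 3) is overhead.
    return lanes * gt_per_s * encoding / 8  # bits -> bytes

pcie2_x16 = pcie_gb_per_s(16, 5.0, 8 / 10)      # ~8.0 GB/s per direction
pcie3_x16 = pcie_gb_per_s(16, 8.0, 128 / 130)   # ~15.8 GB/s per direction

ddr3_quad = 4 * 1.066 * 8    # 4 channels * 1066 MT/s * 8 bytes = ~34.1 GB/s
qpi_link  = 6.4 * 2          # 6.4 GT/s * 2 bytes = 12.8 GB/s per direction

print(pcie2_x16, pcie3_x16, ddr3_quad, qpi_link)
```

Note the gen-3 doubling comes from the faster 8 GT/s signaling plus the leaner 128b/130b encoding together, not from the transfer rate alone.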
I do have a question for you guys though. When you disable cores on an i7, does it still keep the whole 8MB of L3$? And if so, does it still set aside the space the disabled cores' L2$ contents would occupy, or does the single core get the full 8MB to itself?
If it was designed properly, it would act just like a single-core processor with L3 cache.
2x clock with 1x cores vs. 1x clock with 2x cores comparison:
2x clock with 1x cores scaling limiters:
-Memory latency. 100 cycles at 3.2GHz equals 200 cycles at 6.4GHz (see the sketch after this list)
-Memory bandwidth
-Large caches that clock at or near core speed, to counteract the higher effective memory latency. It's likely you'd still need an L3 cache
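To make the latency point concrete, here's a minimal sketch, assuming a round 100-cycle DRAM access at 3.2GHz (an illustrative number, not a measured one):

```python
# The DRAM access time in nanoseconds stays fixed, so doubling the
# clock doubles the cost of that access when counted in core cycles.

def cycles(latency_ns, clock_ghz):
    return latency_ns * clock_ghz

latency_ns = 100 / 3.2           # 100 cycles at 3.2 GHz -> 31.25 ns
print(cycles(latency_ns, 3.2))   # 100.0 cycles
print(cycles(latency_ns, 6.4))   # 200.0 cycles at the doubled clock
```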
1x clock with 2x cores scaling limiters:
-Memory bandwidth. Latency won't be as much of a problem as bandwidth, since relative latency (in core cycles) will be lower
-Still need large caches, to counteract having 2x more cores sharing similar bandwidth
-Programming limitations. You'd need a well-threaded program to perform well; that's a non-issue on a 2x-clocked CPU with 1x cores (see the Amdahl's-law sketch below)
-Cache coherency. The dedicated caches per core and the last-level cache need to be kept synchronized. Again, no problem on a 2x-clocked CPU with 1x cores
Intercore communication, cache coherency, and programming limitations will hamper a 1x-clocked, 2x-core CPU, while it still retains the memory-bandwidth and cache-size problems of a 2x-clocked, 1x-core CPU.
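The programming-limitation point is just Amdahl's law. Here's a hedged sketch: the parallel fractions are made-up illustrative values, and the flat 2x for the clocked chip is its best case (the memory-latency limiter above would eat into it in practice):

```python
# Doubling the clock speeds up serial and parallel work alike;
# doubling the cores only speeds up the parallelizable fraction p.

def speedup_2x_clock(p):
    return 2.0  # p unused: every instruction runs twice as fast

def speedup_2x_cores(p):
    return 1.0 / ((1.0 - p) + p / 2.0)  # Amdahl's law with 2 cores

for p in (0.0, 0.5, 0.9, 1.0):
    print(f"p={p:.1f}: 2x clock {speedup_2x_clock(p):.2f}x, "
          f"2x cores {speedup_2x_cores(p):.2f}x")
# p=0.0: 2x clock 2.00x, 2x cores 1.00x
# p=0.5: 2x clock 2.00x, 2x cores 1.33x
# p=0.9: 2x clock 2.00x, 2x cores 1.82x
# p=1.0: 2x clock 2.00x, 2x cores 2.00x
```

Only a perfectly threaded program (p = 1.0) lets the 2x-core chip match the 2x-clock chip on a single workload.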
Personally, I'd think the ideal number of cores lies around four. A single core would always run at full load with a single app, so it'd sacrifice responsiveness. Dual cores mitigate that a lot. Quad might help further still. Beyond that is complete overkill.