Do you know of any tests that directly compare otherwise identical CPUs clocked at exactly the same frequency, where the only difference is 6MB of L3 versus 8MB of L3? Ideally I'd like to see how it affects each of Intel's recent architectures, i.e. Sandy Bridge, Ivy Bridge and Haswell; I wonder whether the architectures differ in how much they benefit from the additional cache.

Intel seems to put a very hefty premium on that additional cache. For example, the Core i7-3740QM is $378 while the Core i7-3820QM is $568. I know those two weren't launched at the same time, so they aren't directly comparable pricing-wise; the directly comparable part is the Core i7-3840QM, which offers a mere 100MHz frequency bump plus the additional cache, all for a 50% price hike. What adds more performance, the additional cache or the extra clock speed? And while the 8MB L3 Ivy Bridge model also had 50MHz more GPU turbo, in Haswell that advantage, however small, goes away:

Core i7-4810MQ: 2.8GHz, 6MB L3, $378
Core i7-4900MQ: 2.8GHz, 8MB L3, $568

Those are the CPUs I'd like to see compared directly. I know it's mostly an academic comparison, because the real upgrade is the Core i7-4910MQ, which is a mere 100MHz clock bump over the 4900MQ. Why does Intel price its fully functional CPUs so steeply? Is the performance gain from the additional cache more substantial than the gain from a mere 100MHz bump?

I had a 2500K, so I could have run some tests, recorded the results, and later compared them against my current CPU with HT disabled, all at the same frequency of course, but I didn't think of it at the time. Now I'm interested (a rough sketch of the kind of test I mean is at the end of this post).

I also remember a benchmark where a CPU was tested with the L3 disabled entirely and then with the L3 enabled in 2MB increments up to 8MB. Does anyone remember that test? How do I turn off a portion of my CPU's L3?
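For what it's worth, the DIY test I have in mind is a plain pointer-chase latency sweep: follow dependent loads through a randomly linked buffer and watch where the ns/access curve jumps as the working set grows past 6MB and 8MB. Here's a minimal C sketch; the buffer sizes, iteration count and timing method are just my own choices, not taken from any published test:

```c
/*
 * Minimal pointer-chase latency sweep (rough sketch). For each working-set
 * size, the loop follows a random cyclic permutation of array slots, so every
 * load depends on the previous one and the hardware prefetcher can't hide the
 * latency. The jump in ns/access as the buffer outgrows L3 is what a
 * 6MB-vs-8MB comparison would expose.
 */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase_ns(size_t bytes, size_t iters)
{
    size_t n = bytes / sizeof(size_t);
    size_t *next = malloc(n * sizeof(size_t));
    if (!next) { perror("malloc"); exit(1); }

    /* Sattolo's algorithm: a single cycle covering the whole buffer. */
    for (size_t i = 0; i < n; i++)
        next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;      /* j < i, never i itself */
        size_t tmp = next[i];
        next[i] = next[j];
        next[j] = tmp;
    }

    struct timespec t0, t1;
    volatile size_t p = 0;                  /* volatile keeps the loop alive */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++)
        p = next[p];                        /* serially dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    free(next);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    return ns / (double)iters;
}

int main(void)
{
    /* Working sets straddling the 6MB and 8MB L3 sizes in question. */
    size_t sizes_mb[] = { 1, 2, 4, 6, 8, 12, 16, 32 };
    for (size_t i = 0; i < sizeof sizes_mb / sizeof sizes_mb[0]; i++) {
        size_t bytes = sizes_mb[i] << 20;
        printf("%3zu MB : %6.2f ns/access\n", sizes_mb[i],
               chase_ns(bytes, 20 * 1000 * 1000));
    }
    return 0;
}
```

The idea would be to run it pinned to one core with Turbo and SpeedStep locked so both CPUs sit at the same frequency; the size at which latency steps up to DRAM levels is where the extra 2MB of L3 would (or wouldn't) show its worth.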