Originally posted by: RaptureMe
Plus I am confused about AMD quads, which have 3 levels of cache, vs. Intel's 2 levels?
I am starting to think that once apps get updated to take advantage of the extra level of cache, AMD may fly past Intel's quads, or am I just dreaming?
Applications are typically not coded to "take advantage of cache"...usually it's the other way around, and the cache is there to take advantage of the applications.
Generally, applications are cache-unaware. Compilers are cache-aware: when compiling source for a target architecture, they will restructure code to exploit the cache arrangement and speed up execution.
But by the time code gets recompiled and pushed out through the distribution channels, the hardware has usually iterated another generation beyond it, leaving a perpetual lag between software optimizations and hardware capabilities.
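As a rough illustration of what "cache-aware" means in practice, here is a minimal C sketch (mine, not from the thread; the array size and function names are arbitrary). Both functions sum the same array and never mention the cache, yet the row-major walk is typically much faster because consecutive accesses share cache lines; a cache-aware compiler may even interchange the loops of the slower version for you when optimizing.

/* Illustrative only: same arithmetic, two loop orders. */
#include <stdio.h>
#include <stdlib.h>

#define N 2048                      /* arbitrary size, ~32 MB of doubles */

static double a[N][N];

/* Row-major walk: consecutive accesses land in the same cache line. */
static double sum_rows(void)
{
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Column-major walk: each access jumps a whole row, so it keeps
 * missing the cache and usually runs noticeably slower. */
static double sum_cols(void)
{
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = (double)rand() / RAND_MAX;

    printf("row-major sum:    %f\n", sum_rows());
    printf("column-major sum: %f\n", sum_cols());
    return 0;
}

The point being: the speedup comes from how the compiler and hardware map the program onto the cache, not from the application "knowing" how many cache levels the chip has.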
Originally posted by: Rhoxed
Not to take anything away from Intel (they are definitely faster for the money right now), but when overclocking, the Phenom 9850BE actually scales better and performs better at 3 GHz+ than the Q6600 (or other 65nm Intels). Say you have a 3.2 GHz Phenom vs. a 3.2 GHz Intel: the Phenom will win in most benchmarks (keeping 65nm as the rule).
I will be utterly amazed if the Phenom does not draw at least 50% more power than the Q6600 when both are clocked to 3.2 GHz at the minimum Vcore needed for stable operation at that clock speed.
