
Ivy Bridge vs Haswell

GWestphal

Haswell is being reported as a 30% reduction in power compared to Sandy Bridge, so how will it compare to Ivy Bridge? It seems like tri-gate is the only major change happening, and that is common to both IB and HW, so will power consumption on both be pretty similar?
 
SB power consumption was better than Westmere's even though they were both on 32nm. I expect Haswell to include power-saving improvements beyond just being tri-gate 22nm. So yes, I think Haswell will be more power efficient than IB.
 
I read that Haswell will use some major power-saving technique that can supposedly idle at under 10W for the entire system (probably not including the graphics card). Of course, these are just rumors; who knows what actual improvements are in store for Haswell. But from the look of things, Haswell will concentrate on power improvements.
 
Now if AMD and Nvidia could start producing more power-efficient parts so the GPUs don't require 300W.

Nvidia and AMD did make some real progress in idle power use, noise, and heat with the 5xx/6xxx cards. The 68xx/69xx and my GTX 560 are all much better at idle than my old 4870 was.

Not so much in load power, but that's partly from being stuck at 40nm longer than planned.
 
I'm sure they do have lower-performing parts that use less, but those also have less horsepower. It just seems like CPUs more consistently make power efficiency a priority. Look at Intel: the P4 was a beast at something like 135W; now the Core series is at something like 75W, and 15-30W for mobile. In that same time, Nvidia has gone from about 100W to sometimes over 400W on their cards. Obviously some of that is process size, but 40nm isn't that big compared to SNB at 32nm. Maybe tri-gate will get licensed all over, though, which would help.
 
Intel still makes 130W CPUs; they simply don't market and sell them to the average consumer. Xeons can come with high TDPs. SB-E is going to run so hot that Intel wants to ship watercooling with it for the first time.

But those CPUs are expensive, and the benefits normally only show up in highly threaded scenarios and on motherboards with SAS and other such unnecessary tech. So most people don't even consider those chips, but the high-TDP stuff still exists.

For GPUs, however, we don't have such a monopoly driving the price into the thousands of dollars for the high end, so they compete at more palatable prices right up to the peak of the available power.
 
If Haswell is 30% more power efficient than Sandy Bridge (~67W TDP vs. 95W TDP) while having twice as many cores, that would be a pretty solid step forward in CPU tech.
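The arithmetic behind that estimate is simple enough to sanity-check. A minimal sketch, assuming the rumored 30% figure and core counts from this thread (not official specs):

```python
# Sanity check of the "30% more efficient" estimate above. The 30% figure
# and the doubled core count are thread rumors, not official Intel specs.
sandy_bridge_tdp = 95.0   # watts, SNB quad-core desktop TDP
reduction = 0.30          # rumored Haswell power reduction
haswell_tdp = sandy_bridge_tdp * (1 - reduction)
print(f"Projected Haswell TDP: {haswell_tdp:.1f}W")  # 66.5W, i.e. the ~67W above

# With twice the cores inside that budget, the per-core power budget
# would shrink to well under half of Sandy Bridge's:
per_core_snb = sandy_bridge_tdp / 4
per_core_hsw = haswell_tdp / 8
print(f"Per-core budget: {per_core_snb:.2f}W -> {per_core_hsw:.2f}W")
```

So "30% less power with twice the cores" really means roughly a 65% cut in power per core, which is why it would be such a big step.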
 
If we get twice as many cores without losing clock speed, then it will be a worthwhile upgrade, sort of. SB-E appeals for the same reason: it'll have six cores and hence might offer a genuine 100% boost once you take into account SB's IPC advantages, additional overclocking headroom, and the extra cores. But it is still only better in limited circumstances.

After writing concurrent programs for many years, I can say quite safely that it'll be a while before we can use multiple cores well.
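One common way to put numbers on why extra cores are only "better in limited circumstances" is Amdahl's law. A quick sketch, with purely illustrative serial fractions (not measurements of any real workload):

```python
# Amdahl's law: speedup from N cores when a fraction s of the work is
# inherently serial. The serial fractions below are illustrative guesses.
def amdahl_speedup(n_cores: int, serial_fraction: float) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

for s in (0.05, 0.20, 0.50):
    print(f"serial={s:.0%}: 4 cores -> {amdahl_speedup(4, s):.2f}x, "
          f"8 cores -> {amdahl_speedup(8, s):.2f}x")
```

Even 5% serial work caps 8 cores well below 8x, and at 50% serial, doubling from 4 to 8 cores barely moves the needle, which matches the "limited circumstances" caveat above.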
 
Personally I'm not interested in more cores, and I'm also not so much interested in having a laptop CPU in my desktop (sub-95W).

If AMD and Nvidia can figure out a way to cool >300W GPUs, then I'm ready for Intel and AMD to open up a product lineup that goes there too.

And do it the good old-fashioned way: keep it to 4-6 cores and just give me some nice high clock speeds. With configurable TDP, if I don't want 300W then I can set it to 95W and have the clock speed throttle itself accordingly.
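A rough sketch of what "set a TDP, let the clocks follow" would look like. Dynamic CPU power scales roughly with frequency times voltage squared, and voltage tends to scale with frequency, so power goes approximately as f³; all the numbers below are illustrative, not from any real part:

```python
# Approximate the frequency a hypothetical 300W, 5GHz part could sustain
# under a user-set TDP cap, assuming power scales roughly as f^3
# (dynamic power ~ f * V^2, with V scaling roughly with f).
def freq_for_tdp(tdp_cap_w: float, ref_freq_ghz: float, ref_power_w: float) -> float:
    # Invert P = P_ref * (f / f_ref)^3 to solve for f.
    return ref_freq_ghz * (tdp_cap_w / ref_power_w) ** (1.0 / 3.0)

# Dial the hypothetical 300W/5.0GHz chip down to a 95W budget:
print(f"{freq_for_tdp(95, 5.0, 300):.2f} GHz")  # ~3.41 GHz
```

The cubic relationship is why a two-thirds power cut only costs about a third of the clock speed, and conversely why the last GHz of an overclock is so expensive in watts.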
 
The mainstream CPUs have been "stuck" at quad cores for too long: Q6600 => Q9400 => i7-860 => i5-2500. It's time to move on to eight cores! Too bad it's not until Haswell.
 
This Haswell will be the end of overclocking as we know it.
IB only gives you better graphics.
I see two Haswells: one for high-end servers and one for laptops and low-TDP desktops.
 
This Haswell will be the end of overclocking as we know it.
IB only gives you better graphics.
I see two Haswells: one for high-end servers and one for laptops and low-TDP desktops.

What makes you say that? If anything, the number of models has been increasing, not the other way around...
 
I think it's too early to report on Haswell since we still have around 16-18 months before we see it.

Intel is good at increasing performance and reducing power consumption, so I would expect the same from IB and Haswell.
 
What makes you say that? If anything, the number of models has been increasing, not the other way around...

Tri-gate gave Intel a faster-than-normal way into the low-watt and mobile market.
Why did Intel cut back on SB-E goodies? It's like Intel is trying to get to the smaller Haswell as fast as it can.
 
Tri-gate gave Intel a faster-than-normal way into the low-watt and mobile market.
Why did Intel cut back on SB-E goodies? It's like Intel is trying to get to the smaller Haswell as fast as it can.

While I don't disagree that Intel is focusing on low-power, I doubt they would only have a few models. Too few price-discrimination opportunities :biggrin:


It will be interesting to see how Intel develops their configurable TDP feature from IB to Haswell. It seems borderline silly that we used to run our CPUs at a constant speed back in the day...
 
Personally I'm not interested in more cores, and I'm also not so much interested in having a laptop CPU in my desktop (sub-95W).

If AMD and Nvidia can figure out a way to cool >300W GPUs, then I'm ready for Intel and AMD to open up a product lineup that goes there too.

And do it the good old-fashioned way: keep it to 4-6 cores and just give me some nice high clock speeds. With configurable TDP, if I don't want 300W then I can set it to 95W and have the clock speed throttle itself accordingly.

They already did that. It's called the K series.
 
They already did that. It's called the K series.

I have a 2600K, it consumes ~270W at 5GHz with IBT.

But it doesn't have a warranty now that I OC'ed it, the expected lifetime of the CPU is questionable, and I have zero confidence that the CPU correctly processes the 700+ instructions in its ISA when operating at that clock speed, because all of our stress-test apps merely focus on a few select instructions to test for correct output.

It would be great if Intel actually verified and binned 5GHz, 300W TDP Sandys.
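The worry above is that stress tests only exercise a few instruction paths. A toy sketch of the redundancy idea such tools rely on: do the same work two different ways and compare, so a silently mis-executed instruction shows up as a mismatch. (Purely illustrative; IBT and Prime95 do this with large linear algebra and FFTs, not this little sum.)

```python
# Toy version of a self-checking stress kernel: compute a sum forward and
# backward and verify the results agree. On correctly functioning hardware
# math.fsum is exactly rounded, so both orders must match; a hardware fault
# that corrupts an intermediate value would break the comparison.
import math
import random

def check_once(rng: random.Random) -> bool:
    xs = [rng.uniform(0.0, 1.0) for _ in range(10_000)]
    forward = math.fsum(xs)             # reference pass
    backward = math.fsum(reversed(xs))  # same work via a different path
    return math.isclose(forward, backward, rel_tol=1e-12)

rng = random.Random(42)
print(all(check_once(rng) for _ in range(5)))  # True on healthy hardware
```

The poster's point stands, though: a kernel like this only covers the handful of instructions it happens to use, not the whole ISA.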
 
I have a 2600K, it consumes ~270W at 5GHz with IBT.

But it doesn't have a warranty now that I OC'ed it, the expected lifetime of the CPU is questionable, and I have zero confidence that the CPU correctly processes the 700+ instructions in its ISA when operating at that clock speed, because all of our stress-test apps merely focus on a few select instructions to test for correct output.

It would be great if Intel actually verified and binned 5GHz, 300W TDP Sandys.

That's not right. My 2600K at 5.1GHz with 8 threads and AVX is about 136W. Are you measuring total system power draw?

This is with a 1.55V core, and it's stable at 5.3GHz at this voltage.

[screenshot attachment: 155vk.jpg]
 
I have a 2600K, it consumes ~270W at 5GHz with IBT.

But it doesn't have a warranty now that I OC'ed it, the expected lifetime of the CPU is questionable, and I have zero confidence that the CPU correctly processes the 700+ instructions in its ISA when operating at that clock speed, because all of our stress-test apps merely focus on a few select instructions to test for correct output.

It would be great if Intel actually verified and binned 5GHz, 300W TDP Sandys.

That's an excellent point. Especially when those calculations are (financially) important.

Your post is making me wonder what (if anything) Via does (IIRC, they use Intel CPUs OC'ed to 5GHz to do modeling).

Though, you have to wonder about the perf/watt of OCing that high. For many (most) people, just using more (slower) cores is probably a winning proposition.
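A back-of-envelope comparison of the two approaches, again assuming the cubic power-vs-frequency scaling from earlier and perfectly parallel work (both generous simplifications, with made-up reference numbers):

```python
# Compare perf/watt of "few fast cores" vs "more slow cores", assuming
# power per core scales as f^3 and performance scales linearly with
# cores * frequency. Reference point (95W for 4 cores at 3.4GHz) and all
# derived numbers are illustrative, not measurements.
def perf_per_watt(n_cores: int, freq_ghz: float,
                  ref_power_w: float = 95.0, ref_freq_ghz: float = 3.4) -> float:
    power = n_cores * (ref_power_w / 4) * (freq_ghz / ref_freq_ghz) ** 3
    perf = n_cores * freq_ghz  # assumes embarrassingly parallel work
    return perf / power

fast = perf_per_watt(4, 5.0)  # four cores pushed to 5GHz
wide = perf_per_watt(8, 3.0)  # eight cores at a modest clock
print(f"4x5.0GHz: {fast:.3f} GHz/W   8x3.0GHz: {wide:.3f} GHz/W")
```

Under these assumptions the wide/slow configuration comes out well ahead on perf/watt, which is the "winning proposition" above; the catch, per the Amdahl discussion earlier in the thread, is that real workloads are rarely perfectly parallel.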
 
Personally I'm not interested in more cores, and I'm also not so much interested in having a laptop CPU in my desktop (sub-95W).

If AMD and Nvidia can figure out a way to cool >300W GPUs, then I'm ready for Intel and AMD to open up a product lineup that goes there too.

And do it the good old-fashioned way: keep it to 4-6 cores and just give me some nice high clock speeds. With configurable TDP, if I don't want 300W then I can set it to 95W and have the clock speed throttle itself accordingly.

+1. I'd like to have one of these.
 
If there are so many 2600Ks stable at 5200+, why don't I ever see anyone doing anything like F@H at over 5000MHz?
The CPU can and will correct a small number of errors from IBT.
 
I'd love a 5GHz stock CPU... but what would the price tag be? $10,000? Lol, if Intel priced them affordably then the rest of their CPUs would become dirt cheap... bye bye AMD.
 
If there are so many 2600Ks stable at 5200+, why don't I ever see anyone doing anything like F@H at over 5000MHz?
The CPU can and will correct a small number of errors from IBT.
Good point. There aren't many 2600Ks folding at high OCs because they would fail a lot of units. Sure, you can test for stability with Prime95 or LinX, but just because it doesn't crash or give you errors doesn't mean it is stable. Folding is quite intolerant of instability, even though the actual stress level on the hardware is less than with Prime95 and the others.

There's no telling when or what sort of error you might get from a failed unit. If you're lucky you'll get a BSOD, and then you'll know right away to adjust your OC. You might make it all the way to 99% completion before it fails. You might even complete the unit seemingly well, but then the results server tells you that the results are bad. Fun stuff! :awe:
 