Intel "Haswell" Speculation thread


jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
Also mainstream desktop 4C GT2 @ 65W TDP

I'm guessing that's the S model. Intel sells 65W and 45W 4C Ivies right now, so it wouldn't be any different. The sheet does make it look like only Xeons are getting the 95W parts though.

That fruit company will be happy with the 2C GT3 Ultrabook.
 
Last edited:

BenchPress

Senior member
Nov 8, 2011
392
0
0
While all these new instructions are very nice, it's probably going to be years before they are applied to mainstream software. Look what happened when SSE came out; it took about 5 years.

Anyone know how much software utilizes AVX, which has been out since the release of SNB? Hell, even Windows didn't support it until W7 SP1, and there will still be a lot of older OSes and CPUs that will not support AVX for quite a while. So IMHO there seems to be little incentive for software houses to produce software that incorporates the newer instructions until they are mainstream, except maybe in the case of a few niche programs.

IOW while the technology is exciting, early adoption just doesn't seem that appealing.
AVX2 can't be compared to anything that has come before. It will be the first SIMD instruction set extension that is truly suitable for the SPMD programming model. This is the same programming model used by GPUs. It allows scalar code, which is easy to program, to be vectorized in a straightforward manner. AVX2 will run data-parallel code up to eight times faster.

So unlike previous extensions, the gain is huge while the effort is low. What will also help speed up adoption is that the specification was released over a year ago, and all major compilers already support it. And since AVX already introduced the 256-bit registers, operating systems don't have to do anything more for AVX2.
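To make the "vectorize scalar code" point concrete, here is a minimal sketch in C using AVX2 intrinsics (immintrin.h, built with -mavx2 or /arch:AVX2); the function names are just illustrative, not from any particular codebase:

#include <immintrin.h>

/* Scalar version: one 32-bit add per iteration. */
void add_scalar(int *c, const int *a, const int *b, int n)
{
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

/* AVX2 version: eight 32-bit adds per iteration (assumes n is a multiple of 8).
   256-bit integer operations like _mm256_add_epi32 are what AVX2 adds on top of AVX. */
void add_avx2(int *c, const int *a, const int *b, int n)
{
    for (int i = 0; i < n; i += 8) {
        __m256i va = _mm256_loadu_si256((const __m256i *)&a[i]);
        __m256i vb = _mm256_loadu_si256((const __m256i *)&b[i]);
        _mm256_storeu_si256((__m256i *)&c[i], _mm256_add_epi32(va, vb));
    }
}

The loop structure doesn't change; each iteration simply processes eight elements instead of one, which is where the "up to eight times faster" figure comes from.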
 

Mars999

Senior member
Jan 12, 2007
304
0
0
So my question is this... I have a 2600K now; would moving to a Haswell 4700K, or whatever they call the $330 replacement CPU, be worth it over what I have? I see the 3770K is only ~10-15% faster than my 2600K at best, and in games there's not much difference.
 

Dufus

Senior member
Sep 20, 2010
675
119
101
Mars999, IMO stick with your 2600K for a few years. There's no need to rush in and buy Haswell unless you have some specialized software you need to run on it, or you're a hardware junkie who likes to play with the latest hardware. Sit back and see how it goes ;)

BenchPress, any Windows OS before W7 SP1 does not save the AVX registers across a context switch and therefore cannot run AVX code successfully. AVX also does not play well with HTT when two threads running on two logical cores map to the same physical core, so it seems to me it's not as simple as it might sound. I really see little incentive for the majority of software houses to provide specialized AVX2 code paths until there is an appreciable user base. How long have 64-bit processors been out, and how long has it taken software houses to provide 64-bit software, even though their compilers could produce 64-bit code before 64-bit processors were even released? ;) Why have software houses in so many cases shipped 32-bit software where 64-bit might be more efficient? Because 32-bit still gets the job done and works on more systems. If it turns out I have presumed wrongly regarding AVX2, that would be great, but I don't have much hope for it being otherwise.
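For reference, the runtime check applications typically do before using AVX is exactly about that OS support: besides the CPUID AVX bit, the OSXSAVE bit and XCR0 have to show that the OS saves the YMM state on context switches. A rough GCC/Clang-style sketch (the helper name is just illustrative):

#include <cpuid.h>
#include <stdbool.h>

/* Returns true only if the CPU supports AVX *and* the OS has enabled XSAVE
   for the XMM/YMM state, which is what pre-SP1 Windows 7 does not do. */
static bool os_supports_avx(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return false;
    if (!(ecx & (1u << 27)))          /* OSXSAVE: OS uses XSAVE/XRSTOR   */
        return false;
    if (!(ecx & (1u << 28)))          /* AVX: CPU supports the extension */
        return false;
    unsigned int xcr0_lo, xcr0_hi;
    __asm__ volatile("xgetbv" : "=a"(xcr0_lo), "=d"(xcr0_hi) : "c"(0));
    return (xcr0_lo & 0x6) == 0x6;    /* OS saves both XMM and YMM state */
}

Software that skips this check and just executes VEX-encoded instructions on an older OS will fault, which is one reason everything still ships an SSE fallback.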

While these new instructions are powerful, IMHO it will be a long time before they become mainstream, so Mars999, just relax and see how things develop. For myself, I have an oldish OC'd C2D laptop that will probably be due for replacement next year, and in that case Haswell should make sense, as I like to get at least 5 years of use out of a machine before upgrading.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
Haswell detailed & release dates:

http://wccftech.com/intel-haswell-d...-launching-q2-2013-core-gpu-details-revealed/

And some pricing info:

http://www.fudzilla.com/home/item/28523-haswell-starts-from-$184

Interesting that they will produce a mobile 2C GT3 chip @ 15W TDP. I thought the die area and hence TDP for such a chip would be quite large, considering the GT3?

Also mainstream desktop 4C GT2 @ 65W TDP (I assume they correspond to IB 3570/3770K?). Somehow they have been able to lower the TDP from 77W->65W compared to IB without a node shrink? :confused:

It was well known that the TDP would go up with IB and then down again with Haswell.
 

WhoBeDaPlaya

Diamond Member
Sep 15, 2000
7,414
401
126
So my question is this... I have a 2600K now; would moving to a Haswell 4700K, or whatever they call the $330 replacement CPU, be worth it over what I have? I see the 3770K is only ~10-15% faster than my 2600K at best, and in games there's not much difference.
Isn't the difference more like 4% max?
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,707
182
106
It was well known that the TDP would go up with IB and then down again with Haswell.

The TDP didn't go up with IB; it went down (from 95W->77W, 2500K/2600K vs 3570K/3770K). And now the TDP seems to go down even further with Haswell, from 77W->65W. But this time it is not due to a node shrink, so it is kind of impressive if it really is reduced as much as stated on the webpage I linked to.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
It was well known that the TDP would go up with IB and then down again with Haswell.

TDP is only going down with the Ultrabook "Ultra" Haswell. The rest are staying the same or going up. If you look at the first link, you'll see it's available up to 95W.

The thing is, though, they are talking about embedded products, meaning the SKUs are not exactly representative of the PC lineup. I've seen somewhere that desktop Haswell TDP goes up to 105W.
 

Dufus

Senior member
Sep 20, 2010
675
119
101
Sounds reasonable, IntelUser2000. CPU TDP seems to be designed around a power form factor. For instance, mid-range laptops are typically designed for a CPU with a 35W TDP, so rather than lowering the CPU TDP, higher clocks or more cores can be incorporated into CPUs that are more power efficient than the earlier architecture at the same TDP. Of course, with C-states and IST the CPU can run at lower power levels when required.
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
92
91
I hope Intel places an SSD controller on the die. I was calling for this 5 years ago. Who listened? Not AMD, not Intel. The ARM SoC designers listened. lol. When I see this sort of thing happen over and over, it is easy to predict the trends.

There is no reason to do this. The chipset to CPU link is over multiple PCI-E lanes that are plenty fast enough to handle SSDs. If you need higher throughput, get a PCI-E SSD card. There is zero benefit in adding a restrictive interface on extremely valuable CPU real estate when PCI-E <-> SATA through the chipset is still perfectly fine.
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
that's because with today's CPUs it's quite hard for developers to juggle many threads (because these CPUs only have very primitive and slow means for synchronizing between threads).
I am a developer myself and I can tell you flat out that what you say is only half true. It is indeed hard to juggle many threads, but it is not because of a lack of hardware support. It's just hard to juggle many threads, period. What really needs to happen is better language support.
 

Ajay

Lifer
Jan 8, 2001
15,332
7,792
136
I am a developer myself and I can tell you flat out that what you say is only half true. It is indeed hard to juggle many threads, but it is not because of a lack of hardware support. It's just hard to juggle many threads, period. What really needs to happen is better language support.

I find debugging to be the worst part, especially in a multi-tiered system. At least IDEs are getting better at adding debugging support for threads. There are better languages for multi-threaded programming than C++, but who wants to learn OCCAM :|
 

Borealis7

Platinum Member
Oct 19, 2006
2,914
205
106
If Intel gets its act together again on the thermals, then an unlocked Haswell could be worth it for people with Sandy Bridges.
 

khon

Golden Member
Jun 8, 2010
1,319
124
106
I had actually planned to buy an IVB laptop, but after seeing all the thermal problems, I decided to skip it.

So for me the main thing Haswell needs to improve is the temperature. I don't want a laptop that overheats in seconds, or makes the keyboard too hot to use.

If Haswell can provide similar CPU performance, better IGP and lower temperature, then I'm buying one.
 

DigDog

Lifer
Jun 3, 2011
13,444
2,084
126
Well, since this is a speculation thread, I speculate that Haswell will have some decent thermal paste inside this time around.
 

dma0991

Platinum Member
Mar 17, 2011
2,723
1
0
I had actually planned to buy an IVB laptop, but after seeing all the thermal problems, I decided to skip it.

So for me the main thing Haswell needs to improve is the temperature. I don't want a laptop that overheats in seconds, or makes the keyboard too hot to use.

If Haswell can provide similar CPU performance, better IGP and lower temperature, then I'm buying one.
If the heat issue with IB is that it uses TIM under the IHS instead of solder, it won't affect laptops. All laptop CPUs are lidless, meaning they do not come with an IHS like desktop CPUs do. So for most laptop assemblies it will be a bare die with the manufacturer's TIM of choice. It has always been this way.
 

Borealis7

Platinum Member
Oct 19, 2006
2,914
205
106
Right, but Haswell will have a MUCH better IGP for laptop gaming.
I know a guy on the Haswell IGP team at Intel; he told me the IGP is somewhere between "much better" and "a hell of a lot better" than IB... but of course he'll say that ;)
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
AVX also does not play well with HTT when two threads running on two logical cores map to the same physical core, so it seems to me it's not as simple as it might sound.
I haven't heard of any problems with AVX and Hyper-Threading. Do you have a source for this or can you explain what you believe to be the issue?
I really see little incentive for the majority of software houses to provide specialized AVX2 code paths until there is an appreciable user base. How long have 64-bit processors been out, and how long has it taken software houses to provide 64-bit software...
That's not remotely comparable. 64-bit support offers the ability to address more than 4 GB of memory, for which there was no strong need in consumer applications back in 2003. Heck, there's still very little need for it today. So even a decade later the adoption of 64-bit is slow and gradual, without much, if any, desire for faster adoption.

In contrast, AVX2 offers a huge incentive by offering an eightfold vectorization of scalar code. It's practically the same incentive as GPGPU, except in a much more developer-friendly form. So developers of applications that can use the extra performance will definitely adopt it sooner rather than later. I'm pretty sure Intel is handing out engineering samples to developers of certain multimedia applications and such, so there will be a significant number of AVX2-accelerated applications on the day of Haswell's launch.
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
I am a developer myself and I can't tell you flat out that what you say is only half true. It is indeed hard to juggle many threads but it is not because of lack of hardware support. It's just hard to juggle many threads period. What really needs to happen is better language support.
Please connect the dots. For better language support you need fast transactional memory and lock elision. It's really no coincidence that these are exactly the features offered by Haswell's TSX. Intel has no doubt cooperated with the leading language designers and multi-core programming researchers to determine what's needed to get developers to use more cores.
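To illustrate what TSX buys you at the code level, here is a rough lock-elision sketch using the RTM intrinsics (_xbegin/_xend/_xabort from immintrin.h, built with -mrtm); it assumes a CPUID check for RTM has already been done, and all the names are purely illustrative:

#include <immintrin.h>
#include <stdatomic.h>

/* Toy shared counter guarded by a simple spinlock. With TSX, the common case
   runs as a hardware transaction and never takes the lock; the lock is only a
   fallback when the transaction aborts. */
static atomic_int lock_taken;          /* 0 = free, 1 = held */
static long counter;

static void spin_lock(void)   { while (atomic_exchange(&lock_taken, 1)) ; }
static void spin_unlock(void) { atomic_store(&lock_taken, 0); }

void increment(void)
{
    if (_xbegin() == _XBEGIN_STARTED) {
        if (atomic_load(&lock_taken))  /* lock held by a fallback path:      */
            _xabort(0xff);             /* abort so we don't race with it     */
        counter++;                     /* speculative, lock-free update      */
        _xend();                       /* commit                             */
    } else {                           /* transaction aborted: take the lock */
        spin_lock();
        counter++;
        spin_unlock();
    }
}

The point is that a language or runtime can hide this pattern entirely, behind ordinary locks or an atomic-block construct, and still get the cheap optimistic path from the hardware.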
 

IntelCeleron

Member
Dec 10, 2009
41
0
66
Looks like Haswell is integrating the voltage regulator, according to Fudzilla anyway.

The job of a component named FIVR (fully integrated voltage regulator) is to integrate legacy power delivery onto the processor package and die. Current processors on the market work via PLL voltage regulators, and even though we don't understand this part well enough to brag about it, we know that the fully integrated voltage regulator will enable designs with fewer components and save manufacturers a few cents / bucks.

Intel claims that getting the FIVR onto the die / package will greatly simplify platform power design and that it can consolidate five platform voltage regulators into just one. Haswell will now come with an input voltage regulator feeding the FIVR, and this should offer better architectural flexibility and finer-grained on-die power delivery control.

http://www.fudzilla.com/home/item/2...tor?utm_source=twitterfeed&utm_medium=twitter
 

Dufus

Senior member
Sep 20, 2010
675
119
101
I haven't heard of any problems with AVX and Hyper-Threading. Do you have a source for this or can you explain what you believe to be the issue?
A quote from Intel regarding Linpack, a very scalable benchmark that uses AVX.
Intel said:
Intel Optimized LINPACK Benchmark is threaded to effectively use multiple processors. So, in multi-processor systems, best performance will be obtained with Hyper-Threading technology turned off, which ensures that the operating system assigns threads to physical processors only.
While running two Linpack threads on separate physical cores may yield a ~1.9x increase in throughput over a single thread, running two threads on two logical cores (HTT) belonging to the same physical core results in lower throughput than a single thread; performance actually degrades. Linpack can be freely downloaded if you want to try it yourself.
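For anyone who wants to reproduce this without turning HTT off in the BIOS, the usual workaround is to pin one compute thread per physical core. A minimal Linux sketch with pthread affinity (the helper name and the core-numbering caveat are illustrative only):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin a thread to one chosen logical CPU so two heavy threads don't end up on
   two logical cores of the same physical core. The mapping of logical CPU
   numbers to physical cores should be read from /proc/cpuinfo or sysfs rather
   than assumed. */
static void pin_to_cpu(pthread_t thread, int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(thread, sizeof(set), &set);
}

Run that way, a 4C/8T machine with HTT enabled behaves like the HTT-off case Intel describes: one Linpack thread per physical core.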
 

jones377

Senior member
May 2, 2004
450
47
91
A quote from Intel regarding Linpack, a very scalable benchmark that uses AVX. While running two Linpack threads on separate physical cores may yield a ~1.9x increase in throughput over a single thread, running two threads on two logical cores (HTT) belonging to the same physical core results in lower throughput than a single thread; performance actually degrades. Linpack can be freely downloaded if you want to try it yourself.

It has nothing to do with AVX. Nehalem has the same problem with Linpack/SSEx. SMT in this case degrades performance slightly.