
Intel "Haswell" Speculation thread

Page 9
That is smart of AMD, pursuing support for both and on a timeline that beats Intel. But given the reality of the lag between hardware availability and software adopting new ISA extensions (think tessellation in DX11, for example), there is little benefit for AMD to leverage from this "win".

There may be a benefit in people optimizing code for AMD processors, since for a time it will be the only hardware that natively supports that ISA extension. While I doubt many will do so, it is a possibility.

I haven't coded for years now, and when I did it was on TI and Hitachi microcontrollers, so I am probably talking out of turn here. That was just my reasoning for why you would want to get hardware out first. (I know when I was developing electric power steering, I used vehicles that were currently available and had most of the specs I needed to do early development work. I could see others doing the same here on AMD processors.)
 
Well, I for one am tired of my desktop pumping out heat like it's a space heater. And mine is only pulling 200 watts from the wall. It is amazing how much heat you get from 200 watts! If Haswell can give me the same CPU and GPU performance as my current i5-750 @ 3.4GHz / HD 5770 for only 15 watts, then I'd be happy to replace it with a tablet. I seriously doubt it is going to be able to do that. Luckily a 5770 is a bit beyond what I need, so there is a chance that whatever fits into a tablet form factor will still be enough.
 
You guys are forgetting: Haswell will have 6-core/12-thread models for desktop.

I see more than just 5 percent IPC, more like 15 to 25 percent... and throw in quad-channel 2400MHz memory and an SSD, and Haswell will pwn us all.
 
You guys are forgetting: Haswell will have 6-core/12-thread models for desktop.

I see more than just 5 percent IPC, more like 15 to 25 percent... and throw in quad-channel 2400MHz memory and an SSD, and Haswell will pwn us all.

Why does everyone keep saying this? Intel's slides clearly show 2/4 cores for Haswell.
 
As long as they keep improving performance and efficiency with higher IPC, higher clocks, and lower power consumption each generation, I don't see the need to release hexa-cores for mainstream users. Higher single-threaded performance translates to more real-world performance more often than more cores do in client apps.
 
As long as they keep improving performance and efficiency with higher IPC, higher clocks, and lower power consumption each generation, I don't see the need to release hexa-cores for mainstream users. Higher single-threaded performance translates to more real-world performance more often than more cores do in client apps.

Exactly my thoughts.
 
Is anyone else as excited about this new architecture as I am? Even if the performance only goes up by a small amount, it will introduce and support pretty much all the standards on the road map for the foreseeable future. I can't think of a better reason to upgrade, as it will likely last you many many years before retiring.
 
Is anyone else as excited about this new architecture as I am? Even if the performance only goes up by a small amount, it will introduce and support pretty much all the standards on the road map for the foreseeable future. I can't think of a better reason to upgrade, as it will likely last you many many years before retiring.

Pardon my ignorance, but could you explain what you mean by "it will introduce and support pretty much all the standards on the road map for the foreseeable future"?

Also, don't most processors that have come out in the last few years fall under the "it will likely last you many many years before retiring" category?
 
Pardon my ignorance, but could you explain what you mean by "it will introduce and support pretty much all the standards on the road map for the foreseeable future"?

Also, don't most processors that have come out in the last few years fall under the "it will likely last you many many years before retiring" category?

Google AVX2...
 
I am absolutely looking forward to Haswell. My last processor was an FX-55, so I imagine this will be like going from dial-up to broadband....lol

The best part is I plan on building a new gaming/multimedia PC with my tax refund, and reports indicate that is exactly when Haswell is being released.
 
It looks like GT3 will only be available on mobile CPUs. I am concerned that Intel is shifting focus to mobile and Xeon CPUs while artificially limiting the capabilities of traditional desktop CPUs. We are already seeing this with overclocking, and now with graphics as well.

http://www.cpu-world.com/news_2012/2012071001_Launch_schedule_of_Intel_Haswell_processors.html


Wow, that sucks. The Intel GPU works great in Linux. I have a few workstations based on the 3770K and its built-in GPU running the 3.5 kernel.

I would love to see GT3 in the K-series chips they release in 2013.
 
Wow, that sucks. The Intel GPU works great in Linux. I have a few workstations based on the 3770K and its built-in GPU running the 3.5 kernel.

I would love to see GT3 in the K-series chips they release in 2013.

I would love for them to rip the IGP far, far away from my CPUs...
 
Does anyone know, or have any clue, what improvements LGA1150 motherboards will have over Z77?

Haswell support

cheaper (theoretically)

MAYBE true triple-monitor support for the IGP

otherwise everything stays where it is: USB3, PCIe 3.0, SATA3, DDR3, etc
 
Just saw on Wikipedia that Haswell is expected to have an L2 Trace Cache. If it does, what exactly does this entail?

I know NetBurst employed an L1 trace cache, but I don't really know how that affected performance.
 
Does anyone know, or have any clue, what improvements LGA1150 motherboards will have over Z77?

The 8-series chipset hasn't changed much (and why would it?). You basically get more SATA 6Gb/s ports and more USB 3.0.

The other big change is the on-die VRM, with voltage regulation moving into the CPU package. So you will notice motherboards looking quite different.
 
Just saw on Wikipedia that Haswell is expected to have an L2 Trace Cache. If it does, what exactly does this entail?

I know NetBurst employed an L1 trace cache, but I don't really know how that affected performance.
That expectation seems a bit sketchy. First of all, Sandy/Ivy Bridge don't have a trace cache but a uop cache, and going back to a NetBurst-style trace cache for Haswell is extremely unlikely.

Even if we assume they meant an L2 uop cache, it still doesn't make a whole lot of sense. Uops are relatively big, and the purpose of the uop cache is to lower the branch misprediction latency and save on decoding power. Missing the L1 uop cache and having to access an L2 uop cache adds latency, so there's probably no gain there. And the L1 uop cache already has a hit rate of 80% so an L2 uop cache wouldn't significantly lower the decoder activity either. Besides, the uop cache is really more like an L0 instruction cache. So I don't see where another cache level would fit in.

The only thing I can imagine is that the L1 instruction cache would contain predecoding information. It would lower the branch misprediction latency by a bit and potentially save some power. More importantly this could also allow them to perform macro-op fusion between a mov and an arithmetic instruction. The predecoder could check for operand dependencies in a power efficient way since it wouldn't be pressed for time if it sits between the L2 and L1 cache. It also saves uop cache space since they are non-destructive already.

This would improve Haswell's scalar IPC by several percent.
 
That expectation seems a bit sketchy. First of all, Sandy/Ivy Bridge don't have a trace cache but a uop cache, and going back to a NetBurst-style trace cache for Haswell is extremely unlikely.

Even if we assume they meant an L2 uop cache, it still doesn't make a whole lot of sense. Uops are relatively big, and the purpose of the uop cache is to lower the branch misprediction latency and save on decoding power. Missing the L1 uop cache and having to access an L2 uop cache adds latency, so there's probably no gain there. And the L1 uop cache already has a hit rate of 80% so an L2 uop cache wouldn't significantly lower the decoder activity either. Besides, the uop cache is really more like an L0 instruction cache. So I don't see where another cache level would fit in.

The only thing I can imagine is that the L1 instruction cache would contain predecoding information. It would lower the branch misprediction latency by a bit and potentially save some power. More importantly this could also allow them to perform macro-op fusion between a mov and an arithmetic instruction. The predecoder could check for operand dependencies in a power efficient way since it wouldn't be pressed for time if it sits between the L2 and L1 cache. It also saves uop cache space since they are non-destructive already.

This would improve Haswell's scalar IPC by several percent.

You should go and work at Intel. :biggrin:

assuming you don't already work there.
 
I am absolutely looking forward to Haswell. My last processor was an FX-55, so I imagine this will be like going from dial-up to broadband....lol

The best part is I plan on building a new gaming/multimedia PC with my tax refund, and reports indicate that is exactly when Haswell is being released.

Well, I'm sending this P4 (Socket 478) Prescott out to pasture when Haswell comes out. Unless they decide to keep the paste under the IHS like Ivy. Then I may decide to vote with my dollar and support the underdog. Say what you want about Prescotts... at least they have a soldered lid. Mine has given a good long run of faithful service over years of daily use.
 