Should I wait for the Intel Haswell processor to build a gaming rig?


cytg111

Lifer
Mar 17, 2008
26,169
15,590
136
Also, some of the Haswell instructions are already in the BD CPUs. So once Intel adopts them in Haswell, there will be nothing stopping developers from using it. I expect to see gains from Haswell's new instruction within a year of release for some applications. Majority of applications within 2-3 years.

- That still makes it 3-4 years+ into the future. On top of what BenchPress said

It is vital to realize that unlike previous SIMD extensions, AVX2 requires a mere recompilation to benefit from it. That's because it's the first extension that has vector equivalents of every major scalar instruction. Hence every code loop with independent iterations can be easily vectorized.

- I have no real understanding of what that means in terms of being totally different from other SIMD extensions (I am a programmer, high and low, but never dealt with graphics or SIMD), why it is totally different to have equivalents of every major scalar instruction or not; either way, in my mind it would be a 'simple' matter of recompilation.. and thus a patch for your favorite game. What minimum architecture are games compiled for these days? I assume we've moved past the 80386. The theoretical throughput for SIMD/3DNow!/whatever has always "blown our mind" .. I have yet to get blown (meeh).

But yes, Haswell does come with a lot of small mmm's that may all add up to Umpf! .. Remains to be seen, 3-4-5 years down the line.
 

scannall

Golden Member
Jan 1, 2012
1,960
1,678
136
For me anyway, I am skipping the 'Bridges' entirely. Yes, they are great, but so far my 4 GHz i7-920 rips through everything I throw at it. By the time Haswell arrives I'll be ready for something else.
 

Mars999

Senior member
Jan 12, 2007
304
0
0
I wanted to build a gaming rig within the next 6 months, but I have read that the Intel Haswell processor will be coming out in the spring of 2013. Now I figure that it would be worth it to wait to build my gaming rig until the Haswell processor comes out. That way I would not have to upgrade my motherboard or other components if I decided to upgrade to a Haswell processor in the future. So, as of right now, would it be worth it to wait for the Haswell processor to come out and then build my gaming rig? Also, how much more powerful will the Haswell processor be compared to the Ivy Bridge processors?

IMO get IB now and a Z77 MB. Forget Haswell and wait until Broadwell at 14nm or Skylake hits in a few years... No need to worry; IB and SB are FAST enough for games for quite a few years...
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
- That still makes it 3-4 years+ into the future. On top of what BenchPress said
For the record, applications that made use of SSE4 were available on the day Nehalem launched. The only problem was, SSE4 only offered an improvement for a narrow field of applications.

AVX2 on the other hand is incredibly generic. It widens every integer SIMD instruction from 128-bit to 256-bit, and adds FMA for twice the peak floating-point performance. So each and every application that used SIMD before will have access to twice the throughput with Haswell!

But that's not all. Thanks to the addition of gather and vector-vector shift instructions, a slew of applications which previously couldn't benefit from SIMD, suddenly become prime candidates...
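
To make the FMA part concrete, here is a minimal sketch (my own illustration, not code from the thread; the function name is hypothetical and it assumes a compiler with AVX/FMA enabled, e.g. -mavx -mfma):

```cpp
#include <immintrin.h>

// y[i] = a * x[i] + y[i] for eight floats at once (a classic axpy step).
// The multiply and the add retire as one fused multiply-add instruction,
// which is where the "twice the peak floating-point performance" comes from.
void axpy8(float a, const float* x, float* y) {
    __m256 va = _mm256_set1_ps(a);      // broadcast the scalar a to all 8 lanes
    __m256 vx = _mm256_loadu_ps(x);     // load 8 floats from x
    __m256 vy = _mm256_loadu_ps(y);     // load 8 floats from y
    vy = _mm256_fmadd_ps(va, vx, vy);   // single instruction: va * vx + vy
    _mm256_storeu_ps(y, vy);            // store the 8 results back to y
}
```
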
- I have no real understanding of what that means in terms of being totally different from other SIMD extensions (I am a programmer, high and low, but never dealt with graphics or SIMD), why it is totally different to have equivalents of every major scalar instruction or not...
Thanks for asking. It used to be that to benefit from SIMD, you needed code that was explicitly using vector math. For instance to move an object in 3D space you'd add a 3-component displacement vector to a 3-component position vector. SSE uses 4 x 32-bit component vector registers, so you're not making use of the fourth component in this example, but still, it's faster than adding each component separately.

AVX widens the vectors to 256-bit, so that's 8 x 32-bit. Now obviously this seems like a big waste since most vector code will consist of ~3 components. But the trick is to not process one vector at a time, but eight at a time! Each AVX register would hold one component of eight different vectors. So you can move eight 3D object positions with three vector additions. For this example, the AVX code would be 2.7x faster.

The caveat is that you need to get the original eight 3-component vectors into three 8-component vectors. This is where AVX2's gather support comes in super handy. It can load eight 32-bit values from non-consecutive locations in memory.

Now, the really exciting part is that this can be done for every code loop with independent iterations. Eight iterations can run simultaneously, each using one component of the AVX registers. And this is where AVX2's vector-vector shift instruction is critical. A lot of code loops contain shift instructions, but since the vectorized form uses shift values from eight different iterations, each element has to be shifted by its own amount. Prior to AVX2 such an instruction was not available and the shifts would have to be performed one component at a time. With AVX2, that can be done eight times faster.
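
To make that layout concrete, here is a minimal sketch (my own illustration, not from the post; the struct and function names are hypothetical, and it assumes AVX2 support, e.g. -mavx2): it moves eight array-of-structures positions with three vector additions, using a gather to pull one component of eight different vectors into each register.

```cpp
#include <immintrin.h>

struct Vec3 { float x, y, z; };   // array-of-structures layout, as in typical game code

// Adds the same displacement d to the eight positions p[0..7].
void move_eight(Vec3* p, Vec3 d) {
    // Indices of the x components of eight consecutive Vec3s (stride of 3 floats).
    const __m256i idx = _mm256_setr_epi32(0, 3, 6, 9, 12, 15, 18, 21);
    const float* base = &p[0].x;

    // Gather: one component of eight different vectors per register.
    __m256 xs = _mm256_i32gather_ps(base + 0, idx, 4);   // scale = sizeof(float)
    __m256 ys = _mm256_i32gather_ps(base + 1, idx, 4);
    __m256 zs = _mm256_i32gather_ps(base + 2, idx, 4);

    // Three vector additions move all eight positions.
    xs = _mm256_add_ps(xs, _mm256_set1_ps(d.x));
    ys = _mm256_add_ps(ys, _mm256_set1_ps(d.y));
    zs = _mm256_add_ps(zs, _mm256_set1_ps(d.z));

    // AVX2 has no scatter instruction, so write the results back component-wise.
    alignas(32) float tx[8], ty[8], tz[8];
    _mm256_store_ps(tx, xs);
    _mm256_store_ps(ty, ys);
    _mm256_store_ps(tz, zs);
    for (int i = 0; i < 8; ++i) { p[i].x = tx[i]; p[i].y = ty[i]; p[i].z = tz[i]; }
}
```
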
either way in my mind it would be a 'simple' matter of recompilation..
Compilers have been pretty bad at auto-vectorizing code. And that's precisely because all previous SIMD instruction sets lacked things like gather and vector-vector shift. With AVX2 this changes radically, because vectorizing a loop becomes straightforward when you have vector equivalents of every scalar instruction. The code looks nearly the same; it's just skipping ahead eight iterations at a time.
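
For what that looks like in source code, here is a hedged example of mine (not from the post) of the kind of loop being described: every iteration is independent and the shift amount varies per element, so vectorizing it needs AVX2's vector-vector shift. A compiler targeting AVX2 (e.g. gcc -O3 -mavx2) can emit 8-wide SIMD for this without the source changing.

```cpp
// Each iteration is independent, so eight of them can run in one AVX2 register.
// The per-element shift count comes from shift[i], which is exactly what the
// vector-vector shift instruction handles. __restrict just tells the compiler
// the arrays don't overlap, so it can vectorize without alias checks.
void scale_by_shift(int* __restrict out, const int* __restrict in,
                    const int* __restrict shift, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = in[i] << shift[i];
}
```
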
What minimum architecture are games compiled for these days? I assume we've moved past 80386.
Most games nowadays demand SSE2 as a bare minimum.
The theoretical throughput for SIMD/3DNow!/whatever has always "blown our mind" .. I have yet to get blown (meeh).
Again that's because the current SIMD extensions have only been applicable in narrow fields, and even when they're applicable the full potential is left unused due to a few missing instructions. With AVX2 Intel will finally offer everything that's required to make it useful for the vast majority of software.
But yes, Haswell does come with a lot of small mmm's that may all add up to Umpf! .. Remains to be seen, 3-4-5 years down the line.
It won't take that long. Developers of high-performance applications would be insane to leave the potential of AVX2 and TSX unused for years.
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
No need to worry; IB and SB are FAST enough for games for quite a few years...
You're making the assumption that games won't become more demanding soon. But this has only been true so far because console hardware hasn't changed in the last six years. Next-generation games however will target next-generation consoles which have much more powerful hardware.

So which games do you want to run smoothly, the old ones, for which your old setup suffices, or the new ones, which will require Haswell to be able to enjoy them in full glory?
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Next-generation games however will target next-generation consoles which have much more powerful hardware.

I think it's also not 100% correct to assume that consoles are the sole reason performance demands have been decreasing with games. Look at Blizzard games, for example: they are super popular, yet they focus solely on the PC. Same with Valve. They don't make the most demanding games in the world, but they are damn good games.

The real reason the focus on intense graphics has decreased is that game developers finally got it into their heads that they don't need to pursue the best-looking game to make the best-playing one (and in turn the best-selling one). That is also why they rehash previously popular games like MMOs, Call of Duty, and Halo over and over.

Why spend a fortune on new development, with a new engine with high compute demands (which limits the customer base), when they could just modify what's almost perfect/proven to sell?
Originally Posted by dave2849
I wanted to build a gaming rig within the next 6 months, but I have read that the Intel Haswell processor will be coming out in the spring of 2013.
If you are going to get one, get a new-architecture chip, like Haswell. Broadwell will likely be a small enhancement (plus you'd be waiting an additional year to buy a system), kinda like Ivy Bridge.
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
I think it's also not 100% correct to assume that consoles are the sole reason performance demands have been decreasing with games.
Let's get this straight first: The people asking whether they should get Ivy Bridge or wait for Haswell are not the kind of people seeking to build a system for casual gaming. They want to understand how these architectures will run current and future demanding games.

So while I agree that a popular game isn't necessarily a demanding game, I do believe that the main reason why supposedly cutting-edge games aren't all that taxing on the CPU these days either is that the current generation of consoles is built on six-year-old hardware. And that's holding back every other game category as well.

Remember when everyone used to say 1 GHz is enough for all your needs? But soon after the PlayStation 2 and Xbox were launched, that proved to be barely sufficient. Likewise the Pentium 4 and Athlon 64 were considered great gaming chips, until the PlayStation 3 and Xbox 360 appeared and suddenly you really had to have a Core 2 or Phenom II for serious gaming...
Why spend a fortune on new development, with a new engine with high compute demands (which limits the customer base), when they could just modify what's almost perfect/proven to sell?
A new engine doesn't cost a fortune. Most of the money for today's games goes to the content. And being able to target faster hardware can actually make that cheaper. For instance instead of modelling how a building collapses, you could use a physics engine. It allows more realistic interactions too. So people's expectations are set by how far the technology has advanced. Once one game has a certain feature, others need it too to be considered any good. But currently the hardware has become a limiting factor.

Game development companies really want to set themselves apart from the rest, and that often means adding features which push the hardware to the limits. Currently that means pushing console hardware to its limits, while high-end PC hardware has performance to spare. But the next-gen technology is being developed as we speak, so you're better off with a CPU that will be capable of running such games in all their glory instead of having to disable a feature or two or settle for low framerates.
 

beginner99

Diamond Member
Jun 2, 2009
5,318
1,763
136
The reason to wait, for me, would rather be the fact that GPUs are pretty much overpriced right now. However, there is no guarantee or hint whatsoever that they will be cheaper a year from now when Haswell is released.
 

cytg111

Lifer
Mar 17, 2008
26,169
15,590
136
Thanks for asking. It used to be that to benefit from SIMD, you needed code that was explicitly using vector math. For instance to move an object in 3D space you'd add a 3-component displacement vector to a 3-component position vector. SSE uses 4 x 32-bit component vector registers, so you're not making use of the fourth component in this example, but still, it's faster than adding each component separately.

Right, I took linear algebra back in the day, so I get the general idea about SIMD, but I still don't see how one recompilation is different from the other.. Another thing hits me: isn't this what they've struggled with in Itanium since day one? I may be fumbling here, but didn't Itanium divert resources away from OoO design to rely on heavily vectorized code? Seeing how Haswell is an OoO design, isn't this like having your cake and eating it too? (There is no cake.. or at least no spoon.)


edit :
Compilers have been pretty bad at auto-vectorizing code. And that's precisely because all previous SIMD instruction sets lacked things like gather and vector-vector shift. With AVX2 this changes radically because vectorizing a loop becomes straightforward when you have vector equivalents of every scalar instruction.

You covered some of that here... Alrighty, I am going from highly sceptical to just sceptical; you provide sound reasoning. There are still major holes in my understanding-verse (like why does the Itanic still continue to sink if this concept is so very bad, but that may be a whole other kind of cake... maybe Haswell is just the best of both worlds?)
 

OBLAMA2009

Diamond Member
Apr 17, 2008
6,574
3
0
If you're going to wait for something beyond Ivy Bridge, you're going to be waiting a while. Intel doesn't really have any competition at this point, and they really don't have much incentive to come out with new stuff on any schedule.
 

Makaveli

Diamond Member
Feb 8, 2002
4,976
1,571
136
If you're going to wait for something beyond Ivy Bridge, you're going to be waiting a while. Intel doesn't really have any competition at this point, and they really don't have much incentive to come out with new stuff on any schedule.

You may see a delay here or there but it will come out.

If there is one thing more important to Intel than AMD, it's their shareholders, who demand they keep high profit margins.
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
Right, I took linear algebra back in the day, so I get the general idea about SIMD, but I still don't see how one recompilation is different from the other..
Compiler technology is a complex field, but I'll try to summarize the auto-vectorization issue: With the old approach, compilers had to look for identical scalar operations performed on adjacent data, and try to put those into vectors. This is a complex task, even for humans, and you want to avoid making one part faster while making other parts slower due to having to get data in and out of the vector format.

With AVX2, the compiler doesn't have to look for identical operations any more. All it needs is a loop with independent iterations, which are very common. Every scalar operation in the original loop can then be vectorized by putting the data from multiple iterations into the AVX2 vectors. You need gather support to efficiently move the data into the vectors, and you need a vector equivalent of every scalar instruction to do this. So it's no coincidence that AVX2 adds gather and vector-vector shift. It's a paradigm shift in vector processing for consumer CPUs.
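
A small sketch of the gather part (again my own illustration, with made-up names, assuming AVX2): an indexed load like table[idx[i]] has no consecutive addresses, so it can't be vectorized with ordinary loads, but a gather fetches eight such elements in one instruction.

```cpp
#include <immintrin.h>

// out[0..7] = table[idx[0..7]], i.e. eight indexed loads done as one gather.
void lookup8(int* out, const int* table, const int* idx) {
    __m256i vidx = _mm256_loadu_si256((const __m256i*)idx);   // eight 32-bit indices
    __m256i v    = _mm256_i32gather_epi32(table, vidx, 4);    // scale = sizeof(int)
    _mm256_storeu_si256((__m256i*)out, v);                    // store the 8 results
}
```
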
Another thing hits me: isn't this what they've struggled with in Itanium since day one?
No. Itanium relies on VLIW technology. It packs multiple scalar instructions together into bundles and executes them as one in a single cycle. The problem with this is twofold: Only very specific combinations of instructions can be packed together, so you often have unused slots, and if one instruction in a bundle stalls, they all stall.

VLIW is just a statically scheduled form of superscalar execution (executing multiple instructions per cycle). Haswell, and all of its predecessors going back to the Pentium Pro, feature superscalar out-of-order execution instead. It's a dynamic way of executing multiple instructions per cycle.

That said, SIMD is orthogonal to the choice between VLIW and out-of-order execution. In fact Itanium also features SIMD instructions, but they're only 2 x 32-bit wide! So with 256-bit AVX2, Haswell will totally obliterate Itanium.
Alrighty, I am going from highly sceptical to just sceptical; you provide sound reasoning. There are still major holes in my understanding-verse (like why does the Itanic still continue to sink if this concept is so very bad, but that may be a whole other kind of cake... maybe Haswell is just the best of both worlds?)
Haswell combines the very strong 3-way arithmetic out-of-order execution that first appeared with Core 2, with 8-way SIMD featuring parallel versions of every scalar instruction. So yeah, it's the best of both worlds, and then some.

It doesn't suffer from the weaknesses of Itanium because VLIW is about static superscalar execution, not SIMD. Haswell combines superior dynamic superscalar execution with a very wide and very complete SIMD instruction set.
 

dinker99

Member
Feb 18, 2012
82
0
0
If you're going to wait for something beyond Ivy Bridge, you're going to be waiting a while. Intel doesn't really have any competition at this point, and they really don't have much incentive to come out with new stuff on any schedule.

Plus, the direction they are moving with Haswell is the laptop market - better IGP and lower power.
 

cytg111

Lifer
Mar 17, 2008
26,169
15,590
136
It doesn't suffer from the weaknesses of Itanium because VLIW is about static superscalar execution, not SIMD. Haswell combines superior dynamic superscalar execution with a very wide and very complete SIMD instruction set.

All that sounds good and then some .. Thanks for taking the time :)
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
If you're going to wait for something beyond Ivy Bridge, you're going to be waiting a while. Intel doesn't really have any competition at this point, and they really don't have much incentive to come out with new stuff on any schedule.
It would be a bad mistake to underestimate AMD at this point. When you're behind, you have a clear goal and you benefit from the 'slipstream'. AMD knows the strengths of Intel's architecture and the weaknesses in their own. So they know exactly what to do, and they know it's feasible. Intel on the other hand has no reference point of what they could improve. AVX2 and TSX hold huge potential, but I'm not expecting any significant changes to things that affect IPC since that would be a risk they can't afford to take (higher IPC always comes with a compromise).

So I wouldn't be surprised if AMD were able to catch up with Intel fairly rapidly in the next couple of years. They just have to focus on well-known specific bottlenecks. Also note that AMD's FMA4 and XOP extensions already feature some of AVX2's functionality, but the execution units are half the width. Doubling them is relatively easy though. And they've also experimented with synchronization before. So it's not unimaginable for AMD to create a good competitor to Haswell!
 

tuffluck

Member
Mar 20, 2010
115
1
81
If you have a Microcenter near you, check out their specials on Sandy Bridge. I bought my i5-2500K and Gigabyte UD3H motherboard for $295 total, plus another $10 MIR on the board. If you can find that kind of deal, that's too cheap to wait around for a new gen of processors to come out. The deal is in-store only, however.
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
Plus, the direction they are moving with Haswell is the laptop market - better IGP and lower power.
That is incorrect. Featuring a better IGP (on some models) and lowering the TDP (on some models) doesn't mean the whole design is geared towards the laptop market.

Sandy Bridge ranges from 130 Watt all the way down to 17 Watt. That's a single micro-architecture! It's achieved simply by varying the number of cores, and the clock frequency.

Haswell will likely bring us high-end 130 Watt parts with more cores and higher frequencies than Sandy Bridge. But that means that dual-core low-frequency parts will consume less than 17 Watt. So this goal is achieved without actually having to specifically design for very low power consumption to the point where the high-end is crippled.

Creating high performance consumer chips is what made Intel big. And they're not going to abandon that market just because there are emerging markets which require ultra-low power consumption. They enter that new market gradually with every semiconductor process advancement, without having to make compromises for the markets they already dominate.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
You really think so? I don't. For example, FMA (part of Haswell) is already built into most gaming code because all GPUs can already use it (that's part of what makes GPUs so fast). So extending that to the CPU part of the game will not be that difficult. That is just one example.

Also, some of the Haswell instructions are already in the BD CPUs. So once Intel adopts them in Haswell, there will be nothing stopping developers from using it. I expect to see gains from Haswell's new instruction within a year of release for some applications. Majority of applications within 2-3 years.

If I have to wait 2-3 years to benefit from something, then it's too late for me, as I've already given it to my mom. Hopefully, Solitaire 3D will feature AVX2 by then, because that's about all my mom uses her computer for.
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
If I have to wait 2-3 years to benefit from something, then it's too late for me, as I've already given it to my mom. Hopefully, Solitaire 3D will feature AVX2 by then, because that's about all my mom uses her computer for.
Software making use of AVX2 will be ready on the day Haswell launches.
 

cytg111

Lifer
Mar 17, 2008
26,169
15,590
136
Software making use of AVX2 will be ready on the day Haswell launches.

Hey man, mind me picking your brain?

My, probably very rough, understanding is that today's GPUs are indeed some (huge) level of SIMD devices .. So the question is: the sort of work that benefits from SIMD, isn't that already being put towards the discrete gfx (if any)? I understand that loops are fundamental to any code, but still, it could sound like Intel is approaching the SoC idea from a CPU angle, while AMD is trying to actually merge the CPU and GPU altogether. (Is Haswell++ one reason Larrabee got cancelled?) ..

.... likes to speculate
 

cytg111

Lifer
Mar 17, 2008
26,169
15,590
136
....If I have to wait 2-3 years to benefit from something...

If everything from 'time minus 2 years' to those "2-3 years" is at best a sidegrade (I consider SB and IB a sidegrade compared to my sig), those years are worth waiting for. Haswell might be a computational revolution, but we will probably see Broadwell before wide adoption (I know that can be debated). Until then, personally, I am stalled CPU upgrade-wise. I'll get a shitload of SSDs and whatnot, but the CPU is just fine.
 

Makaveli

Diamond Member
Feb 8, 2002
4,976
1,571
136
If everything from 'time minus 2 years' to those "2-3 years" is at best a sidegrade (I consider SB and IB a sidegrade compared to my sig), those years are worth waiting for. Haswell might be a computational revolution, but we will probably see Broadwell before wide adoption (I know that can be debated). Until then, personally, I am stalled CPU upgrade-wise. I'll get a shitload of SSDs and whatnot, but the CPU is just fine.

I agree with your general statement, except for the sidegrade part.

While the jump from Nehalem to SB was only about 10-15% IPC and a bit higher clock speed, the improvement coming from a Core 2 Quad should be quite a bit more than a sidegrade.

Going from a Core 2 Quad to SB or IB should be a large improvement. Whether it is worth a complete rebuild to you is the question, however, and you seem to be happy with the performance of your current rig, so I would wait.

This is the same reason I went from a 920 to a 970 and will be waiting for Haswell.
 

cytg111

Lifer
Mar 17, 2008
26,169
15,590
136
I agree with your general statement, except for the sidegrade part.

While the jump from Nehalem to SB was only about 10-15% IPC and a bit higher clock speed, the improvement coming from a Core 2 Quad should be quite a bit more than a sidegrade.

Going from a Core 2 Quad to SB or IB should be a large improvement. Whether it is worth a complete rebuild to you is the question, however, and you seem to be happy with the performance of your current rig, so I would wait.

This is the same reason I went from a 920 to a 970 and will be waiting for Haswell.

Indeed, once it's super fast it becomes "what do you do with it?" .. ATM I am playing BF3 on high with no hiccups whatsoever.. so it is fine as a gaming rig. I also code, and at its worst (and you gotta hate bloated shit once you've met it) we're talking 50,000 small source files all needing to be inspected for whatnot.. An SSD is a godsend here; even a factor-of-2 CPU upgrade will probably not net me another 20% compile-time reduction. And when doing low-level work, stepping through in e.g. Olly, a P4 or even a P3 would do the job without hiccups..

And I doubt there are many people, gamers, coders, whatever, in existence who really need SB+!! .. They may need superior 4K random reads from their hard drives much, much sooner than anything from their CPU.
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
My, probably very rough, understanding is that today's GPUs are indeed some (huge) level of SIMD devices .. So the question is: the sort of work that benefits from SIMD, isn't that already being put towards the discrete gfx (if any)? I understand that loops are fundamental to any code, but still, it could sound like Intel is approaching the SoC idea from a CPU angle, while AMD is trying to actually merge the CPU and GPU altogether. (Is Haswell++ one reason Larrabee got cancelled?) ..
Indeed there are similarities with GPGPU computing. However, the GPU is far from ideal for generic throughput computing:
[LuxMark benchmark chart: GTX 680 vs. Core i7-3820]

Note how the newly crowned king of GPUs actually loses against a quad-core CPU. What makes it even more embarrassing is that the GTX 680 has a peak throughput of 3090 GFLOPS, while the i7-3820 can merely do 115 GFLOPS (when only using SSE, as in this benchmark). Also keep in mind that this CPU consumes half the power and costs half as much.

Clearly an architecture that is optimized for graphics isn't the best choice for generic computing. The GTX 680 isn't going to seduce game developers to use GPGPU technology to deliver a new gameplay experience. It's a step backward on this front. Also keep in mind that any workload other than graphics which you run on the GPU, will cost you graphics performance.

So the (underutilized) CPU is far more attractive for running generic computationally intensive workloads. It uses the available computing power much more efficiently. And Haswell will be particularly interesting because it offers four times higher GFLOPS per core, plus gather instruction support (which was borrowed from Larrabee).

There are many reasons why GPUs suck at generic computing (pardon my French). For starters, they don't have out-of-order execution, so when a thread has to access memory, it can stall for hundreds of clock cycles. That's not a big problem for graphics, since the GPU can process millions of other pixels by switching to another thread, but for game logic that expects an answer ASAP it impedes everything. Also, GPUs don't have large caches to avoid having to go to memory in the first place. Furthermore, there's a round-trip delay between sending a task to the GPU and reading back the result, and you have to go through several layers of driver software. With AVX2 on the CPU, the input and output are right where you want them.

So it's best to just let GPUs do what they do best: graphics. Making them do anything else requires big compromises. With the CPU, on the other hand, it takes surprisingly few changes to turn it into a high-throughput device without sacrificing any of its qualities. The result is Haswell.
 

amenx

Diamond Member
Dec 17, 2004
4,522
2,857
136
Anyone considering an Intel CPU upgrade should be mindful of whether it's a tick or a tock. A tick is a minor improvement: a die shrink that doesn't usually bring much of a performance gain, but will have a reduced TDP. A tock is major: it is a new arch that brings more significant improvements and performance gains. IB is a tick, Haswell is a tock.

If you have an older CPU (C2D or C2Q or earlier), an IB-based rig will be a nice improvement. If you have SB or even Nehalem, then it would make sense to stick it out until Haswell. If you have SB and go for IB, then chances are you are probably just bored and like to play around with hardware, upgrading CPUs every year.