Intel "Haswell" Speculation thread


denev2004

Member
Dec 3, 2011
105
1
0
After the Donanimhaber Bulldozer fiasco, the only thing we can say for sure about Haswell is that it will be made of silicon.:p

But what I am interested in is the integration of GPGPU into mainstream apps. Could that iGPU be used for GP computing? Intel did announce Haswell would have multiple GPU cores.
AFAIK, Larrabee is the only GPGPU project that Intel has kept working on.
Well, I guess DX11 support requires some basic architecture with GPGPU characteristics. Just don't expect high performance from it in GPGPU programs.
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
I hope Intel places an SSD controller on the die. I was calling for this 5 years ago. Who listened? Not AMD, not Intel. The ARM SoC designers listened. lol. When I see this sort of thing happen over and over, it is easy to predict the trends.
 

MrTeal

Diamond Member
Dec 7, 2003
3,569
1,698
136
ummmm....

I don't think so...

If you want to talk about RAM speed... DDR3-2100 was possible on LGA1366.

If you ask me... the idiot at Intel wanted a different look for an Intel platform vs AMD, because AMD started copying names off Intel.

Are you the only one that knows what I mean? Did I not make sense?

In case you guys didn't know... Sandy-E has RAM slots sandwiching the CPU.

Now tell me, was there really truly a reason to stack the RAM modules next to the CPU socket?
Is someone really going to tell me it made a revolutionary difference in how the CPU accepts RAM?


No, I'm talking about the physical RAM layout of the board...

Do you know how tough that makes it for us to cool stuff down? Also the limitations on the MOSFETs which can be fitted on the board?
True, I know we never needed that many MOSFETs, but... it makes the MOSFET placement more staggered, and makes cooling it more difficult..

Your challenge in cooling is significantly easier than that faced by the engineers trying to bring 4 DDR3 memory channels out of the CPU to one side of the board with consistent trace impedance. If that decision had anything to do with a marketing guy at Intel wanting the platform to look different than AMD's everyone at Intel would need to be fired.
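To make the impedance point concrete, here is a quick sketch using the classic IPC-2141-style microstrip approximation; the trace geometry and FR-4 numbers below are purely illustrative, not taken from any actual SB-E board:

```python
import math

def microstrip_z0(h_mil, w_mil, t_mil, er=4.3):
    """Approximate characteristic impedance (ohms) of a surface microstrip
    using the classic IPC-2141 formula. er=4.3 is typical FR-4."""
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mil / (0.8 * w_mil + t_mil))

# Widening a trace from 5 mil to 7 mil (same 5 mil dielectric, 1.4 mil copper)
# noticeably drops the impedance -- which is why every DDR3 trace on all four
# channels needs matched geometry, not just matched length.
z_narrow = microstrip_z0(h_mil=5, w_mil=5, t_mil=1.4)
z_wide   = microstrip_z0(h_mil=5, w_mil=7, t_mil=1.4)
print(round(z_narrow, 1), round(z_wide, 1))
```

Keeping that number consistent across hundreds of traces that all have to exit one side of the socket is the hard part.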
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
I hope Intel places an SSD controller on the die. I was calling for this 5 years ago. Who listened? Not AMD, not Intel. The ARM SoC designers listened. lol. When I see this sort of thing happen over and over, it is easy to predict the trends.

How would an "ssd controller" differ from a SATA controller?
Or from, e.g., ASUS Express Gate SSD?
And what would the benefit be... because, e.g., an on-die IGP just ticks me off.
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
How would an "ssd controller" differ from a SATA controller?
Or from, e.g., ASUS Express Gate SSD?
And what would the benefit be... because, e.g., an on-die IGP just ticks me off.

Having an on-die SSD controller means you could have NAND DIMMs installed in a NAND DIMM socket, giving us inexpensive access to NAND. There would be no need for a SATA controller, or enclosure, or all the marketing and related budgeting that goes into such products. A NAND DIMM would not cost much more than the market rate for NAND, just like DDR3 DIMMs do not cost much more than the market rate for the DDR3 chips themselves. In other words, we would be able to buy 64GB NAND DIMMs for about $50. It would cost less than $5 per motherboard to add two 8-channel NAND DIMM sockets.

We are rapidly approaching the reality of being able to use PCM or FRAM or MRAM or whatever new type of RAM to replace both flash and DDR. But in order to do that, we must already have in place a controller that integrates the two types of memory as if they were one. The IMC on the CPU die is the obvious place this should occur. It is going to happen on an ARM SoC, of that I have no doubt. If Intel/AMD/Microsoft do not see this, then their "faster" CPUs are going to get totally blindsided by the advantages of memory unification. Especially Microsoft, since it has so much of its OS wrapped up in managing these two different types of memory.
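A back-of-the-envelope sketch of the bandwidth side of this idea; the per-channel throughput (ONFI-class NAND at roughly 200 MB/s) and the channel counts are illustrative assumptions, not anything Intel has announced:

```python
# Aggregate bandwidth of a hypothetical on-die NAND controller, compared
# against the SATA 3Gb/s bottleneck. All figures are rough assumptions.
def aggregate_bw_mb_s(channels, per_channel_mb_s):
    return channels * per_channel_mb_s

SATA2_CEILING = 300                      # MB/s, the 3Gb/s SATA link ceiling
one_dimm  = aggregate_bw_mb_s(8, 200)    # one 8-channel NAND DIMM
two_dimms = aggregate_bw_mb_s(16, 200)   # two sockets, "dual channel"

print(one_dimm, two_dimms)               # 1600 3200 -- both far past SATA2
```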
 

Catalina588

Junior Member
Oct 15, 2010
2
0
0
I think we'll be positively surprised by Haswell's practical TDP benefits. Sandy Bridge-E (Core i7-3960X) will run 24x7 at stock voltage at between 4.1-4.2 GHz as the Turbo exceeds its envelope, throttles back, then charges ahead again as it cools. To get more sustained performance (at stock voltage), install a better water cooler.

Haswell will have significantly better thermals than Sandy Bridge, plus the 22nm process. So I expect K-class Haswells to run well over 4.5 GHz sustained on air at stock voltage. Instantaneous Turbo will be pushing 5 GHz stock. To me, that's gonna be worth buying.
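The sprint-throttle-recover behaviour described above can be sketched as a toy simulation; every constant here (temperatures, clocks, heat rates) is made up purely for illustration:

```python
# Toy model of Turbo thermal cycling: the chip sprints past its envelope,
# backs off when hot, resumes when cool. A better cooler removes heat
# faster, so the average sustained clock ends up higher.
def simulate(steps, cooler_rate):
    temp, clock, log = 40.0, 4.2, []
    for _ in range(steps):
        if temp >= 90:        # hit the thermal limit: throttle back
            clock = 3.3
        elif temp <= 70:      # cooled down enough: Turbo again
            clock = 4.2
        heat_in = 25 if clock > 4.0 else 10   # faster clock -> more heat
        temp += heat_in - cooler_rate
        log.append(clock)
    return sum(log) / len(log)   # average sustained clock in GHz

print(round(simulate(200, cooler_rate=12), 2),
      round(simulate(200, cooler_rate=20), 2))
```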
 

Ajay

Lifer
Jan 8, 2001
15,429
7,849
136
In truth, IPC has a specific meaning in computer science, but that meaning is heavily extended in its application at the layman's level of discourse in which we all engage here.

We tend to refer to changes in clock-normalized benchmark performance as being tantamount to a change in IPC.

But no benchmark app is strictly a single instruction executed multiple times. Benchmarks represent a collection of instructions, and the performance is more akin to a weighted average of that basket of instructions.

And there are a lot of instructions to consider:

No one here really delves into the contents of the basket (is it 50 instructions in the bench? 25? 100?) nor the weighting (is FDIV called 50 times while MUL is called 10 times?).

So what do we mean when we say "15% IPC increase"? The spirit of what we are referring to is the generalized improvement in benchmark performance on a clock-normalized and core-normalized basis. (same clockspeed, same core count, typically single threaded)

True, but if I really wanted to get into that I would just sign up on the RWT forums or post in comp.arch (if that's still going; haven't posted there in a long time).

I'm comfortable using the latter definition for a typical tech forum like this. Sure it's inaccurate, as it misses issues like a larger, higher-speed L3$ boosting effective throughput by lowering latency but not really affecting IPC. But at least it easily allows us to separate performance differences due to chip arch vs clockspeed vs memory throughput, etc.
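The "weighted basket" point above can be made concrete with a tiny sketch; the instruction names, counts and per-op throughputs are invented for illustration only:

```python
# A benchmark's apparent IPC is a weighted average over whatever instruction
# mix it executes, so speeding up one slow op shifts the whole number.
def effective_ipc(mix):
    """mix: {instruction: (count, ipc_for_that_op)} -> weighted-average IPC."""
    total_ops = sum(count for count, _ in mix.values())
    total_cycles = sum(count / ipc for count, ipc in mix.values())
    return total_ops / total_cycles

old = effective_ipc({"MUL": (10, 1.0), "FDIV": (50, 0.05)})
new = effective_ipc({"MUL": (10, 1.0), "FDIV": (50, 0.10)})  # faster divider

print(round(new / old, 2))  # 1.98 -- doubling FDIV throughput nearly
                            # doubles this benchmark's apparent "IPC"
```

A different benchmark with fewer FDIVs in its basket would report a much smaller "IPC gain" from the same hardware change.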
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
Having an on-die SSD controller means you could have NAND DIMMs installed in a NAND DIMM socket, giving us inexpensive access to NAND. There would be no need for a SATA controller, or enclosure, or all the marketing and related budgeting that goes into such products. A NAND DIMM would not cost much more than the market rate for NAND, just like DDR3 DIMMs do not cost much more than the market rate for the DDR3 chips themselves. In other words, we would be able to buy 64GB NAND DIMMs for about $50. It would cost less than $5 per motherboard to add two 8-channel NAND DIMM sockets.

We are rapidly approaching the reality of being able to use PCM or FRAM or MRAM or whatever new type of RAM to replace both flash and DDR. But in order to do that, we must already have in place a controller that integrates the two types of memory as if they were one. The IMC on the CPU die is the obvious place this should occur. It is going to happen on an ARM SoC, of that I have no doubt. If Intel/AMD/Microsoft do not see this, then their "faster" CPUs are going to get totally blindsided by the advantages of memory unification. Especially Microsoft, since it has so much of its OS wrapped up in managing these two different types of memory.

Look at the speed of a single NAND DIMM... and your idea falls flat on its face.
The only reason SSDs are fast is that they bundle a load of NAND chips in parallel...

So I will kindly pass on your idea... as it would be a slowdown... and a downgrade.
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
Look at the speed of a single NAND DIMM... and your idea falls flat on its face.
The only reason SSDs are fast is that they bundle a load of NAND chips in parallel...

So I will kindly pass on your idea... as it would be a slowdown... and a downgrade.

If you actually read the very text you quoted, you'd see I was talking about a dual-channel, 8-wide SSD controller. Just like DDR DIMMs. Visualization fail. It would not be slower. It would be much, much, MUCH faster, because there is no chipset + SATA latency. All the caching would be done with system RAM, and the general performance would be in the range of 4-10x as fast as a current Vertex 3.
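Whether removing the chipset and SATA hops actually buys that much depends on how big those hops are relative to the NAND read itself; here is a toy latency budget where every number is an invented placeholder, not a measurement:

```python
# Rough latency budget: a SATA SSD read traverses driver + AHCI + chipset +
# SATA link, while a hypothetical IMC-attached NAND DIMM would skip the
# middle hops. All figures in nanoseconds, all illustrative guesses.
sata_path = {"driver/AHCI": 20_000, "chipset hop": 5_000,
             "SATA link + controller": 25_000, "NAND read": 50_000}
imc_path  = {"driver": 5_000, "NAND read": 50_000}

sata_total = sum(sata_path.values())   # 100_000 ns
imc_total  = sum(imc_path.values())    #  55_000 ns
print(round(sata_total / imc_total, 2))  # 1.82
```

Under these made-up numbers the NAND read dominates, so shaving the interface latency helps well short of 4-10x; the real ratio hinges on figures neither post supplies.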
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
If you actually read the very text you quoted, you'd see I was talking about a dual-channel, 8-wide SSD controller. Just like DDR DIMMs. Visualization fail. It would not be slower. It would be much, much, MUCH faster, because there is no chipset + SATA latency. All the caching would be done with system RAM, and the general performance would be in the range of 4-10x as fast as a current Vertex 3.

I'd rather have no IGP, no onboard controller and no onboard NIC...sorry.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Your challenge in cooling is significantly easier than that faced by the engineers trying to bring 4 DDR3 memory channels out of the CPU to one side of the board with consistent trace impedance.

Thanks for your insight.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
ummmm....

I don't think so...

If you want to talk about RAM speed... DDR3-2100 was possible on LGA1366.

If you ask me... the idiot at Intel wanted a different look for an Intel platform vs AMD, because AMD started copying names off Intel.

You are missing the point of my post with spectacular aplomb.

If you are truly convinced that the layout of the DIMMs is for any reason other than engineering as a tradeoff between cost, signal timing and integrity, compatibility, and so on, then I will disagree with you.

But I also won't belabor the point; if you insist on thinking it's all marketing priorities and "the idiot at Intel", then I am inclined to leave you to your own devices.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
True, but if I really wanted to get into that I would just sign up on the RWT forums or post in comp.arch (if that's still going; haven't posted there in a long time).

I'm comfortable using the latter definition for a typical tech forum like this.

You and me both :)
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,841
3,189
126
But I also won't belabor the point; if you insist on thinking it's all marketing priorities and "the idiot at Intel", then I am inclined to leave you to your own devices.

I need to toss some fire and trolls at Intel so I can keep the AMD guys at bay with me..

:biggrin:
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Triple channel -> quad channel: it would be miraculous if trace lengths and the number of board layers were not a major consideration for SB-E's RAM layout. A quick check of motherboards shows that the "lower"-cost ones provide only 4 memory slots; implementing 8 slots of quad-channel memory is neither easy nor cheap.

Now, whether triple-channel memory would have been fine for SB-E in almost every task it would be put to, that I don't know.

ummmm....

I don't think so...

If you want to talk about RAM speed... DDR3-2100 was possible on LGA1366.

If you ask me... the idiot at Intel wanted a different look for an Intel platform vs AMD, because AMD started copying names off Intel.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
4 channels DDR3 registered memory interface on each CPU
o 2 DDR3 slots per channel per processor (total of 16 DIMMs on the motherboard)
o RDIMM/LV-RDIMM (1.35V/1.25V), LRDIMM, and UDIMM/LV-UDIMM (1.35V/1.25V)
o SR, DR, and QR DIMMs
o DDR3 speeds of 800/1066/1333/1600
o Up to maximum 512GB memory with 32GB RDIMMs

Note the registered memory. Of more relevance will be how Intel implements quad channel for servers.
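The capacity line in that spec checks out, assuming two CPU sockets (which the "16 DIMMs on the motherboard" figure implies):

```python
# 4 channels x 2 DIMMs per channel per CPU, two sockets, 32GB RDIMMs.
channels, dimms_per_channel, cpus, dimm_gb = 4, 2, 2, 32

total_dimms = channels * dimms_per_channel * cpus
print(total_dimms, total_dimms * dimm_gb)  # 16 512
```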

Back to the topic at hand, will DDR4 show up in the Haswell timeframe?

Complete BS... an Interlagos quad-channel board with all the slots on one side...

Opencompute.org/projects/amd-motherboard/
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Triple channel -> quad channel: it would be miraculous if trace lengths and the number of board layers were not a major consideration for SB-E's RAM layout. A quick check of motherboards shows that the "lower"-cost ones provide only 4 memory slots; implementing 8 slots of quad-channel memory is neither easy nor cheap.

Now, whether triple-channel memory would have been fine for SB-E in almost every task it would be put to, that I don't know.

For SB-E, triple-channel would have been plenty. I believe Intel's recommendation was ~1 memory channel per 2-3 cores on a standard desktop/workstation machine. Looking forward, quad-channel will be more important for 8-10 core machines. With IB-E supposedly offering at least 8 cores, it does make sense. Again, it was probably cheaper to keep this in line with the Xeon models than create something 'special' for the enthusiast line. IB/SB uses dual-channel, and SB-E uses quad-channel. I wonder if triple-channel will be standard for Haswell, or the next CPU afterwards? I would venture that when hexacores are 'the norm', then >dual-channel will appear.
 

grkM3

Golden Member
Jul 29, 2011
1,407
0
0
Watch the Computex and IDF videos on YouTube... you can see where Intel is going with Haswell, and they mention it completing for ultrabooks what Ivy is starting.

People are moving away from desktops and Intel is going the power-saving route. They have more than enough performance for day-to-day tasks, and they are trying to get power draw down as much as possible with Haswell.

It was made from the ground up to be as efficient as possible.

It's hard to understand, but watch this video and then the follow-up at IDF where they talk more about Haswell.

They are shooting for ultra-portable and ultra-thin laptops with their next-gen stuff, and they will perform a little better, but Intel is clearly trying to lower power use and heat with Ivy/Haswell.

http://www.youtube.com/watch?v=OPH1Xz1y9H8

I can't find the follow-up video, but the guy said it uses over 20x less power than Sandy at the same level of performance and will last over 10 days with connected standby.
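A quick sanity check on a "10 days of connected standby" claim; the ~50 Wh battery capacity is an assumption typical of ultrabooks of the era, not a figure from the video:

```python
# What average standby draw would 10 days on a 50 Wh battery imply?
battery_wh = 50.0
days = 10

standby_watts = battery_wh / (days * 24)
print(round(standby_watts, 2))  # 0.21 -- a couple hundred milliwatts
```

That is platform-wide draw, far below even an idle 15W-class CPU, which is why connected standby is a separate low-power state rather than just a lower clock.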
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
watch compudex and idf vidoes on youtube...you can see where intel is going with haswell and they mention it completing the ultra book section what ivy is starting.

People are moving away from desktops and intel is going the power saving route.They have more than enough performance for day to day tasks and they are trying to get power draw down as much as possible with haswell.

It was made from the ground up to be as efficient as possible.

its hard to understand but watch this video and then the follow up at IDF where they talk more about haswell.

they are shooting for ultra portible and ultra thin laptops with there next gen stuff and they will perform a little better but intel is clearly trying to lower power use and heat with ivy/haswell

http://www.youtube.com/watch?v=OPH1Xz1y9H8

I cant find the follow up video but the guy said its over 20x less power than sandy at the same level of performance and will last over 10 days with connected standby

Yep. With Ultrabooks being limited to dual core, we just have to wonder what TDP goal Intel has planned for this product category... at 14nm... at 10nm?

Are they going to try to get these mainstream dual-core chips into tablets eventually? My guess would be yes... but maybe this will not happen until 10nm?
 

TuxDave

Lifer
Oct 8, 2002
10,572
3
71
Yep. With Ultrabooks being limited to dual core, we just have to wonder what TDP goal Intel has planned for this product category... at 14nm... at 10nm?

The only published information that I can find is the ultrabook spec for Haswell which has it pegged at 15W TDP for those parts. Unfortunately, I don't see any publications or statements for Broadwell and on.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,346
1,525
136
If you actually read the very text you quoted, you'd see I was talking about a dual-channel, 8-wide SSD controller. Just like DDR DIMMs. Visualization fail. It would not be slower. It would be much, much, MUCH faster, because there is no chipset + SATA latency. All the caching would be done with system RAM, and the general performance would be in the range of 4-10x as fast as a current Vertex 3.

Umm, no. At present, chipset + SATA latency is irrelevant to storage. And they would not be able to put an SSD controller with as many channels as the standalone ones into the CPU socket, simply because they'd run out of room for pads on the CPU. CPU-integrated SSD controllers would suck.
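Rough arithmetic behind the "run out of pads" point; the per-channel signal count is an estimate based on an 8-bit ONFI-style bus plus control lines, not an official figure:

```python
# Extra socket signals a multi-channel on-die NAND controller would need.
SIGNALS_PER_NAND_CHANNEL = 20   # DQ[7:0] + CLE/ALE/WE#/RE#/CE#/R-B#/DQS etc.
CHANNELS_IN_FAST_SSD = 8        # standalone controllers of the day used 8+

extra_socket_pins = SIGNALS_PER_NAND_CHANNEL * CHANNELS_IN_FAST_SSD
print(extra_socket_pins)        # 160 extra signal pins, before power/ground
```

On a socket already crowded with four DDR3 channels, PCIe lanes and power delivery, another ~160 signals is a hard sell.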
 

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
231
106
The only published information that I can find is the ultrabook spec for Haswell which has it pegged at 15W TDP for those parts. Unfortunately, I don't see any publications or statements for Broadwell and on.
15W is a lot. Hopefully there will be more options. Has anybody got a list of the whole line-up? Sorry if this has already been mentioned.
 

TuxDave

Lifer
Oct 8, 2002
10,572
3
71
15W is a lot. Hopefully there will be more options. Has anybody got a list of the whole line-up? Sorry if this has already been mentioned.

I'm just quoting wiki at this point:
15W TDP processors for the Ultrabook platform
37, 47, 57W TDP for mobile processor
35, 45, 65, 95W TDP for desktop processors

http://en.wikipedia.org/wiki/Haswell_(microarchitecture)

Yeah, and 15W may sound like a lot if you're comparing it with iPads and all, but the Atom design team is supposed to cover that range.
 