
4th Generation Intel Core, Haswell summarized


Charles Kozierok

Elite Member
May 14, 2012
6,762
1
0
I'm hopeful that eventually we will actually get to a more heterogeneous computing environment. I just think it's going to take time. And at some point, we are also going to run into conflicts between general-purpose routines wanting to use GPU resources, and video itself wanting to use them. This may even further shift the balance towards GPUs and away from CPUs.

OTOH, I would also not be surprised if the grand push for heterogeneous computing doesn't really pan out -- just as a few years ago many were convinced they'd improve performance by just throwing increasing numbers of cores at the problem.

Most panaceas don't turn out to be.
 

NTMBK

Lifer
Nov 14, 2011
10,456
5,842
136
Yeah, you're right, but NV wants to play in the CPU market, and that changes the rules. Intel had intended this to be the year that PCIe disappeared, but because of the infighting NV managed to get five more years on an Intel PCIe bus. The rules changed when NV created its own processor. Intel does not have to supply access to Intel's tech to a company like NV in the near future, since NV has its own products it can piggyback on. Intel has Phi; they couldn't care less about CUDA cores.

If Intel make servers which can't take PCIe cards, then they won't be able to sell them to a lot of people. i.e. anyone who cares about dedicated storage controllers, network controllers, coprocessors (e.g. NVidia). And Intel aren't in the habit of making products they can't sell. :) Any evidence of Intel wanting to phase out PCIe?

Intel does not have APUs; they have an iGPU, and the "i" is for Intel. APU is a term coined by AMD. You're welcome to use the term, but not with Intel products. You can't debate me on this; you need to debate Intel on this. GPU, VPU: two names, same meaning, though ATI's "VPU" is seldom used. GPUs are compute units with very small cores. Phi is a coprocessor with many baby x86 cores along with vector compute. Show me one link where Intel says SB/IB is an APU. You can't, because Intel will never use APU as a term for its tech, ever. So no, Intel does not have APUs; they have lots of compute processing cores. Explain to us all the difference between compute processing cores and accelerated processing cores; cores are cores, it's just how much punch each core has. Nice try, but you're lying. The fact is these so-called accelerator cores are the weakest of the weak; that's why it takes thousands of them to do about the same work as one Intel Haswell core, and even then there is much these small AMD accelerator cores can't do on their own in x86.

Lots of small vector cores are good at highly parallel tasks; large CPU cores are good at others. That's why graphics processing is not done on the CPU cores. iGPU refers to the Graphics Processing Unit which has been integrated into Intel's APUs. :) Intel has enabled OpenCL support on its Ivy Bridge graphics, so it clearly agrees that it is a good idea.
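To make the "compute on the iGPU" point concrete, here is a minimal sketch (my own illustration, not something posted in this thread) that uses the standard OpenCL C API to list which platforms expose a GPU device; on an Ivy Bridge machine with Intel's OpenCL runtime installed, the HD graphics should appear here alongside any discrete cards.

```c
/*
 * Minimal sketch (illustration only): enumerate the OpenCL platforms on a
 * machine and report any GPU devices they expose for compute.
 * Assumes OpenCL headers and a runtime are installed; build with: gcc probe.c -lOpenCL
 */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, NULL, &num_platforms);
    if (num_platforms == 0) {
        printf("No OpenCL platforms found\n");
        return 1;
    }
    if (num_platforms > 8)
        num_platforms = 8;

    cl_platform_id platforms[8];
    clGetPlatformIDs(num_platforms, platforms, NULL);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char platform_name[256] = "";
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof(platform_name), platform_name, NULL);

        /* Ask specifically for GPU devices on this platform. */
        cl_device_id device;
        cl_uint num_devices = 0;
        cl_int err = clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                                    1, &device, &num_devices);
        if (err == CL_SUCCESS && num_devices > 0) {
            char device_name[256] = "";
            clGetDeviceInfo(device, CL_DEVICE_NAME,
                            sizeof(device_name), device_name, NULL);
            printf("Platform \"%s\": GPU compute device \"%s\"\n",
                   platform_name, device_name);
        } else {
            printf("Platform \"%s\": no GPU device exposed\n", platform_name);
        }
    }
    return 0;
}
```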
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
If Intel make servers which can't take PCIe cards, then they won't be able to sell them to a lot of people. i.e. anyone who cares about dedicated storage controllers, network controllers, coprocessors (e.g. NVidia). And Intel aren't in the habit of making products they can't sell. Any evidence of Intel wanting to phase out PCIe?
Yes, but Google is your friend. You can start with the Atom's release and what NV cried about: a five-year agreement created through the FTC ruling. Then, in the NV/Intel lawsuit, NV and Intel came to another agreement, and that time frame was extended. PCIe isn't going away, just the internal socket. Even Intel admits that what's inside your compute device will be less visible than at present, as form factor takes center stage.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Lots of small vector cores are good at highly parallel tasks; large CPU cores are good at others. That's why graphics processing is not done on the CPU cores. iGPU refers to the Graphics Processing Unit which has been integrated into Intel's APUs. Intel has enabled OpenCL support on its Ivy Bridge graphics, so it clearly agrees that it is a good idea.

Troll post. How Intel is doing things and how AMD is doing things are different worlds. You also said that Intel's graphics go inside Intel's APUs. Intel has no APUs; we all know this, yet you're picking a fight. If I did the same, I would get a notice from the mods with point deductions for such a clear-cut, out-and-out lie intended for trolling purposes only. Intel is the leader. We don't follow the terms a dinky company like AMD offers us; they are not leaders but followers, AVX and SSE for instance. AMD's crowning success in x86 is AMD64. That was either good or bad depending on your perspective.
 

NTMBK

Lifer
Nov 14, 2011
10,456
5,842
136
Troll post. How Intel is doing things and how AMD is doing things are different worlds. You also said that Intel's graphics go inside Intel's APUs. Intel has no APUs; we all know this, yet you're picking a fight. If I did the same, I would get a notice from the mods with point deductions for such a clear-cut, out-and-out lie intended for trolling purposes only. Intel is the leader. We don't follow the terms a dinky company like AMD offers us; they are not leaders but followers, AVX and SSE for instance. AMD's crowning success in x86 is AMD64. That was either good or bad depending on your perspective.

How is pointing out that Intel has produced drivers which allow the GPU portion of their APU to accelerate certain non-graphics tasks a troll post?

Given that AMD64 completely trounced Intel's Itanium 64-bit architecture for good reasons, and made 64 bit processors affordable for the masses, I think most people would see that as a good thing. :)
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
We don't follow the terms a dinky company like AMD offers us; they are not leaders but followers, AVX and SSE for instance. AMD's crowning success in x86 is AMD64. That was either good or bad depending on your perspective.

Yeah, that's why we use IA-32e or EM64T instead of X86-64.
 

Edrick

Golden Member
Feb 18, 2010
1,939
230
106
Given that AMD64 completely trounced Intel's Itanium 64-bit architecture for good reasons, and made 64 bit processors affordable for the masses, I think most people would see that as a good thing.

Everyone trounced the Itanium :) Even Intel's own Xeons did (along with AMD, IBM, etc).
 

Edrick

Golden Member
Feb 18, 2010
1,939
230
106
APU is CPU and GPU on a single die. Hence the entire Ivy/Sandy Bridge line are APUs. :)

I do not agree with that. In my opinion, the GPU needs to be able to perform compute tasks similar to AMD's APUs and Nvidia's GPUs. Currently, Intel is far behind in this area and their iGPUs are designed mainly for graphics. Intel is making headway into this "APU" area, however, and Skylake may be the first Intel "APU" in that sense of the term.
 

NTMBK

Lifer
Nov 14, 2011
10,456
5,842
136
I do not agree with that. In my opinion, the GPU needs to be able to perform compute tasks similar to AMD's APUs and Nvidia's GPUs. Currently, Intel is far behind in this area and their iGPUs are designed mainly for graphics. Intel is making headway into this "APU" area, however, and Skylake may be the first Intel "APU" in that sense of the term.

The HD4000 is OpenCL capable, meaning it is able to perform compute tasks. :)

Everyone trounced the Itanium Even Intel's own Xeons did (along with AMD, IBM, etc).

I meant in terms of instruction sets: Intel's vision of the future for its x86 customers (IA-64, with x86 emulation) was defeated by AMD's vision (x86-64). Which was good for everyone, Intel included.
 

Edrick

Golden Member
Feb 18, 2010
1,939
230
106
The HD4000 is OpenCL capable, meaning it is able to perform compute tasks.

That is because they are trying to move in that direction, and adding OpenCL now makes sense. Try running any distributed computing task on a HD4000 and let me know how that worked out for you when it finishes in a week. :)

Again, give Intel a few more generations before we start calling them "APUs".
 

NTMBK

Lifer
Nov 14, 2011
10,456
5,842
136
That is because they are trying to move in that direction, and adding OpenCL now makes sense. Try running any distributed computing task on a HD4000 and let me know how that worked out for you when it finishes in a week. :)

Again, give Intel a few more generations before we start calling them "APUs".

Given that AMD classifies its E350 and E450 as APUs, I certainly think Ivy Bridge would qualify. ;) Besides, take a look at Anand's benchmarks; Ivy Bridge performance in GPGPU isn't as catastrophic as you might expect: http://www.anandtech.com/show/5771/the-intel-ivy-bridge-core-i7-3770k-review/19

Partly this is down to the fact that Intel have managed to couple their APUs more tightly than AMD so far by sharing L3 cache between CPU and GPU portions.
 

Edrick

Golden Member
Feb 18, 2010
1,939
230
106
Besides, take a look at Anand's benchmarks; Ivy Bridge performance in GPGPU isn't as catastrophic as you might expect:

I would not say catastrophic, but not worth the money at the moment. I currently have an SB-driven server which just does DC tasks all day long. I was going to spend the $180 to upgrade it to an IB setup, mainly to use the iGPU for number crunching as well. After doing research on many DC sites, it became clear that it would have been a waste of money.

But you are right, technically it can be considered an "APU" in the general sense of the term. But until the iGPU can perform parallel tasks as fast as or faster than a single CPU core, in my opinion it is just a graphics processor.
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
Don't forget vector workloads, and for that matter any scalar floating-point workload too. All of these benefit from having execution ports 0 and 1 available for vector or floating-point operations, while the new port 6 takes over the ALU, shift and branch operations from port 0 (with port 5 offloading port 1):

[Image: Haswell execution port diagram]
For those wondering, the shift ability of port 6 is mentioned in ARCS001.
It's also safe to assume they reduced clock frequency a little to meet the timing requirements and lower the power consumption.
Not if they make a minor compromise like increasing latency on 64-bit operations...
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
I'm hopeful that eventually we will actually get to a more heterogeneous computing environment. I just think it's going to take time. And at some point, we are also going to run into conflicts between general-purpose routines wanting to use GPU resources, and video itself wanting to use them. This may even further shift the balance towards GPUs and away from CPUs.
Why would you want heterogeneous computing?

DirectX 10 was born out of the unification of vertex and pixel processing. It allowed developers to balance workloads for better performance and to do things in vertex and pixel shaders that weren't possible before. We've since moved on to DirectX 11, adding a lot of new graphics pipeline stages which also wouldn't have been feasible without unified processing cores.

That's what homogeneous computing has done for the GPU. Likewise, you don't want to try to "augment" a CPU with a GPU. What you want instead is to unify their architecture. AVX2 is a huge step in that direction, and I'm sure it's not the last one. AVX can be extended up to 1024-bit!
OTOH, I would also not be surprised if the grand push for heterogeneous computing doesn't really pan out -- just as a few years ago many were convinced they'd improve performance by just throwing increasing numbers of cores at the problem.

Most panaceas don't turn out to be.
Exactly. The CPU's vector performance has doubled every two years. Why try to force developers to limit their possibilities with heterogeneous computing when homogeneous computing offers everything they could wish for?
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
Lots of small vector cores are good at highly parallel tasks; large CPU cores are good at others. That's why graphics processing is not done on the CPU cores.
You're oversimplifying it:

Calling those ALUs "CUDA cores" is crazy marketing talk, like saying a V8 engine has "eight motors" - TechReport

Using the same logic, a mainstream Haswell CPU will have 64 compute cores, at about three times the clock frequency of a GPU. The reason graphics processing is typically done on a GPU is mainly because of the texture samplers it has, not because of a lack of computing power on the CPU. And that's changing as well. Texture sampling is becoming programmable, and it mainly consists of gather operations. AVX2 features gather operations...
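Since AVX2's gather is the crux of that argument, here is a small sketch (my illustration, not from the original post) of what a gather looks like in practice: eight floats fetched from arbitrary indices by a single intrinsic, which is essentially the access pattern a texture sampler performs. It assumes a Haswell-class CPU and a compiler flag such as -mavx2.

```c
/*
 * Minimal sketch (illustration only): an AVX2 gather loads eight floats
 * from arbitrary indices in one intrinsic.
 * Assumes a Haswell-class CPU; build with e.g. gcc -mavx2 gather.c
 */
#include <stdio.h>
#include <immintrin.h>

int main(void)
{
    float table[16];
    for (int i = 0; i < 16; ++i)
        table[i] = (float)i * 0.5f;   /* 0.0, 0.5, 1.0, ... */

    /* Eight arbitrary, non-contiguous indices into the table. */
    __m256i idx = _mm256_setr_epi32(0, 3, 5, 7, 8, 11, 13, 15);

    /* Gather table[idx[i]] for all eight lanes; scale 4 = sizeof(float). */
    __m256 gathered = _mm256_i32gather_ps(table, idx, 4);

    float out[8];
    _mm256_storeu_ps(out, gathered);
    for (int i = 0; i < 8; ++i)
        printf("%.1f ", out[i]);
    printf("\n");
    return 0;
}
```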
Intel has enabled OpenCL support on its Ivy Bridge graphics, so it clearly agrees that it is a good idea.
They already had an OpenCL implementation for the CPU. Retargeting it for the GPU must have been relatively straightforward. That doesn't mean they think running OpenCL on the iGPU is a good idea.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
you got it reversed (blue is for "tocks" apparently)

LOL, whoops :p

This is why I was kicked off MacGyver; I was always cutting the red wire after being told to cut the blue wire. Hmmm, maybe MacGruber is still hiring?
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
I would not say catastrophic, but not worth the money at the moment. I currently have an SB-driven server which just does DC tasks all day long. I was going to spend the $180 to upgrade it to an IB setup, mainly to use the iGPU for number crunching as well. After doing research on many DC sites, it became clear that it would have been a waste of money.

But you are right, technically it can be considered an "APU" in the general sense of the term. But until the iGPU can perform parallel tasks as fast as or faster than a single CPU core, in my opinion it is just a graphics processor.


Really, when Intel can use its iGPU as fast as its Haswell cores to complete a task, then we can change the name. LOL.
That's a bit silly; then AMD's iGPU can't be called an APU until it can beat Haswell at parallel compute tasks. Or do we start with ARM and work our way up? Yeah, I thought the "i" meant Intel, but if it stands for "integrated", then I guess AMD has an iGPU too. I have a feeling Intel will stay with "CPU" once its iGPU is fully integrated. So Intel had an APU back in the beginning, then. What graphics engine was used way back when? So Pong was the first APU that AMD is clamoring about. Not too reassuring.
 

mikk

Diamond Member
May 15, 2012
4,308
2,395
136
But you are right, technically it can be considered an "APU" in the general sense of the term. But until the iGPU can perform parallel tasks as fast as or faster than a single CPU core, in my opinion it is just a graphics processor.


This is already the case, actually. Depending on the workload, CLBenchmark runs faster on the HD4000 than on 4 CPU cores. In Caps Viewer most OpenCL tests run faster on the iGPU. In Luxmark the CPU is faster when all 4 cores are used, but against just 1 CPU core the iGPU performs better. And it's not only OpenCL: the GPU browser test Fish IE Tank runs much, much faster with the GPU enabled compared to CPU only.
 

NTMBK

Lifer
Nov 14, 2011
10,456
5,842
136
Yeah, I thought the "i" meant Intel, but if it stands for "integrated", then I guess AMD has an iGPU too.

AMD's APU contains both CPU and (i)GPU portions, yes, as does Intel's.

I have a feeling Intel will stay with "CPU" once its iGPU is fully integrated. So Intel had an APU back in the beginning, then. What graphics engine was used way back when? So Pong was the first APU that AMD is clamoring about. Not too reassuring.

No, that's still a homogeneous solution (single processor on one die), so not an APU. An APU is CPU + GPU ( + other processors, like FPGA, or decode hardware like Intel's Quicksync) all on the one chip.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
AMD's APU contains both CPU and (i)GPU portions, yes, as does Intel's.



No, that's still a homogeneous solution (single processor on one die), so not an APU. An APU is CPU + GPU ( + other processors, like FPGA, or decode hardware like Intel's Quicksync) all on the one chip.

By AMD's definition. AMD is nobody. Who created the marriage law? God gave Adam two mates: the first was like Adam, the second was from Adam. The first went to heaven to tell God what had occurred, and God agreed with her, so she was allowed to stay with God, while the second was belittled because she lacked what the first mate, the one who was like Adam, had. There was no marriage at all. Some fool created a binding contract as if he knew the future between a man and his mate. You are like Eve: subject to an AMD definition, and then you apply that to Intel. Where do you or AMD get the right to name Intel products? You don't. Intel says it has no APU. A compute unit is a compute unit; how powerful that unit is doesn't redefine it.

"Mate" somehow got changed to "marriage". For the male, many wives were allowed; for the female, only one mate. Adam's first mate got it right; the second mate, spawned from Adam, was a complete failure. Who gave man the right to create a binding marriage law where one mate has complete control over the other? It didn't come from God but from a deluded man who thought he talked with God.

AMD doesn't name Intel products, but they try hard to make it seem like they can: Thunderbolt from Intel, Lightning Bolt from AMD, real original work by AMD. And AMD had no hand in USB development; Intel did the work. Same with USB 3. Remember the debate here where both NV and AMD fans were saying that Intel was holding up the release of USB 3 so Intel could gain an advantage? We all know how that turned out, don't we. They were a bunch of conspiracy nuts.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Everyone trounced the Itanium :) Even Intel's own Xeons did (along with AMD, IBM, etc).

So true, but there's still my first account here at AT, under the name zinn2b, after Harold Zinn. Read what I said about the Itanic back then and where Intel was heading; 12 years later I was right. The same applies to NV SLI; I was right about that also.
 

NTMBK

Lifer
Nov 14, 2011
10,456
5,842
136
By AMD's definition. AMD is nobody. Who created the marriage law? God gave Adam two mates: the first was like Adam, the second was from Adam. The first went to heaven to tell God what had occurred, and God agreed with her, so she was allowed to stay with God, while the second was belittled because she lacked what the first mate, the one who was like Adam, had. There was no marriage at all. Some fool created a binding contract as if he knew the future between a man and his mate. You are like Eve: subject to an AMD definition, and then you apply that to Intel. Where do you or AMD get the right to name Intel products? You don't. Intel says it has no APU. A compute unit is a compute unit; how powerful that unit is doesn't redefine it.

"Mate" somehow got changed to "marriage". For the male, many wives were allowed; for the female, only one mate. Adam's first mate got it right; the second mate, spawned from Adam, was a complete failure. Who gave man the right to create a binding marriage law where one mate has complete control over the other? It didn't come from God but from a deluded man who thought he talked with God.

AMD doesn't name Intel products, but they try hard to make it seem like they can: Thunderbolt from Intel, Lightning Bolt from AMD, real original work by AMD. And AMD had no hand in USB development; Intel did the work. Same with USB 3. Remember the debate here where both NV and AMD fans were saying that Intel was holding up the release of USB 3 so Intel could gain an advantage? We all know how that turned out, don't we. They were a bunch of conspiracy nuts.

...so, you're voting yes to gay marriage?