
Intel talks ultra-efficient Haswell architecture

The demo was staged; it was an example. The solar cell was not actually powering the machine.

I thought they proved it was powering the machine when the presenter put his hand in front of the lamp, thus blocking the light to the solar cell, and the machine froze?

You should check out Mark Bohr's IDF presentation on 22nm tri-gate. It includes a timeline of recent transistor developments, as well as his take on where other semiconductor mfrs are relative to developing similar processes. I posted info on how to find it in this thread: http://forums.anandtech.com/showthread.php?t=2191594

You're talking about this:
[Image: slide 34 from Intel's IDF 2011 22nm transistor-leadership presentation]
 
I thought they proved it was powering the machine when the presenter put his hand in front of the lamp, thus blocking the light to the solar cell, and the machine froze?

The only thing they proved is the machine stopped what it was doing when the solar cell was blocked.

It would be very simple for Intel to read the voltage coming from the solar cell and have some software running in the background that would put the CPU into a low-power state when that voltage dropped - if they even went that far.
 
The only thing they proved is the machine stopped what it was doing when the solar cell was blocked.

It would be very simple for Intel to read the voltage coming from the solar cell and have some software running in the background that would put the CPU into a low-power state when that voltage dropped - if they even went that far.

Well if we really want to put our tinfoil hats on and go down the conspiracy path, how do we even know the system itself was a Haswell chip? It could have been a Cedar Mill P4 for all we really know.

Furthermore how can we even be sure IDF2011 itself ever happened? Could have been staged in Hollywood next door to the moon landing stage, or entirely simulated with computers using nothing more than special effects.

Considering everything else you are willing to take at face value as true in life without being able to touch it and see it in person, it seems inconsistent for you to single out this one particular aspect of the IDF presentation to call shenanigans on.
 
Well if we really want to put our tinfoil hats on and go down the conspiracy path, how do we even know the system itself was a Haswell chip?

It wasn't running Haswell.

http://www.anandtech.com/show/4775/haswell-design-complete-solar-powered-demo-at-idf

Considering everything else you are willing to take at face value as true in life without being able to touch it and see it in person, it seems inconsistent for you to single out this one particular aspect of the IDF presentation to call shenanigans on.

I've actually presented system demonstrations that had nothing behind them, for example an inventory management system that was nothing but mocked-up screens. I consider these kinds of things - hardware or software - to be nothing other than propaganda.
 
It wasn't running Haswell.

http://www.anandtech.com/show/4775/haswell-design-complete-solar-powered-demo-at-idf



I've actually presented system demonstrations that had nothing behind them, for example an inventory management system that was nothing but mocked-up screens. I consider these kinds of things - hardware or software - to be nothing other than propaganda.

There's a difference between something being misreported versus something being misrepresented.

Anandtech misreporting what Intel presented doesn't make the claims Intel made in their presentation any less valid, or any less suspicious.

I'm assuming the demo was actually of a 32nm-based Atom.
 
There's a difference between something being misreported versus something being misrepresented.

Anandtech misreporting what Intel presented doesn't make the claims Intel made in their presentation any less valid, or any less suspicious.

I'm assuming the demo was actually of a 32nm-based Atom.

Well, it was during the Haswell portion of the keynote, so you would expect they are talking about Haswell. Also note that every news agency and tech site is reporting it as a Haswell demo.

Here's what the press release has to say: "Intel's researchers have created a chip that could allow a computer to power up on a solar cell the size of a postage stamp. Referred to as a "Near Threshold Voltage Core," this Intel architecture research chip pushes the limits of transistor technology to tune power use to extremely low levels."

So they aren't officially saying what was demonstrated.
 
That was the very first thing I noticed: only one CPU usage window. But they could have set Task Manager to show the combined usage of all cores, which would display as a single graph. They may have done that on purpose to hide those details.
 
Can you explain it more, in layman's terms sir? I am intrigued!

The idea of SIMD instructions (like SSE and AVX) is to simultaneously execute the same operation on multiple pieces of data. Right now, modern x86 CPUs have a gazillion different SIMD instructions they can use to do pretty much anything interesting once the data is already in the SIMD vector registers. However, to get the data in there you either need to store it contiguously in memory, or do single loads and shuffle the items around in registers.

You can get sensible SIMD today by hand-coding your program to use SSE registers and preparing the memory layout of your objects to support it. However, very few people do this. Most people program in a higher-level language and just let the compiler handle the details. This is a problem because the compiler cannot sensibly turn object layouts into SIMD-friendly ones, meaning that if it wants to make use of SIMD automatically, the best it can do is issue four scalar loads followed by shuffles for every vector load. Given that a huge proportion of a normal FP workload is loads, this isn't exactly optimal. Autovectorization (the art of a compiler taking non-SIMD code and turning it into SIMD code) on x86 is thus still in its infancy: most compilers choose not to do it, and the ones that do usually have disappointing results.

Gather is an instruction that takes a vector of memory offsets and a base address, and fills a vector register with the elements found at those addresses. It should make it possible for compilers to sanely and easily vectorize normal code.
 
[Image: photo of a Haswell keynote slide]


- Haswell is designed for a 10 - 20W range of TDPs for mainstream clients, this is down from 35 - 45W with Sandy and Ivy Bridge today

- Configurable TDP (just like Ivy)

- Rectangular die (my guess is this is likely a result of GPU taking up more and more space as part of the die, similar to IB):

[Image: die shot presented as Ivy Bridge (later noted to be a simulated representation)]
 
- Rectangular die (my guess is this is likely a result of GPU taking up more and more space as part of the die, similar to IB):

[Image: die shot presented as Ivy Bridge (later noted to be a simulated representation)]
Looks like the GPU area in Ivy Bridge is pretty significant, and it also seems like there are dual GPUs in there (judging by the die pattern in the GPU area). It's possible for those dual GPUs to increase performance (up to 60%) in a similar manner to NVIDIA SLI or ATI CrossFire systems (speculation). 😱
 
Looks like the GPU area in Ivy Bridge is pretty significant, and it also seems like there are dual GPUs in there (judging by the die pattern in the GPU area). It's possible for those dual GPUs to increase performance (up to 60%) in a similar manner to NVIDIA SLI or ATI CrossFire systems (speculation). 😱

Doubtful. Remember that SB actually looks "dual GPU" if you look at the die: 6 EUs for the 2000 series and 12 for the 3000 series.
 
- Haswell is designed for a 10 - 20W range of TDPs for mainstream clients, this is down from 35 - 45W with Sandy and Ivy Bridge today

Wichita is out in 2012, right? So Haswell is gonna compete with whatever comes after it?

AMD is gonna have a tough time in ultrabooks vs Haswell, I think.
 
Looks like the GPU area in Ivy Bridge is pretty significant, and it also seems like there are dual GPUs in there (judging by the die pattern in the GPU area). It's possible for those dual GPUs to increase performance (up to 60%) in a similar manner to NVIDIA SLI or ATI CrossFire systems (speculation). 😱

It doesn't look anything like a dual GPU on a Sandy Bridge die....

Guys, don't be fooled by the image of what is supposed to be an Ivy Bridge die, because it's not. Intel did a JFAMD/Orochi die-map thing here. Not sure why the linked photo above has cropped off the disclaimer that was at the bottom of the Intel slide, but there is white text at the bottom saying the die shot is merely a simulated representation of Ivy Bridge.
 
Guys, don't be fooled by the image of what is supposed to be an Ivy Bridge die, because it's not. Intel did a JFAMD/Orochi die-map thing here. Not sure why the linked photo above has cropped off the disclaimer that was at the bottom of the Intel slide, but there is white text at the bottom saying the die shot is merely a simulated representation of Ivy Bridge.
Just noticed that from here >> IDF 2011: Intel Looks to Take a Bite Out of ARM, AMD With 3D FinFET Tech. With so many images across so many different sites, I kinda missed the "fine print". 😀

[Image: the simulated Ivy Bridge die shot from the linked article, disclaimer included]


Though it would be more interesting to actually get hold of a real picture of the Ivy Bridge die. 😉
 
Guys, don't be fooled by the image of what is supposed to be an Ivy Bridge die, because it's not. Intel did a JFAMD/Orochi die-map thing here. Not sure why the linked photo above has cropped off the disclaimer that was at the bottom of the Intel slide, but there is white text at the bottom saying the die shot is merely a simulated representation of Ivy Bridge.

I didn't even see that disclaimer. Oops.
 
Doubtful. Remember that SB actually looks "dual GPU" if you look at the die: 6 EUs for the 2000 series and 12 for the 3000 series.

It's a dual GPU in much the same way a GTX 580 is a hexadeca-GPU or a 6970 is a ?quadicosa? GPU. We still classify the SIMDs and SMs as making up a single GPU.
 
It's a dual GPU in much the same way a GTX 580 is a hexadeca-GPU or a 6970 is a ?quadicosa? GPU. We still classify the SIMDs and SMs as making up a single GPU.

Yes, I am aware of that. But the person I was responding to was referring to dual GPU as in SLI.
 
AMD is still trying to get BD out the door, and Intel is talking 22nm and tri-gate.

I think the competition is over, and we all have to hope that Intel doesn't try to rake us (and by extension our wallets) over the coals.
 