Haswell to Broadwell IPC


dahorns

Senior member
Sep 13, 2013
550
83
91
Actually, the i7-5775C is a replacement for whatever you previously put into Socket 1150 on a Z97/H97 motherboard, though for people building systems now the i7-4790K may actually be the better choice.

Could you also link some of those reviews you mentioned? I genuinely hope to see some proper reviews and comparisons.
Including the ol' Haswell in benchmark comparisons would make the new CPU look bad, something Intel probably wouldn't sponsor. A website that includes a direct comparison to the 4770K and/or the 3770K would be my personal "most trusted in tech since 2015".

Uh, this website for one? Is this a trick question?
 

dahorns

Senior member
Sep 13, 2013
550
83
91
Earlier you said the 5775C seems faster than the 4770K; I just hoped you could back up that claim.

Read the reviews? I mean, the 5775C at a slower clock is ahead in some benches and behind in others, all at a significantly lower TDP. That lower TDP is all the more impressive given how much of it must be taken up by the graphics unit.

Your point seemed to be that because the 5775C was slightly slower in clockspeed, (Intel) desktop computing was slowing down. That is an odd conclusion to draw based on a niche product with a significantly lower TDP than your comparison models, especially considering that Skylake processors with higher TDPs and clock speeds are just around the corner.

I don't think the 5775C is meant to be a "replacement" chip. It is meant for people with new builds who want a different balance of iGPU and CPU.
 

know of fence

Senior member
May 28, 2009
555
2
71
"Niche product" and "giving people what they asked for" is obviously just marketing noise. Intel came up with this tick-tock idea; that's something they promised.

Notebooks and all this mobile nonsense absolutely need integrated graphics just to drive high-resolution displays. So after all the low-voltage chips have been binned, what do you do with the rest? You create a new, higher-power target TDP with rather low clocks to maximize yields. Voilà, Broadwell-C is born.

So a TDP of 65 W isn't meaningful at all. The i7-4770R (3.2/3.9 GHz) also had a 65 W TDP, and so does the i7-4790S (which can even turbo up to 4.0 GHz).

If you compare 4th and 5th Generation Intel Core i7 parts on ark.intel.com, you see that across the board even the high-end 47 W mobile chips all went down in clocks. Maybe trading graphics for peak performance is all well and good within a mobile device's TDP, but certainly not on the desktop. According to Tom's Hardware, the iGPU's consumption amounts to less than 20 W (torture test minus CPU-only).

For a while now Intel has been pushing consumers towards low-performance, form-over-function devices destined for quick obsolescence. Considering that iGPUs are limited mainly by memory bandwidth, and that we get massive boosts from DDR4 and texture compression (as seen in Nvidia's Maxwell), Skylake CPUs will be able to pack twice as many graphics EUs and maybe dedicate 30 W or more of their total TDP to graphics. Without any mention of G-Sync / A-Sync, and with dodgy 4K interfaces at just 60 Hz, what a colossal waste it could turn out to be!
 

Hulk

Diamond Member
Oct 9, 1999
5,168
3,786
136
"Niche product" and "giving people what they asked for" is obviously just marketing noise. Intel came up with this tick-tock idea; that's something they promised.

Notebooks and all this mobile nonsense absolutely need integrated graphics just to drive high-resolution displays. So after all the low-voltage chips have been binned, what do you do with the rest? You create a new, higher-power target TDP with rather low clocks to maximize yields. Voilà, Broadwell-C is born.

So a TDP of 65 W isn't meaningful at all. The i7-4770R (3.2/3.9 GHz) also had a 65 W TDP, and so does the i7-4790S (which can even turbo up to 4.0 GHz).

If you compare 4th and 5th Generation Intel Core i7 parts on ark.intel.com, you see that across the board even the high-end 47 W mobile chips all went down in clocks. Maybe trading graphics for peak performance is all well and good within a mobile device's TDP, but certainly not on the desktop. According to Tom's Hardware, the iGPU's consumption amounts to less than 20 W (torture test minus CPU-only).

For a while now Intel has been pushing consumers towards low-performance, form-over-function devices destined for quick obsolescence. Considering that iGPUs are limited mainly by memory bandwidth, and that we get massive boosts from DDR4 and texture compression (as seen in Nvidia's Maxwell), Skylake CPUs will be able to pack twice as many graphics EUs and maybe dedicate 30 W or more of their total TDP to graphics. Without any mention of G-Sync / A-Sync, and with dodgy 4K interfaces at just 60 Hz, what a colossal waste it could turn out to be!

Good post.
 

dahorns

Senior member
Sep 13, 2013
550
83
91
"Niche product" and "giving people what they asked for" is obviously just marketing noise. Intel came up with this tick-tock idea; that's something they promised.

Ok? And we already know that the 14nm cadence is screwed up. You aren't getting a proper shrink of desktop Haswell, and the 5775C isn't supposed to be it. Skylake will be out at the end of the summer with plenty-high clocks. I don't understand what you're complaining about.

Notebooks and all this mobile nonsense absolutely need integrated graphics just to drive high-resolution displays. So after all the low-voltage chips have been binned, what do you do with the rest? You create a new, higher-power target TDP with rather low clocks to maximize yields. Voilà, Broadwell-C is born.

As these are eDRAM parts, they can't be mere bins of the mass-produced notebook processors.

So a TDP of 65 W isn't meaningful at all. The i7-4770R (3.2/3.9 GHz) also had a 65 W TDP, and so does the i7-4790S (which can even turbo up to 4.0 GHz).

What does that sentence mean? TDP has the same meaning it has had in the past.

If you compare 4th and 5th Generation Intel Core i7 parts on ark.intel.com, you see that across the board even the high-end 47 W mobile chips all went down in clocks. Maybe trading graphics for peak performance is all well and good within a mobile device's TDP, but certainly not on the desktop. According to Tom's Hardware, the iGPU's consumption amounts to less than 20 W (torture test minus CPU-only).

To be more precise, you'll see that base clocks increased substantially and turbo clocks decreased marginally. Intel substantially increased sustained performance, but wasn't able to increase peak performance. That seems to be a limit of the combination of Broadwell and 14nm. It appears that Skylake may rectify the problem.

For a while now Intel has been pushing consumers towards low-performance, form-over-function devices destined for quick obsolescence. Considering that iGPUs are limited mainly by memory bandwidth, and that we get massive boosts from DDR4 and texture compression (as seen in Nvidia's Maxwell), Skylake CPUs will be able to pack twice as many graphics EUs and maybe dedicate 30 W or more of their total TDP to graphics. Without any mention of G-Sync / A-Sync, and with dodgy 4K interfaces at just 60 Hz, what a colossal waste it could turn out to be!

I think you have the supply/demand relationship backwards. Intel produces what consumers will buy. They wouldn't spend time and resources on additional graphics performance if consumers didn't want it. Maybe YOU (and a small percentage of the consumer market) don't. Intel is obviously pushing your niche of the market to the enthusiast line--which quite frankly seems appropriate as you are an enthusiast.
 
Aug 11, 2008
10,451
642
126
I don't understand the emphasis on graphics either. I think for the average consumer, pretty much any integrated Intel IGP is good enough, and they don't care about GT1, GT2, GT3e, GT4e, whatever. It almost seems like they are trying to outdo AMD at a strategy which has shown that strong (relative to other IGPs) IGP performance is not a compelling purchase driver. The only reason I can think of is to drive higher-resolution displays, but I don't really know how much performance is needed to drive a 1440p or 4K display for everyday use. And there is always GPU compute, I guess.
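For reference, a rough back-of-envelope sketch in Python of the bandwidth needed just to scan out a display (my own assumptions: 4 bytes per pixel, framebuffer read once per refresh, no rendering included):

Code:
def scanout_gbps(width, height, refresh_hz, bytes_per_pixel=4):
    # Bandwidth in GB/s to read the framebuffer once per refresh.
    return width * height * bytes_per_pixel * refresh_hz / 1e9

for name, w, h in [("1080p", 1920, 1080), ("1440p", 2560, 1440), ("4K", 3840, 2160)]:
    print(f"{name} @ 60 Hz: {scanout_gbps(w, h, 60):.2f} GB/s")

4K60 works out to roughly 2 GB/s, tiny next to the ~25.6 GB/s of dual-channel DDR3-1600, which suggests merely driving a high-resolution desktop is easy; it's rendering games at that resolution that eats bandwidth.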
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
I don't understand the emphasis on graphics either. I think for the average consumer, pretty much any integrated Intel IGP is good enough, and they don't care about GT1, GT2, GT3e, GT4e, whatever. It almost seems like they are trying to outdo AMD at a strategy which has shown that strong (relative to other IGPs) IGP performance is not a compelling purchase driver. The only reason I can think of is to drive higher-resolution displays, but I don't really know how much performance is needed to drive a 1440p or 4K display for everyday use. And there is always GPU compute, I guess.

People do care about GPU performance; it is just conditional on power usage and CPU performance. Basically, thank mobile. All of a sudden we have phones with GPU performance (at the same or better resolutions) approaching that of desktop computers, and people (non-enthusiasts) are wondering why they are bothering with these big things in the first place.
 

R0H1T

Platinum Member
Jan 12, 2013
2,583
164
106
Ok? And we already know that the 14nm cadence is screwed up. You aren't getting a proper shrink of desktop Haswell, and the 5775C isn't supposed to be it. Skylake will be out at the end of the summer with plenty-high clocks. I don't understand what you're complaining about.


What does that sentence mean? TDP has the same meaning it has had in the past.
The TDP is also going up even though the clocks remain the same:
www.cpu-world.com/Compare/588/Intel_Core_i7_i7-4790K_vs_Intel_Core_i7_i7-6700K.html

I bet Skylake is going to run hotter than Devil's Canyon. Sure, the performance is probably going to be 10~20% higher depending on the workload, but even at 14nm Intel isn't getting the kind of performance (thermals?) they'd be expecting, even with Skylake. Maybe a Skylake refresh will tell us whether the 14nm node is responsible for this or they're running too fast into the hard laws of physics :p
 

Dufus

Senior member
Sep 20, 2010
675
119
101
^^ Usually if running hotter, TDP is reduced. There are other things on the package to power besides the cores, so maybe there's an increase for that.

Notebooks and all this mobile nonsense absolutely need integrated graphics just to drive high-resolution displays. So after all the low-voltage chips have been binned, what do you do with the rest? You create a new, higher-power target TDP with rather low clocks to maximize yields. Voilà, Broadwell-C is born.

I cannot help but think it is the other way round. Those chips that have poor thermals get downclocked and given a reduced TDP to keep within thermal specification.

From what I've seen, ULV is a little misleading and really means ULP. Perhaps the low voltage refers to the lower core voltage used as a consequence of running lower clocks, not that the chip needs less voltage at the same frequency.
 
Aug 11, 2008
10,451
642
126
People do care about GPU performance; it is just conditional on power usage and CPU performance. Basically, thank mobile. All of a sudden we have phones with GPU performance (at the same or better resolutions) approaching that of desktop computers, and people (non-enthusiasts) are wondering why they are bothering with these big things in the first place.

Actually, I wonder why anybody would try to play more than a casual game on a phone. I have a 7-inch Atom tablet, and downloaded a few old games that run fine. However, I don't play them because the screen is so small it detracts from any enjoyment of the game. I also don't see the point of these super-high resolutions on a phone, except for bragging rights.
 

Hulk

Diamond Member
Oct 9, 1999
5,168
3,786
136
Ok? And we already know that the 14nm cadence is screwed up. You aren't getting a proper shrink of desktop Haswell, and the 5775C isn't supposed to be it. Skylake will be out at the end of the summer with plenty-high clocks. I don't understand what you're complaining about.



As these are eDRAM parts, they can't be mere bins of the mass-produced notebook processors.



What does that sentence mean? TDP has the same meaning it has had in the past.



To be more precise, you'll see that base clocks increased substantially and turbo clocks decreased marginally. Intel substantially increased sustained performance, but wasn't able to increase peak performance. That seems to be a limit of the combination of Broadwell and 14nm. It appears that Skylake may rectify the problem.



I think you have the supply/demand relationship backwards. Intel produces what consumers will buy. They wouldn't spend time and resources on additional graphics performance if consumers didn't want it. Maybe YOU (and a small percentage of the consumer market) don't. Intel is obviously pushing your niche of the market to the enthusiast line--which quite frankly seems appropriate as you are an enthusiast.


I may be wrong in this interpretation, but I read the post as trying to get the following points across:

1. Intel had trouble with the 14nm process; yields were low.
2. The best parts went into mobile products.
3. What was left were parts that wouldn't clock high and required higher voltage: not so good for mobile, but "sellable" as 5th-generation parts for the desktop.
4. Evidence for the last point comes from the Haswell-to-Broadwell comparison, where low-voltage Haswell has the same TDP as Broadwell and actually turbos higher, implying that the Broadwell desktop parts being released right now do not look very good TDP-wise against Haswell.

I thought it was a good/interesting take.
 

crashtech

Lifer
Jan 4, 2013
10,695
2,294
146
I'd think they'd harvest chips away from mobile if they required too much voltage; it wouldn't have anything to do with clocks. Mobile is generally more limited on clockspeed than desktop.
 

dahorns

Senior member
Sep 13, 2013
550
83
91
I may be wrong in this interpretation, but I read the post as trying to get the following points across:

1. Intel had trouble with the 14nm process; yields were low.
2. The best parts went into mobile products.
3. What was left were parts that wouldn't clock high and required higher voltage: not so good for mobile, but "sellable" as 5th-generation parts for the desktop.
4. Evidence for the last point comes from the Haswell-to-Broadwell comparison, where low-voltage Haswell has the same TDP as Broadwell and actually turbos higher, implying that the Broadwell desktop parts being released right now do not look very good TDP-wise against Haswell.

I thought it was a good/interesting take.

That might make sense if the Broadwell desktop parts weren't Iris Pro with eDRAM. They are. They can't be harvested from just any mobile part (at least, I don't think it works that way).
 

WhoBeDaPlaya

Diamond Member
Sep 15, 2000
7,415
404
126
On average, I've always used the following:

Yorkfield -> Nehalem : +10% IPC
Nehalem -> Sandy : +10% IPC
Sandy -> Ivy : +3% IPC
Ivy -> Haswell : +9% IPC
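Compounding those figures gives a feel for the cumulative gain (a quick sketch using only the percentages above):

Code:
# Compound the per-generation IPC gains listed above (same clocks assumed).
gains = [("Yorkfield->Nehalem", 0.10),
         ("Nehalem->Sandy",     0.10),
         ("Sandy->Ivy",         0.03),
         ("Ivy->Haswell",       0.09)]

cumulative = 1.0
for step, g in gains:
    cumulative *= 1.0 + g
    print(f"{step:20s} +{g:.0%}  cumulative x{cumulative:.3f}")
# Works out to roughly x1.36, i.e. ~36% more IPC than Yorkfield.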
 

know of fence

Senior member
May 28, 2009
555
2
71
^^ Usually if running hotter, TDP is reduced. There are other things on the package to power besides the cores, so maybe there's an increase for that.

I cannot help but think it is the other way round. Those chips that have poor thermals get downclocked and given a reduced TDP to keep within thermal specification.

From what I've seen, ULV is a little misleading and really means ULP. Perhaps the low voltage refers to the lower core voltage used as a consequence of running lower clocks, not that the chip needs less voltage at the same frequency.

There are numerous simple reasons why this can't be the case.
1. Intel's "mobile first" strategy (meaning release order and priority).
2. The higher price of the various mobile parts ($378-632), and they don't even have a shiny nickel-plated copper lid.
3. Physics. Lowering operating voltage is what allows Moore's law to go forward; ULV chips are more advanced in this sense. Mobile chips absolutely need less voltage to run at the same frequency; in fact they have lower voltages throughout the full range of frequencies and operating temperatures. Their battery life depends on it (see the sketch after this list).
4. Comparing the i5 to the i7: both have the same TDP but different clocks, which is an indication of just how huge the differences and variation in silicon quality (= voltage) can be.
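To illustrate the physics in point 3, a minimal sketch of the classic CMOS dynamic-power relation P ≈ C·V²·f (the capacitance constant is invented purely to make the numbers readable):

Code:
def dynamic_power(v_core, freq_ghz, c_eff=30.0):
    # c_eff is an assumed effective-capacitance constant, not a real value.
    return c_eff * v_core**2 * freq_ghz

print(dynamic_power(1.00, 2.7))  # ~81 units at 1.00 V
print(dynamic_power(0.80, 2.7))  # ~52 units: a 20% voltage drop cuts dynamic power ~36%

The quadratic term is why binning for low voltage matters so much more for battery life than binning for clocks.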

It could also be a good sign that the desktop parts were released this late and with a price hike, meaning the fabs are running fine without inadvertently producing huge piles of voltage-deficient leftovers.
 

Dufus

Senior member
Sep 20, 2010
675
119
101
Physics. Lowering operating voltage is what allows Moore's law to go forward; ULV chips are more advanced in this sense. Mobile chips absolutely need less voltage to run at the same frequency; in fact they have lower voltages throughout the full range of frequencies and operating temperatures. Their battery life depends on it.

You are missing switching current.

Example of BDW i7-5500U vs HSW i7-4700MQ stock voltages

[attached image: stock VID vs. frequency plot, i7-5500U vs i7-4700MQ]


Have you actually checked voltage and frequency between the DT chips and mobile chips, or gone even further and tested best undervolt?

Here's an example running Cinebench 11.5 with an early HSW desktop i3-4130, using a -220mV offset and set to 2.7GHz to run at the same bench frequency as an i7-4500U.

[attached image: Cinebench 11.5 run showing vcore and power readings]


The desktop chip runs at a maximum vcore of 0.776V while consuming less than 7.5W of package power and less than 4W of core power. BDW did improve things by allowing core power a larger share of the package power. Laptops do, however, strive for power savings while idle.

Can you or anybody else show an i7-4500U using less than 0.776V while running Cinebench 11.5 at 2.7GHz? I'd be very surprised if you/they can. I do not have an i7-4500U myself, but I did do some testing with a BDW i7-5500U, and while that scored 3.32 in Cinebench 11.5, it ran on the edge of 15W with a -50mV offset. The DT i3-4130 running at 2.9GHz (the same bench clocks as the i7-5500U) with a -220mV offset scored 3.18 and used a package power of 8.6W and a core power of 4.8W. The lower score of 3.18 is a result of lower instruction throughput.
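Doing the performance-per-watt arithmetic on the numbers above (just the posted figures, nothing newly measured):

Code:
# Cinebench 11.5 score per package watt, figures from this post.
results = {"i7-5500U (-50mV, ~15W)":   (3.32, 15.0),
           "i3-4130 @2.9GHz (-220mV)": (3.18, 8.6)}
for chip, (score, pkg_w) in results.items():
    print(f"{chip}: {score / pkg_w:.3f} points/W")

The undervolted desktop i3 comes out well ahead per package watt in this comparison.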

There does seem to be a problem with voltage scaling at higher frequencies. Hopefully a solution will be found with SKL/CNL.
 

coercitiv

Diamond Member
Jan 24, 2014
7,395
17,539
136
Can you or anybody else show an i7-4500U using less than 0.776V while running Cinebench 11.5 at 2.7GHz? I'd be very surprised if you/they can.
My i7-4510U uses 0.919V @ 2.7GHz stock, and I don't think I can undervolt more than 50mV via offset. A fixed voltage might take it lower, but I doubt I can even approach 0.8V, let alone go under.

Also, my 4700HQ uses less voltage than the 4510U and undervolts with a bigger offset.

Later edit: I just tried a static 0.82V @ 2.7GHz and the system hung after Cinebench started :)
 

know of fence

Senior member
May 28, 2009
555
2
71
You are missing switching current. Example of BDW i7-5500U vs HSW i7-4700MQ stock voltages

Have you actually checked voltage and frequency between the DT chips and mobile chips, or gone even further and tested best undervolt?

Here's an example running Cinebench 11.5 with an early HSW desktop i3-4130, using a -220mV offset and set to 2.7GHz to run at the same bench frequency as an i7-4500U.

The desktop chip runs at a maximum vcore of 0.776V while consuming less than 7.5W of package power and less than 4W of core power. BDW did improve things by allowing core power a larger share of the package power. Laptops do, however, strive for power savings while idle.

Can you or anybody else show an i7-4500U using less than 0.776V while running Cinebench 11.5 at 2.7GHz? I'd be very surprised if you/they can. I do not have an i7-4500U myself, but I did do some testing with a BDW i7-5500U, and while that scored 3.32 in Cinebench 11.5, it ran on the edge of 15W with a -50mV offset. The DT i3-4130 running at 2.9GHz (the same bench clocks as the i7-5500U) with a -220mV offset scored 3.18 and used a package power of 8.6W and a core power of 4.8W. The lower score of 3.18 is a result of lower instruction throughput.

There does seem to be a problem with voltage scaling at higher frequencies. Hopefully a solution will be found with SKL/CNL.

It took me a while to understand your numbers, but you seem to have done the measurements, and you even posted a V/clock curve, something I intended to talk about; you actually went there and produced one, wow. I wanted to say that a CPU has a multitude of those curves, not just between different cores but also at different temperatures, something one could test by lowering fan speed.

[attached image: stock VID vs. frequency plot, i7-5500U vs i7-4700MQ]


This graph is baffling. Did you plot it yourself? Did you use the same diagnostic (voltage-reporting) software as in the Cinebench screenshot?
The simple explanation probably is that the 15W i7-5500U chip is semi-passively cooled and runs much hotter in a mobile device than a properly cooled 47 W or 54 W CPU. You are testing it well within Turbo range, meaning very hot and basically outside of TDP specs.
Even though you run single-core tests, comparing a 2/4 chip to a 4/8 chip is obviously problematic, because they are essentially cut from different wafers and undergo completely different selection processes.
The i3 is much better matched, but I'm completely out of my depth in regard to VID and offsets: when the offset is added and how it is reported in software. Am I supposed to compare your 0.776 V to the 0.9 to 1 volts (@ 2.7GHz) that I extrapolate from the first graph? You aren't exactly producing a side-by-side comparison here. Still, it is interesting.

Also, the last Core i3 (Ivy Bridge) I fiddled with had a 10-15°C difference between cores, so you may also just have an exceptionally good chip.

The massive V-differences you recorded may show just how big an effect cooling can have on the selected VID. Or maybe the 14 nm node really requires higher voltages, which would be a very damning explanation for why Broadwell is slower. What's your take on the temperature/voltage connection?
 

Dufus

Senior member
Sep 20, 2010
675
119
101
@coercitiv, thanks for the post and results.

@know of fence, I used my own software for measurements. HWiNFO would show much the same results. Nothing ran outside of Intel specs.

I can only test with what hardware I have at hand, which unfortunately is very little (or maybe that's fortunate). The difference between cores is less than 10mV.

Offset is applied across the board: if at bin x you have a default core VID/voltage of 1.000V and have applied a -100mV offset, then it will be 0.9V.
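In code terms, something like this (table values are hypothetical, just to show the across-the-board shift):

Code:
stock_vid = {8: 0.70, 16: 0.80, 24: 0.90, 32: 1.00}  # multiplier -> volts (made up)

def apply_offset(vid_table, offset_mv):
    # The same signed offset shifts every point of the VID/frequency table.
    return {mult: round(v + offset_mv / 1000.0, 3) for mult, v in vid_table.items()}

print(apply_offset(stock_vid, -100))  # every bin drops by 0.1V, e.g. 1.000 -> 0.900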

While some of the DT boards also provide an analogue reading of core voltage, the mobile systems are pretty much stuck with only reading VID/voltage from the core registers.

The voltage difference between HSW and BDW was discussed earlier in the Broadwell thread, as were temperature effects. If using dynamic core voltage, higher temperatures result in a lower core VID/voltage. Well, at least that is the case on the HSW chip tested. IIRC current is also supposed to have an effect.

First of all, AFAIK there is no way to physically measure the IVR voltages, only register values from the CPU; how accurate are they at reporting the true voltage?

Secondly, these register voltages change significantly with temperature on HSW: about a 50mV decrease for a 60°C increase at the 24x multiplier on my i7-4700MQ. Perhaps that's a reason why a BSOD can happen with undervolting when coming off a high load. I never got to test this on BDW but wouldn't be surprised if it's similar.
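As a rough linear model of that observation (about -50mV per +60°C; purely illustrative, fitted to nothing more than the figures above):

Code:
def vid_at_temp(vid_cold, t_cold_c, t_c, slope_mv_per_c=-50.0 / 60.0):
    # Core VID drops roughly linearly as temperature rises (HSW observation).
    return vid_cold + slope_mv_per_c * (t_c - t_cold_c) / 1000.0

print(round(vid_at_temp(0.950, 30, 90), 3))  # e.g. 0.95V at 30C -> ~0.90V at 90C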

Power readings from RAPL are estimates. Again, how accurate are they? I already posted earlier that these can be manipulated to make a package power of 80W show as 0.5W.
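For anyone who wants to sample RAPL themselves on Linux, the package energy counter is exposed through the powercap interface; a minimal sketch, assuming the intel_rapl driver is loaded (and remember these counters are model-based estimates, not analogue measurements):

Code:
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package domain

def read_uj():
    with open(RAPL) as f:
        return int(f.read())

e0, t0 = read_uj(), time.time()
time.sleep(1.0)
e1, t1 = read_uj(), time.time()
print(f"package power ~ {(e1 - e0) / 1e6 / (t1 - t0):.2f} W")
# A robust version must handle the counter wrapping at max_energy_range_uj.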

A better way would be to measure power delivery to the CPU from the board, taking into account static as well as dynamic power. Perhaps it is the static power savings that allow the overall package power savings that give increased performance per watt.

Certainly BDW idle power looks impressive next to HSW; not so much at full load.

http://forums.anandtech.com/showpost.php?p=37191850&postcount=1884
 

Dresdenboy

Golden Member
Jul 28, 2003
1,730
554
136
citavia.blog.de
Nice graph. It would be nice if someone could help add more datapoints (not necessarily whole curves), just to get a clue about the variation.