How does Intel bin same-class processors if they all overclock so well?

GundamF91

Golden Member
May 14, 2001
1,827
0
0
Just wondering how Intel decides to bin the same-class processors (E2xxx, E4xxx, etc), into 8x, 9x, or 10x. Considering that within the same-class processors, low multiplier ones seem to overclock nearly as well as higher multiplier ones, and that they all overclock much more than their spec, shouldn't Intel try to sell all the chips as higher spec ones?
 

rchiu

Diamond Member
Jun 8, 2002
3,846
0
0
No, because there will be people who only want to pay $60 for a CPU, and Intel needs to sell some processors at that price point. At the same time, they need to mark those chips at a lower spec, or else no one will pay $150-$200+ for a higher-spec CPU.
 

coldpower27

Golden Member
Jul 18, 2004
1,677
0
76
Originally posted by: GundamF91
Just wondering how Intel decides to bin the same-class processors (E2xxx, E4xxx, etc), into 8x, 9x, or 10x. Considering that within the same-class processors, low multiplier ones seem to overclock nearly as well as higher multiplier ones, and that they all overclock much more than their spec, shouldn't Intel try to sell all the chips as higher spec ones?

Not really, because that would mean lower yields. What a chip will overclock to doesn't represent what Intel can guarantee over several years, hence the more conservative binning. Besides, Intel is supposed to compete with AMD, not crush them into oblivion.

Also, as the above poster says, not everyone can afford the expensive processors; there needs to be something for as many price points as possible.
 

Aluvus

Platinum Member
Apr 27, 2006
2,913
1
0
2 reasons:

1. They may opt to bin everything below its potential, either to give them headroom later if they need to start selling higher-performance parts (this generates more media coverage than launching faster parts right out of the gate) or to ensure that yields are very high.

2. Individual parts may be binned below what they are capable of (for instance, a part that could comfortably be sold as an E6420 might be sold as an E6320) in order to meet immediate demand. This means Intel can't charge as much, but also ensures that orders don't go unfilled.
 

Zap

Elite Member
Oct 13, 1999
22,377
2
81
Originally posted by: GundamF91
shouldn't Intel try to sell all the chips as higher spec ones?

Not everyone overclocks. Certainly not the majority of computer users who just buy a Dell or HP, and not even everyone who builds their own will overclock.

Also, if Intel just marks them all at the highest speed, then how will they price the chips? Will they price them at the highest price, thus limiting sales? Will they price them at the lowest price, thus losing margin?
 

GundamF91

Golden Member
May 14, 2001
1,827
0
0
I think it's true that Intel may be intentionally lowering the bar so that they have something to compete with at the low end without diluting the faster processors' price points. Otherwise they'd have to sell the faster processors at a lower price, which would hurt their profits on the high end. But if they didn't lower the price, they'd turn away from the low-cost market and leave that entire segment open for AMD to take over.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
There could be other possibilities:
1. The chips are already close to the TDP when operated at the maximum allowed temperature and voltage. In that case, even if they'd work at faster speeds, they'd consume too much power when run faster.
2. There's a lot of margin built in because Intel expects significant slowdown of the transistors due to aging effects (NBTI and hot-electron effects in particular), so the chips will OC really well for a few years, especially if kept at low voltages and temperatures.
3. The chips won't last long enough if clocked higher. If the metal wiring in some circuits wasn't thick enough to carry large amounts of current, the frequency would have to be limited to prevent the chips from dying after just a few years. Alternatively, the voltage might be limited by wearout effects like TDDB, and operating at a higher voltage would result in unacceptable failure rates after a few years (even though the chips work fine for a while).
4. There are critical paths in logic that isn't normally used. For example, x86 has a feature called "segments" which aren't used by modern OSes. If a critical path occurred in logic like this, most people wouldn't notice failures.
5. There are critical paths in logic that doesn't break things. For example, if the branch prediction logic doesn't work properly, the CPU would still work fine, but slower.

4 and 5 seem unlikely to me.
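Point 1 above can be sketched numerically. A common first-order model is that dynamic CPU power scales as P ≈ C·V²·f (effective switched capacitance times voltage squared times frequency), so a part sitting near its TDP at stock settings can exceed the budget at a higher bin even if the silicon itself is stable. All the numbers below (TDP, capacitance term, voltages, clocks) are made-up illustrations, not real Intel specs:

```python
def dynamic_power(c_eff: float, volts: float, freq_ghz: float) -> float:
    """First-order dynamic power estimate in watts: P = C * V^2 * f."""
    return c_eff * volts ** 2 * freq_ghz

TDP = 65.0      # hypothetical 65 W power budget
C_EFF = 15.0    # hypothetical effective-capacitance term

# Stock bin: fits under the budget.
stock = dynamic_power(C_EFF, volts=1.30, freq_ghz=2.4)   # ~60.8 W

# Higher bin (more voltage needed for the higher clock): blows past it.
faster = dynamic_power(C_EFF, volts=1.35, freq_ghz=3.0)  # ~82.0 W

print(f"stock:  {stock:.1f} W (within {TDP} W)")
print(f"faster: {faster:.1f} W (over budget, even if the chip is stable)")
```

The V² term is why overclockers who raise voltage see power and heat climb much faster than clock speed does, and why a vendor binning against a fixed TDP leaves frequency headroom on the table.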