Are two chips clocked at exactly the same speed equal in speed?

Smartazz

Diamond Member
Dec 29, 2005
6,128
0
76
Let's say you had two identical Core 2 Duos clocked at 2 GHz each. Let's say that these chips run at exactly the same speed down to the last Hz, and that they have the same bus speed, same RAM and same cache size. If you run tests on these two chips, will benchmarks be different due to manufacturing differences between these two specific chips? Thanks in advance.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
No, they won't differ in performance metrics, because you defined those characteristics as equal in your initial boundary conditions.

Metrics that are still free to vary between the two processors include power and efficiency, since you did not specify any boundary conditions on transistor leakage or on the Vcore necessary to sustain the required clock speed.
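For illustration, here's a back-of-the-envelope Python sketch (all numbers are invented, not real Core 2 figures) of how two chips at exactly the same clock can still differ in power draw:

```python
# Two chips at the same clock can draw different power because the required
# Vcore and the leakage current vary chip to chip. Hypothetical numbers only.

def cpu_power(c_eff_farads, vcore_volts, freq_hz, i_leak_amps):
    """Approximate power: dynamic (C*V^2*f) plus static leakage (V*I_leak)."""
    dynamic = c_eff_farads * vcore_volts ** 2 * freq_hz
    static = vcore_volts * i_leak_amps
    return dynamic + static

FREQ = 2.0e9      # both chips run at exactly 2 GHz
C_EFF = 2.0e-8    # effective switched capacitance (made up)

chip_a = cpu_power(C_EFF, vcore_volts=1.20, freq_hz=FREQ, i_leak_amps=5.0)
chip_b = cpu_power(C_EFF, vcore_volts=1.30, freq_hz=FREQ, i_leak_amps=9.0)

print(f"Chip A: {chip_a:.1f} W, Chip B: {chip_b:.1f} W")
# Identical frequency, identical benchmark scores -- different power.
```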
 

imported_Tick

Diamond Member
Feb 17, 2005
4,682
1
0
I suppose you might have an immeasurably small difference in clock signal, but I would imagine that it would be incredibly small. It would be nice to hear from an EE. However, I have a hard time envisioning a metric that would not use the chip's clock as its own frame of reference, so I don't see how there could be a detectable difference.
 

Billb2

Diamond Member
Mar 25, 2005
3,035
70
86
Originally posted by: Smartazz
...will benchmarks be different...
Yeah, the identical one will be a lot faster.............

Everything functions on clock ticks. Equal clocks = equal benches.

But, what is your real question?
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
I might be wrong, but isn't there some amount of redundancy built into modern CPUs?
If so, one could imagine there being a very small difference in speed due to one chip using a different set of transistors for a given task than the other (due to, e.g., a faulty transistor).
 

Smartazz

Diamond Member
Dec 29, 2005
6,128
0
76
Originally posted by: Billb2
Originally posted by: Smartazz
...will benchmarks be different...
Yeah, the identical one will be a lot faster.............

Everything functions on clock ticks. Equal clocks = equal benches.

But, what is your real question?

My question is basically: are there factors outside of the clock speed that affect the precise speed of two different chips?
edit: f95toli's answer is basically what I'm talking about. Are there factors such as these that affect speed?
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
There is some redundancy, which could lead to longer paths to accomplish the same thing, but you are talking about a difference of 1-2 cycles at most. The thing I could see being a problem is memory access, because it simply isn't deterministic, and start-up conditions could vary between systems, causing slight differences. The performance should be the same to 99% of users and only display detectable differences under the most rigorous of testing. IMO.
 

BrownTown

Diamond Member
Dec 1, 2005
5,314
1
0
The answer is no, there is nothing inside which would affect performance; the two chips are EXACTLY the same logically speaking and will take the exact same time to do a given task, down to the very last clock cycle. That's the whole advantage of digital over analog: so long as both chips are functioning as specified, they will always get the EXACT same results out for a given set of inputs. Whereas with an analog system things like external temperature, electric fields, etc. can all affect the output, with digital, so long as the external noise is low enough that the states can be properly resolved, you will have no distortion. What the clock signal is doing is "waiting" a certain amount of time for all the signals to settle; if the chip is at 100 degrees C this might take 1.9 ns, and it might take 1.7 ns at -40 degrees C, but if the clock speed gives it 2 ns then it will work the exact same no matter what. Obviously if you overclock to a 1.8 ns wait time THEN you will get problems from external noise, because temperature changes could affect results.
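If it helps, here's a trivial Python sketch of that settling-time idea, using the same illustrative numbers as above (not real Core 2 timings):

```python
# A synchronous chip gives correct -- and therefore identical -- results as
# long as its slowest path settles before the next clock edge.

def meets_timing(worst_path_delay_ns, clock_period_ns):
    return worst_path_delay_ns <= clock_period_ns

# 2 ns clock period (500 MHz in this toy example):
print(meets_timing(1.9, 2.0))  # hot chip at 100 C  -> True, same results
print(meets_timing(1.7, 2.0))  # cold chip at -40 C -> True, same results

# Overclock so the period is only 1.8 ns and the hot chip no longer settles
# in time; now temperature and noise can change the results:
print(meets_timing(1.9, 1.8))  # -> False
```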
 

Born2bwire

Diamond Member
Oct 28, 2005
9,840
6
71
Originally posted by: f95toli
I might be wrong, but isn't there some amount of redundancy built into modern CPUs?
If so, one could imagine there being a very small difference in speed due to one chip using a different set of transistors for a given task than the other (due to, e.g., a faulty transistor).

I would be surprised if the redundant paths increased the number of logic gates, though. One path may be longer due to longer transmission lines and such, but such differences will not show up at the level of a clocked logic circuit.
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Originally posted by: BrownTown
The answer is no, there is nothing inside which would affect performance; the two chips are EXACTLY the same logically speaking and will take the exact same time to do a given task, down to the very last clock cycle. That's the whole advantage of digital over analog: so long as both chips are functioning as specified, they will always get the EXACT same results out for a given set of inputs. Whereas with an analog system things like external temperature, electric fields, etc. can all affect the output, with digital, so long as the external noise is low enough that the states can be properly resolved, you will have no distortion. What the clock signal is doing is "waiting" a certain amount of time for all the signals to settle; if the chip is at 100 degrees C this might take 1.9 ns, and it might take 1.7 ns at -40 degrees C, but if the clock speed gives it 2 ns then it will work the exact same no matter what. Obviously if you overclock to a 1.8 ns wait time THEN you will get problems from external noise, because temperature changes could affect results.

1.9 ns? In a modern CPU? It's an order of magnitude less time than that. Accessing memory is not deterministic, so it is possible that some RAM accesses could be slower than others depending on the state of the system.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: MrDudeMan
Originally posted by: f95toli
I might be wrong, but isn't there some amount of redundancy built into modern CPUs?
If so, one could imagine there being a very small difference in speed due to one chip using a different set of transistors for a given task than the other (due to, e.g., a faulty transistor).
There is some redundancy, which could lead to longer paths to accomplish the same thing, but you are talking about a difference of 1-2 cycles at most.

The redundancy doesn't really matter*. If you consider an L1 cache with a repair, the repaired path may potentially be longer, but for the chip to work, signals must propagate through the whole path before the clock tick. It's not a situation where a signal can arrive a little late, and continue on during the next clock cycle. If data arrives late, bad things happen.

*The redundancy could matter in a large cache with non-uniform access latency (maybe Montecito? ask pm). If you swapped in redundant banks that were farther away than the original banks cycle-wise, there could be a difference. However, I think that'd be difficult to implement (since the logic that figures out how long an access will take now has to be aware of repairs), and it would be easier to swap in a redundant bank that's roughly the same distance as the original.

You could theoretically get differences at clock-domain boundaries - for example, there are going to be synchronizers sitting between the logic that runs at the bus speed and the logic that runs at the core speed. A synchronizer is the one case I can think of where arriving late doesn't break things. You could theoretically have some crappy transistors on one side of the synchronizer on one of your chips, and fast transistors on the other chip, and if the data reaches those transistors at exactly the right time, the receiving side of the synchronizer would get the data on different cycles. However, I'm not sure you'd see that happen much in the real world because of testing/binning issues.
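If a toy model helps, here's a Python sketch (hypothetical clock period and arrival times, nothing taken from a real design) of how a tiny chip-to-chip difference in arrival time at a clock-domain crossing can land the data on different receiving cycles:

```python
import math

# Data launched from the bus-clock domain is captured by the first core-clock
# edge at or after its arrival. Near an edge, a tiny difference in arrival
# time shifts the capture to the next cycle.

CORE_PERIOD_NS = 0.5  # hypothetical core clock period in the receiving domain

def capture_cycle(arrival_ns, period_ns=CORE_PERIOD_NS):
    """Index of the core-clock edge that samples the incoming data."""
    return math.ceil(arrival_ns / period_ns)

# Chip A: slightly faster driver, data arrives just before an edge.
# Chip B: slightly slower driver, data arrives just after the same edge.
print(capture_cycle(9.999))   # chip A -> captured on cycle 20
print(capture_cycle(10.001))  # chip B -> captured on cycle 21
```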
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Originally posted by: CTho9305
{snip}

The redundancy doesn't really matter*. If you consider an L1 cache with a repair, the repaired path may potentially be longer, but for the chip to work, signals must propagate through the whole path before the clock tick. It's not a situation where a signal can arrive a little late, and continue on during the next clock cycle. If data arrives late, bad things happen.

*The redundancy could matter in a large cache with non-uniform access latency (maybe Montecito? ask pm). If you swapped in redundant banks that were farther away than the original banks cycle-wise, there could be a difference. However, I think that'd be difficult to implement (since the logic that figures out how long an access will take now has to be aware of repairs), and it would be easier to swap in a redundant bank that's roughly the same distance as the original.

You could theoretically get differences at clock-domain boundaries - for example, there are going to be synchronizers sitting between the logic that runs at the bus speed and the logic that runs at the core speed. A synchronizer is the one case I can think of where arriving late doesn't break things. You could theoretically have some crappy transistors on one side of the synchronizer on one of your chips, and fast transistors on the other chip, and if the data reaches those transistors at exactly the right time, the receiving side of the synchronizer would get the data on different cycles. However, I'm not sure you'd see that happen much in the real world because of testing/binning issues.

I just asked someone, and I was told Montecito does have uniform cache access timing. It uses worst-case (furthest-block) access timing to accommodate what you are talking about.

What I was mostly driving at with my latter post was the differences you get when accessing peripherals external to the CPU. Accessing RAM would most definitely be non-deterministic, because the difference in the time it takes to power on each system, regardless of the specs being the same, could cause one of the systems to hit the memory during a refresh cycle where the other system would not. Statistically, both systems should be within 1% of each other no matter what minute differences may exist, but on a case-by-case basis, especially in an OS environment, there is no way the systems would return exactly the same results, due to inherent differences in the components, because they simply can't be ideal. Internal to the CPU, though, is a different story. It definitely should be the same if the CPU specs are the same.
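Here's a toy Python model of the refresh-collision idea (the interval and latency numbers are invented; only the order of magnitude is meant to be plausible):

```python
# If a read happens to land inside a refresh window, it waits. Two "identical"
# systems power up at slightly different moments, so their refresh phases
# differ, and the same read can take a different amount of time.

REFRESH_INTERVAL_NS = 7800.0  # roughly tREFI-sized, illustrative
REFRESH_BUSY_NS = 100.0       # time the bank is busy refreshing, illustrative
BASE_LATENCY_NS = 60.0

def read_latency(time_ns, refresh_phase_ns):
    """Latency of a read issued at time_ns on a system whose refresh cycle
    is offset by refresh_phase_ns."""
    t = (time_ns + refresh_phase_ns) % REFRESH_INTERVAL_NS
    if t < REFRESH_BUSY_NS:  # collided with a refresh
        return BASE_LATENCY_NS + (REFRESH_BUSY_NS - t)
    return BASE_LATENCY_NS

# The same read, issued at the same time after the benchmark starts:
print(read_latency(50_000.0, refresh_phase_ns=0.0))      # system 1: 60.0
print(read_latency(50_000.0, refresh_phase_ns=4_650.0))  # system 2: 110.0
```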

Oh, and about the comment I made regarding longer paths, that isn't right. Not sure what I was thinking.
 

BrownTown

Diamond Member
Dec 1, 2005
5,314
1
0
Originally posted by: MrDudeMan
Originally posted by: BrownTown
{snip}

1.9 ns? In a modern CPU? It's an order of magnitude less time than that. Accessing memory is not deterministic, so it is possible that some RAM accesses could be slower than others depending on the state of the system.

Dude man, it's an example; it coulda been from 5 years ago and the point is still the same.
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Originally posted by: BrownTown
{snip}

Dude man, it's an example; it coulda been from 5 years ago and the point is still the same.

I know, I just figured you would have used a more up-to-date example since he was talking about Core 2 in the OP. Consistency is all. Nothing personal.
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
What it comes down to is that if the program executing is stand-alone and fits in the cache, it should be exactly the same. If you start accessing RAM or run an OS, you basically have very little chance of finishing the task at exactly the same time because of various things like interrupts, context switches, etc.

Also, we didn't even mention the hard disk. If you bring the hard disk into play, there is no way the program will execute in exactly the same time or number of clock cycles, because the hard drives most definitely spin up at different rates or are accessed while the heads are in different places.

In an ideal world, they would execute at the same speed, but the world isn't ideal, so they don't.
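If anyone wants to see that run-to-run noise for themselves, a quick timing loop like this (plain Python, nothing system-specific) will show it on any machine:

```python
import statistics
import time

def workload():
    # A small, fixed amount of CPU work.
    total = 0
    for i in range(200_000):
        total += i * i
    return total

samples = []
for _ in range(30):
    start = time.perf_counter()
    workload()
    samples.append(time.perf_counter() - start)

print(f"mean : {statistics.mean(samples) * 1e3:.3f} ms")
print(f"stdev: {statistics.stdev(samples) * 1e3:.3f} ms")
# Even on one machine with nothing changed, interrupts, scheduling and
# memory state produce a spread; that spread is the noise floor any
# chip-vs-chip difference would have to beat.
```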
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
This very reasonable thread degraded into Heisenberg uncertainty limits.

Let's just take it to its obvious conclusion and state the facts: since neither CPU is a wholly isolated and discrete quantum wavefunction, AND simultaneously neither CPU belongs to an irreducible class of identical elementary particles (a.k.a. fermions), we can safely conclude they are not identical and thus cannot be expected to produce identical results (eigenvalues from observations made on said eigenfunctions) except by pure, statistically improbable chance.

There, I think I may have successfully degraded this tit-for-tat thread to a low enough common denominator that we can agree the OP should effectively measure no difference outside the statistical accuracy of the measuring benchmarks themselves.
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Originally posted by: Idontcare
This very reasonable thread degraded into Heisenberg uncertainty limits.

Let's just take it to its obvious conclusion and state the facts: since neither CPU is a wholly isolated and discrete quantum wavefunction, AND simultaneously neither CPU belongs to an irreducible class of identical elementary particles (a.k.a. fermions), we can safely conclude they are not identical and thus cannot be expected to produce identical results (eigenvalues from observations made on said eigenfunctions) except by pure, statistically improbable chance.

There, I think I may have successfully degraded this tit-for-tat thread to a low enough common denominator that we can agree the OP should effectively measure no difference outside the statistical accuracy of the measuring benchmarks themselves.

You could be sarcastic, or you could realize that what we have been talking about would definitely be measurable. It may not be a big difference, but it would be noticeable, depending on your definition of precision. You can leave your attitude somewhere else.

The RAM and hard disk thing I mentioned would definitely have an impact on the execution/performance difference between "identical" systems.

He is also talking about benchmarks, which by nature are run in an OS environment. There is almost no chance of the systems producing the result at "exactly" the same time in an OS environment as complicated as Windows.

As I said previously, if you can fit the whole test in cache, then it should be the same. Otherwise, it statistically will be close but not the same, and that has nothing to do with your sarcastic quip.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: MrDudeMan
Originally posted by: Idontcare
{snip}

You could be sarcastic, or you could realize that what we have been talking about would definitely be measurable. It may not be a big difference, but it would be noticeable, depending on your definition of precision. You can leave your attitude somewhere else.

The RAM and hard disk thing I mentioned would definitely have an impact on the execution/performance difference between "identical" systems.

He is also talking about benchmarks, which by nature are run in an OS environment. There is almost no chance of the systems producing the result at "exactly" the same time in an OS environment as complicated as Windows.

As I said previously, if you can fit the whole test in cache, then it should be the same. Otherwise, it statistically will be close but not the same, and that has nothing to do with your sarcastic quip.

You amuse me with your immature response, but if you read the OP's post it quite clearly intends to exclude the secondary effects of the rest of the computer system's components.

Try to stay on subject and you won't become so irritated when others aren't swayed to engage in your pointless side-discussion.

If you had one computer and swapped the "identical" CPUs...what benchmark results would you get from sequentially benchmarking the two "identical" CPUs in said truly identical computer infrastructure?

The benchmark results still would not be identical, because of run-to-run variation that would be observed even if the CPUs had never been swapped.

But is the benchmark result variation attributable to the CPUs being swapped out? Not at all. If the CPUs are identical, then their performance is identical; the OP asked a self-consistent question, the same as asking "if 4 = 4, then does 4 = 4?" There is no sarcasm here, but your ego is being applied where it need not be.
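To make the attribution point concrete, here is a small sketch with made-up benchmark scores: the gap between the two sets of runs has to be judged against the run-to-run spread you would see without swapping anything.

```python
import statistics

# Hypothetical benchmark scores from the same machine, before and after
# swapping in the second "identical" CPU.
runs_cpu_a = [1412, 1405, 1418, 1409, 1411, 1407, 1415, 1410]
runs_cpu_b = [1410, 1416, 1406, 1413, 1408, 1412, 1404, 1414]

gap = abs(statistics.mean(runs_cpu_a) - statistics.mean(runs_cpu_b))
noise = statistics.stdev(runs_cpu_a + runs_cpu_b)  # rough run-to-run spread

print(f"gap between means: {gap:.2f}, run-to-run stdev: {noise:.2f}")
# If the gap sits well inside the noise, the benchmark cannot attribute any
# difference to the CPUs themselves.
```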
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Originally posted by: Idontcare

You amuse me with your immature response, but if you read the OP's post it quite clearly intends to exclude the secondary effects of the rest of the computer system's components.

Try to stay on subject and you won't become so irritated when others aren't swayed to engage in your pointless side-discussion.

If you had one computer and swapped the "identical" CPUs...what benchmark results would you get from sequentially benchmarking the two "identical" CPUs in said truly identical computer infrastructure?

The benchmark results still would not be identical, because of run-to-run variation that would be observed even if the CPUs had never been swapped.

But is the benchmark result variation attributable to the CPUs being swapped out? Not at all. If the CPUs are identical, then their performance is identical; the OP asked a self-consistent question, the same as asking "if 4 = 4, then does 4 = 4?" There is no sarcasm here, but your ego is being applied where it need not be.

Sorry if you think there was ego involved, but there really wasn't. I have no ego problem as I am very aware of what I do and do not know. I actually walked around the building and asked several people this exact question and posted the sum of their statements.

This is the highly technical forum, so we took the topic to a highly technical level. You took it to an absurd level. The fact remains that, as I said previously, if the test is run only from cache, the results should be exactly the same. If there is any I/O to any other peripheral, they will be different. Beyond answering the OP's question in a different but equally valid way than yours, he may have actually learned something from that.
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
Originally posted by: f95toli
I might be wrong, but isn't there some amount of redundancy built into modern CPUs?
If so, one could imagine there being a very small difference in speed due to one chip using a different set of transistors for a given task than the other (due to, e.g., a faulty transistor).

There's parity checking on many structures when possible (which is rare), and caches have ECC, but I haven't seen any redundant logic in the section I work in. It's much cheaper to toss a small, broken processor than to make huge processors that always work.

Going back to the topic: non-deterministic factors such as clock skew can cause very minute variations in throughput between two identically clocked processors, even if all other things are considered equal. If there are asynchronous elements in the design (e.g., Montecito's big cache), then no exact correlation should be expected for the cache itself; however, it is possible for cache access latency to have no effect on throughput, depending on the code being run.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
dmens, I don't think clock skew would have an effect here (not counting skew to the asynchronous elements - I'd lump that in with "random variation affecting asynch stuff"). The number of ticks a given flip flop sees per second doesn't depend on skew. I agree with the rest of what you said.

Sorry if you think there was ego involved, but there really wasn't.
{snip}
This is the highly technical forum, so we took the topic to a highly technical level. You took it to an absurd level.
I thought Idontcare was trolling and ignored him. Smartazz said "identical", and I think that's what you and I answered.
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
Originally posted by: CTho9305
dmens, I don't think clock skew would have an effect here (not counting skew to the asynchronous elements - I'd lump that in with "random variation affecting asynch stuff"). The number of ticks a given flip flop sees per second doesn't depend on skew. I agree with the rest of what you said.

Yes, that was poor wording; perhaps "accumulated clock offset over time due to manufacturing variations" is a better way to put it. Guess I've been dealing with the leaf level of clock distribution too long.

What I'm thinking about is that even if the same bus clock is supplied to two cores with identical multipliers, variations in the PLL would result in a slightly different core clock for both, hence a performance difference. I believe the AMD X2 "dual core hotfix" that released way back was meant to address this, because the slight difference caused the timestamp counters in the two cores to get out of sync and confused the OS.