
What is a "clock cycle?"

aplefka

Lifer
Okay, this is a pretty basic question I'm sure, so I expect some decent answers. When articles talk about a "clock cycle," what exactly are they referring to? How does a CPU cycle? When I hear that word, I think of something that goes in a circle repeating itself over and over; is that kind of what a cycle is? I figure it's how long it takes to complete something, but I'm not sure what. Any help?
 
Each processor operates at a particular frequency: the number of cycles per second that the device is working at. For example, my AMD 3400+ operates at 2200 MHz, which is 2.2 billion cycles per second, so one cycle takes 1/2.2 billionth of a second (about 0.45 nanoseconds) to complete. That is one clock cycle for my CPU. Depending on the instruction, completing it within the CPU could take anywhere from 1 cycle to X cycles.
Hopefully this is the information you were expecting.
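The frequency-to-period arithmetic in that post can be sketched in a few lines of Python (the 2200 MHz figure is the poster's Athlon 64 3400+; nothing else here is from a real datasheet):

```python
# One clock cycle is the inverse of the clock frequency.
freq_hz = 2_200_000_000  # 2200 MHz = 2.2 GHz = 2.2 billion cycles per second

period_s = 1 / freq_hz       # length of one clock cycle in seconds
period_ns = period_s * 1e9   # same value in nanoseconds

print(f"One clock cycle lasts about {period_ns:.3f} ns")
```

Running it shows a single cycle on that CPU lasts roughly 0.455 nanoseconds.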
 
A cycle is referring to a fetch-execute cycle.
The CPU is constantly doing this fetch-execute process:
fetch the data from memory, decode the instruction, execute it.
 
Originally posted by: AnnihilatorX
A cycle is referring to a fetch-execute cycle.
The CPU is constantly doing this fetch-execute process:
fetch the data from memory, decode the instruction, execute it.

That != clock cycle.
 
A clock goes tick, tock, tick, tock. The CPU's clock goes "1", "0", "1", "0", performing operations in sync with that clock. A "cycle" is when the signal completes one whole period, in this case going from "1" to "0" and back to "1". The length [time] of this period can be measured, and thus a frequency can be calculated.
 
You can think of the way that a processor pipeline works as an assembly line at, for example, an equipment manufacturer. At the front of the assembly line you have some guy getting the main parts from the parts bins. He goes over and fetches the parts and pulls them onto the assembly line, which is a movable conveyor belt. Now imagine that this conveyor belt actually stops and starts rather than moves continuously. So as opposed to seeing the stuff constantly move by, it stops in front of a station, waits for a fixed period of time for the guy at that station to complete his task, and then moves on.

Imagine that stage 2 on this assembly line is a guy whose job is to attach a heatsink to the parts that were loaded on in the first stage. This job takes him about 2 minutes. So in order for him to actually get the job done, the belt needs to stop with a part in front of him for a minimum of two minutes.

This is an analogy for a CPU. A CPU gets instructions and data from memory, does some operation on them, and then writes them back to memory. It breaks these three tasks into numerous substeps - much as you would in an assembly line. The belt stopping and starting is the clock. How often it stops and starts is the clock frequency, and the interval between starts is the clock cycle. So here, our amazing belt is clocking at about 0.008Hz (1 Hertz is one "thing" per second; here we are stopping and starting once every 2 minutes, so that's 1 / (2 x 60s) ≈ 0.008Hz).

Now if you tried to raise the clock frequency of the belt, guy #2 on the line is not going to have enough time to do his job, and he will send only a partially attached heatsink on to guy #3. Guy #2 will start introducing errors into the line because he doesn't have enough time to do his job. So if you are the line supervisor, the only way you could raise the speed of the belt is to decrease how much work everyone does, so that even the slowest person on the line can still keep up with the new belt speed.

Patrick Mahoney
Senior Microprocessor Design Engineer
Intel Corp.
Fort Collins, CO
 
A clock cycle is basically a synchronization signal. It's a signal that's supposed to simultaneously (and keep in mind I say supposed to) tell parts of the circuit "go ahead on the next task". The most basic example is that of a flip-flop. One of the inputs to a flip-flop is a clock signal. When it gets this signal, it updates the data inside it. It holds the data otherwise. In a microprocessor, pipeline stages are divided by using registers (multiple flip-flops), and these registers have a clock input. When a clock signal is sent, these registers update their data (which is usually the next instruction). The clock signal is a method so that all the registers (dividing all the pipeline stages) can (somewhat) simultaneously update themselves so the instructions and data flow from one stage to the next. The amount of time you have to wait before issuing another clock signal (telling the registers to update) is a clock cycle. Obviously the faster the circuit is between the registers, the faster your clock cycle can be.
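That register-update behavior can be mimicked with a toy Python sketch (everything here is illustrative; real flip-flops are hardware, not function calls):

```python
# Toy model of clocked pipeline registers: on each clock edge, every
# register simultaneously latches the value its predecessor held just
# before the edge, so data marches one stage down the pipe per tick.

def tick(registers, new_input):
    """One clock edge: all registers capture their inputs at once."""
    old = registers[:]            # snapshot of values before the edge
    registers[0] = new_input      # first register latches the new input
    for i in range(1, len(registers)):
        registers[i] = old[i - 1] # register i latches what i-1 held
    return registers

regs = [None, None, None]         # three empty pipeline registers
for instr in ["I1", "I2", "I3"]:
    tick(regs, instr)

print(regs)  # ['I3', 'I2', 'I1'] -- each instruction one stage along
```

The snapshot (`old`) is what makes the update "simultaneous": every register sees the pre-edge values, just as real flip-flops all sample on the same clock edge.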
 
Okay, so this confirmed at least one thing for me: a clock cycle is a very short amount of time, as in such a short amount of time that it's unimportant for me to know how long a clock cycle for my various systems is. Right?

Another question I now have, based on the first response: in theory a 3400+ (2.2 GHz) can handle how many tasks in a second? 2.2 billion?

And also, relating to the flip-flop thing, is that why supercomputers are measured in teraflops?
 
I believe the confusion regarding clock cycles is a result of pipelining.
Before pipelining, a clock cycle was the progression of a single instruction through all the various stages in the core.
Only when that particular instruction had been retired could another instruction enter the processor core and another cycle begin.
With pipelining, instructions could enter the processor pipeline in rapid succession, corresponding to each iteration along the pipeline, or clock pulse.
These iterations are referred to as clock cycles, because at the end of each, an instruction (or two) is retired, once the pipeline is full.
 
Originally posted by: aplefka
Okay, so this confirmed at least one thing for me: a clock cycle is a very short amount of time, as in such a short amount of time that it's unimportant for me to know how long a clock cycle for my various systems is. Right?

Another question I now have, based on the first response: in theory a 3400+ (2.2 GHz) can handle how many tasks in a second? 2.2 billion?

And also, relating to the flip-flop thing, is that why supercomputers are measured in teraflops?

A clock cycle incorporates the whole time a clock signal is low and the whole time that it is high. Some logic makes calculations on both "edges" of the clock (the transitions from low to high and from high to low). Your computer's processor clock cycle is the inverse of its clock speed (e.g. your 2.2 GHz computer completes 2.2 billion clock cycles in one second). It is important to know your clock speed, but it doesn't directly translate to performance.

A 2.2 GHz processor won't handle 2.2 billion instructions in one second, as each clock does not handle the processing of a whole instruction. I could explain to you pipelining, out-of-order execution, superscalar, etc., but suffice it to say that it's very complicated to determine exactly how many clocks it typically takes to execute one instruction. Or, in the case of modern superscalar CPUs, how many instructions can execute per clock 🙂.

The "flip-flop" of a clock has nothing to do with the term teraflop. "FLOPS" stands for Floating Point Operations Per Second, so a teraflop would be one trillion floating point operations per second. Floating point operations are not really used by office and internet type programs, but are used heavily by, e.g., games and graphics.
 
Maybe the term "cycle" comes from the circular nature of the waveform that is used as a clock signal, perhaps something like a sine or square wave (I don't know myself), going from 0 to 360 degrees and then back to 0.
 
Originally posted by: interchange

A 2.2 GHz processor won't handle 2.2 billion instructions in one second, as each clock does not handle the processing of a whole instruction. I could explain to you pipelining, out-of-order execution, superscalar, etc., but suffice it to say that it's very complicated to determine exactly how many clocks it typically takes to execute one instruction. Or, in the case of modern superscalar CPUs, how many instructions can execute per clock 🙂.

Pipelining allows the completion of an instruction each clock pulse.
A particular instruction will of course take a minimum number of clock cycles to complete, equal to the number of pipeline stages, but don't forget, there will be another instruction right behind it.
Let's look at an example:

On a simple 4-stage pipeline, a single instruction will take a minimum of 4 cycles/iterations to complete.

Stage 1: I1
Stage 2:
Stage 3:
Stage 4:

I1 represents an instruction in the first pipeline stage, at the first cycle.

Looked at from the perspective of a single instruction, completion takes 4 cycles.
So what if we introduce a second instruction?
Will two instructions take 2 X 4 cycles to complete?
No.

Stage 1: I2
Stage 2: I1
Stage 3:
Stage 4:

Instruction 1 will be complete in four cycles, but because instruction 2 is trailing I1 by one clock cycle/iteration, it will complete one cycle later.
So in five cycles, two instructions will be complete.
In both the above examples, the pipeline has only been partially full, so the result has not been one instruction per cycle.

Now, if the pipeline is full, we get this:

Cycle:   C1  C2  C3  C4  C5  C6
Stage 1: I1  I2  I3  I4  I5  I6
Stage 2:     I1  I2  I3  I4  I5
Stage 3:         I1  I2  I3  I4
Stage 4:             I1  I2  I3

Once we reach Cycle 4 (C4), we're getting an instruction completed each clock cycle.
Completion rate at this point is therefore one instruction per cycle.

This is pipelining.
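The cycle counts in that walkthrough follow a simple formula: on an ideal pipeline with no stalls, n instructions finish in (stages + n - 1) cycles, not stages × n. A quick Python sketch of the 4-stage example (the function name is just mine):

```python
# Total cycles for n instructions on an ideal, stall-free pipeline:
# the first instruction takes `stages` cycles to drain through, and
# every instruction after it completes one cycle later.

def cycles_to_complete(n_instructions, stages=4):
    if n_instructions == 0:
        return 0
    return stages + n_instructions - 1

print(cycles_to_complete(1))    # 4   -- one instruction, all 4 stages
print(cycles_to_complete(2))    # 5   -- I2 trails I1 by one cycle
print(cycles_to_complete(100))  # 103 -- throughput approaches 1/cycle
```

For large n the extra (stages - 1) startup cycles become negligible, which is why a full pipeline is said to retire one instruction per cycle.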


 
Originally posted by: Bassyhead
Maybe the term "cycle" comes from the circular nature of the waveform that is used as a clock signal, perhaps something like a sine or square wave (I don't know myself), going from 0 to 360 degrees and then back to 0.

Ideally, it's a square wave, but most real clocks I've seen scoped looked a lot closer to a sine 😉.
 
Sometimes Ars Technica is beyond me which is why I asked here. Thanks for the link, I'll look through it and see if I can understand it.
 
And also, relating to the flip-flop thing, is that why supercomputers are measured in teraflops?

No, flip-flops are storage elements. "FLOPS" in the sense of performance is an abbreviation for "floating point operations per second". A floating point operation is math involving non-integers (numbers we'd represent with a decimal point, like 1.5). The FLOPS rating of a computer gives you a general ballpark estimate of how fast it will be at certain tasks.

Originally posted by: aplefka
Okay, so this confirmed at least one thing for me: a clock cycle is a very short amount of time, as in such a short amount of time it's unimportant for me to know how long a clock cycle for my various systems are. Right?
No, a given architecture with a higher clock speed (shorter cycle) will perform faster. Even though 1 nanosecond is a very short time, half a nanosecond is shorter, and 2GHz CPUs are usually a lot faster than 1GHz CPUs.

Another question I now have, based on the first response: in theory a 3400+ (2.2 GHz) can handle how many tasks in a second? 2.2 billion?
AMD chips can do 3 floating point operations per cycle, so your CPU can do 6.6 GFLOPS (in theory). The maximum sustained performance of an Athlon or Opteron would be 3 instructions per cycle (for the sake of this discussion, let's say an x86 instruction is an operation).
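The peak-rate arithmetic in that answer is just clock frequency times operations per cycle; a quick sketch (the 3 ops/cycle figure is the poster's claim about Athlon/Opteron, not something I'm vouching for):

```python
# Theoretical peak FLOPS = clock frequency x floating point ops per cycle.
clock_ghz = 2.2        # Athlon 64 3400+ clock speed from the thread
flops_per_cycle = 3    # claimed peak FP operations per cycle

peak_gflops = clock_ghz * flops_per_cycle
print(f"Theoretical peak: {peak_gflops:.1f} GFLOPS")  # 6.6 GFLOPS
```

Real sustained throughput is lower, since (as noted above) pipelines stall and not every cycle retires the peak number of operations.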
 
Originally posted by: BitByBit
Before pipelining, a clock cycle was the progression of a single instruction through all the various stages in the core.
Only when that particular instruction had been retired could another instruction enter the processor core and another cycle begin.

No.

Before pipelining, one instruction took many clock cycles to complete.
Typically 7 or more, with floating point ops going up to 30 or more.
It is true that the processor worked on one instruction at a time, but that is more the "cycling" of an instruction. The clock is used to synchronize and drive logic switching.

A clock cycle is a synchronisation of a latticework of switches changing to a new logical state. This is not an instantaneous process, as lots of electrons need to travel between emitters and bases to make transistors open or close. Also, this toggle needs to travel through entire chains of transistors.
 