How does Ethernet get its clock?

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
All the talk about clocking means it's happy fun digital communication education time! So how does Ethernet correctly recover the clock at these blazing speeds of 100 Mbps, 1 or 10 Gbps?

You might think the transmitter and receiver could each run their own internal clocks and just sample based on those. That wouldn't work, because the two clocks could never be guaranteed to stay synchronized. Thankfully, Ethernet frames carry a Frame Check Sequence (FCS), which is nothing more than a CRC-32 checksum of the entire frame, so the receiver knows if the frame has been modified. The receiver runs the same CRC over the entire frame; if the FCS matches, it knows the frame is intact. If not, it's tossed and counted as "bad". This is why "the network" should never hand you corrupted data: there is built-in error checking.
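Here's a little sketch of the idea in Python. Ethernet's FCS uses the same CRC-32 polynomial that `zlib.crc32` implements, though real hardware shifts the bits out serially with its own bit ordering, so treat this as an illustration of the receive-side check rather than a wire-accurate implementation:

```python
import zlib

def fcs_ok(frame: bytes) -> bool:
    """Return True if the last 4 bytes of `frame` are a valid CRC-32
    over the rest of it. This mirrors what a receiver does: recompute
    the checksum and compare it against the transmitted FCS."""
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload) == int.from_bytes(fcs, "little")

# Build a demo "frame": arbitrary bytes with their CRC-32 appended,
# the way a transmitter appends the FCS before sending.
data = b"\x00\x01\x02\x03hello ethernet"
frame = data + zlib.crc32(data).to_bytes(4, "little")

print(fcs_ok(frame))                              # True: frame intact
print(fcs_ok(bytes([frame[0] ^ 1]) + frame[1:]))  # False: one bit flipped
```

Flip a single bit anywhere in the frame and the recomputed CRC no longer matches, so the receiver silently drops the frame.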

So about that clock...on the front end of every single Ethernet frame is what's called the preamble. It's a sequence of 64 bits of ones and zeros (high/low voltage) in a specific pattern that allows the receiver to synchronize its clock for this specific frame: seven bytes of alternating 10101010, capped off by a one-byte Start Frame Delimiter of 10101011.

So the preamble is what sets the receiver's clock, and every single Ethernet frame includes these 64 bits. When folks talk about the normal Ethernet frame size of 1514 bytes, this does NOT include the preamble. This is why you can never achieve 100% utilization on Ethernet: the preamble is always there and is not part of the frame. At best, using maximum frame sizes, you can only get roughly 98.5 percent, and it's even worse with smaller frames because the preamble then becomes a larger percentage of the total bits sent.

To learn more, google "Ethernet frame structure".
 

ScottMac

Moderator, Networking, Elite member
Mar 19, 2001
5,471
2
0
And it is the preamble that caused one of the earlier limits on how many devices the frame could pass through ... each receiver takes some of the bits to sync up ... those bits are lost ... they were not part of the frame as it continued its journey.

With enough repeaters in the path, the receiver could no longer sync, and the frame was lost.

Switches changed that, because they regenerate the entire frame (including new preambles) where the hubs/repeaters did not.

(there are / were other limitations as well related to "bit distance" in the cable, but that's not the topic)
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
Was that the deal with the Class I and Class II hubs/repeaters of 100BASE-T? IIRC your maximum network diameter was like 2 or 3 hubs in series, and it mattered greatly whether it was Class I or Class II (I think that is what they were called).
 

drebo

Diamond Member
Feb 24, 2006
7,034
1
81
I always thought the limit on the number of hubs was a function of the size of the collision domain and the fact that it took too long to decide whether or not the wire was clear.

Interesting.
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
I always thought the limit on the number of hubs was a function of the size of the collision domain and the fact that it took too long to decide whether or not the wire was clear.

Interesting.

There's that as well. You aren't supposed to have "late collisions", meaning a collision after the first 64 bytes. That normally means you have a cable that is too long or too many hubs (network diameter too big). The transmitter assumes its frame got through if no collision occurs within the first 64 bytes.

There is propagation delay because the bits don't move at the speed of light, more like 60% of it, plus the delay added each time a hub repeats the signal.
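You can back-of-envelope why that 64-byte minimum bounds the network diameter: a collision must be detectable within the 512-bit slot time, i.e. the signal has to reach the far end and the collision has to make it back while the transmitter is still sending. Assuming the roughly 0.6c propagation speed mentioned above and ignoring repeater delays (which eat into this budget in practice):

```python
C = 3.0e8              # speed of light in a vacuum, m/s
VELOCITY_FACTOR = 0.6  # rough signal speed in copper, per the post above
SLOT_BITS = 512        # 64-byte minimum frame = 512 bit times

def cable_budget_metres(bit_rate: float) -> float:
    """One-way distance a signal can travel such that a collision
    at the far end still gets back within the 512-bit slot time."""
    slot_time = SLOT_BITS / bit_rate           # seconds allowed, round trip
    return slot_time * C * VELOCITY_FACTOR / 2 # halve for one way

for rate, name in ((10e6, "10 Mbps"), (100e6, "100 Mbps")):
    print(f"{name}: ~{cable_budget_metres(rate):,.0f} m budget")
```

At 10 Mbps the raw budget is a few kilometres, but at 100 Mbps it shrinks to under 500 m before you even charge for hub and transceiver delays, which is why the 100BASE-T collision domain was so tight and the Class I/II repeater rules mattered so much.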

An interesting note: you can actually catch collisions with a sniffer or packet capture. You'll see the preamble pattern of the other frame at the end of the one it collided with.

But all this is nostalgic because switches and full duplex did away with any of those concerns. But in the high-performance data center switching world we're actually getting back to "cut-through" switching, where the frame is switched as soon as the destination MAC is received, so the switch is technically regenerating and sending the frame while the ass end of it is still coming in.
 

drebo

Diamond Member
Feb 24, 2006
7,034
1
81
But all this is nostalgic because switches and full duplex did away with any of those concerns.

Right, but I feel it's just as important to understand why we have what we have today as it is to know how to use what we have today.

I have several colleagues who don't understand the basics of how Ethernet actually works or what the difference is between single mode and multimode fiber and why a kink would cause issues. Understanding ftw!
 

Evadman

Administrator Emeritus, Elite Member
Feb 18, 2001
30,990
5
81
I have several colleagues who don't understand why a kink would cause issues.

1. Because the Kinky Mod likes issues?
2. Because the 1's are pointy and get stuck?
 

ScottMac

Moderator, Networking, Elite member
Mar 19, 2001
5,471
2
0
There's that as well. You aren't supposed to have "late collisions", meaning a collision after the first 64 bytes. That normally means you have a cable that is too long or too many hubs (network diameter too big). The transmitter assumes its frame got through if no collision occurs within the first 64 bytes.

There is propagation delay because the bits don't move at the speed of light, more like 60% of it, plus the delay added each time a hub repeats the signal.

An interesting note: you can actually catch collisions with a sniffer or packet capture. You'll see the preamble pattern of the other frame at the end of the one it collided with.

But all this is nostalgic because switches and full duplex did away with any of those concerns. But in the high-performance data center switching world we're actually getting back to "cut-through" switching, where the frame is switched as soon as the destination MAC is received, so the switch is technically regenerating and sending the frame while the ass end of it is still coming in.

"You'll see the pre-amble pattern of the other frame" ...

It shows up as either a short string of A's or 5's (A is "1010" and 5 is "0101" in the binary representation of a Hex "nibble") depending on which bit started the fragment that the capture caught.
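A quick sketch makes the A's-or-5's effect obvious: hex decodes four bits per nibble, so the same alternating bit stream reads as all A's or all 5's depending purely on which bit the captured fragment happens to start on.

```python
def to_hex(bitstring: str) -> str:
    """Render a bit string as hex, one character per 4-bit nibble."""
    return "".join(f"{int(bitstring[i:i + 4], 2):X}"
                   for i in range(0, len(bitstring), 4))

bits = "10" * 16                # 32 alternating bits, like a preamble
print(to_hex(bits))             # fragment starts on a 1 -> AAAAAAAA
print(to_hex(bits[1:] + "1"))   # fragment starts on a 0 -> 55555555
```

Same wire pattern, two different hex spellings, which is exactly what you see at the tail of a collided frame in a capture.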

If the "sniffer" device doesn't point it out in its expert analysis, you can see them in the hex dump portion of the screen. This is why a straight text decode may not be enough ... each section of a capture trace can make certain types of problems easier to spot.

If learning how to read a decode interests you, one of the best in the business, Laura Chappell, has some great educational material on her site (http://www.packet-level.com), and she co-founded "Wireshark University", which also has classes, courses, and learning materials available (http://www.wiresharktraining.com).

I have not taken the classes from Wireshark U, but some friends have, and they say it's as good as or better than the old Network General CNX courses (Network General: creators of the brand name "Sniffer" many years ago ... CNX is like a CCNP level of protocol decode).
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
The only reason I know about seeing the collision was from sniffer classes. Showing your age there, ScottMac.
 

ScottMac

Moderator, Networking, Elite member
Mar 19, 2001
5,471
2
0
Yeah, Old Fart Networker, that's me ... I have my walker WiFi enabled. ;-)
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
Yeah, Old Fart Networker, that's me ... I have my walker WiFi enabled. ;-)

You can generally age people when they talk about networking. Ring this, FDDI that and when a router was a router, it routes, not some little box that does network address translation.

The reason token ring didn't have a preamble was that the token itself carried the clock.

Also, the 1514-byte frame size was chosen in part because of collisions and back-off timers. Now it's pretty much the standard size, even on the internet.

I need to reach deep into the brain to remember how SONET gets its clock. The whole protocol and technology is incredible.
 