Question on Ethernet latency between Store/Fwd and Cutthrough switches

BespinReactorShaft

Diamond Member
Jun 9, 2004
3,190
0
0
I have a general question about the Ethernet latency figures compared between:
- Store and Forward (SF)
- Cut-through (CT)
modes in Ethernet switches.

A network operator recently performed testing on two 100 Mbit/s Ethernet switches connected via a point-to-point link. The delay between a frame entering Switch 1 and exiting Switch 2 was measured. The results are as follows:
http://pics.bbzzdd.com/users/m.../SF_versus_CT_data.jpg

Their definitions of SF and CT are shown below:
http://pics.bbzzdd.com/users/ming2020/SF_versus_CT.jpg

I have looked at them and still do not understand. As I see it, SF should have a larger latency than CT, yet for some reason the results show the opposite.

I'd appreciate it if someone could help clarify this. I must've missed something here...

Thanks.
 

ScottMac

Moderator, Networking, Elite member
Mar 19, 2001
5,471
2
0
The term "Latency" as it applies to switch-through times is somewhat controversial.

When Kalpana introduced Ethernet switching, they defined latency as the time from the first byte in until the first byte left the switch. This obviously put the S&F crowd at a serious disadvantage.

The S&F manufacturers decided to use the time from the last byte in until the first byte out to make their numbers a little more competitive / marketable.

Some also used last byte in until last byte out.

Some used first byte in until last byte out.

It eventually boiled down to two major definitions: First In, First Out (FIFO) and First In, Last Out (FILO).

Raw numbers are meaningless until you find out which evaluation method the numbers represent.

In general terms, latency represents the total time to get the information through the network or device: delay.
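To see how far apart these definitions land, here's a minimal sketch (not from the thread; the 100 Mbit/s rate and the internal processing delay are illustrative assumptions) that computes all four definitions from the same event timestamps on a store-and-forward switch:

```python
# Sketch: how the competing "latency" definitions diverge for a
# store-and-forward switch. Assumes a 100 Mbit/s (Fast Ethernet) link
# and a made-up internal processing delay; numbers are illustrative.

BANDWIDTH_BPS = 100_000_000  # Fast Ethernet

def serialization_time(frame_bytes):
    """Seconds needed to clock one frame onto (or off) the wire."""
    return frame_bytes * 8 / BANDWIDTH_BPS

def sf_event_times(frame_bytes, processing_delay):
    """Event timestamps for a store-and-forward switch.

    The first bit arrives at t = 0; the switch must receive the whole
    frame, process it, then retransmit it bit by bit.
    """
    s = serialization_time(frame_bytes)
    first_in, last_in = 0.0, s
    first_out = last_in + processing_delay
    last_out = first_out + s
    return first_in, last_in, first_out, last_out

def latencies(frame_bytes, processing_delay):
    """The four measurement definitions, computed from the same events."""
    fi, li, fo, lo = sf_event_times(frame_bytes, processing_delay)
    return {
        "FIFO": fo - fi,  # first byte in -> first byte out
        "FILO": lo - fi,  # first byte in -> last byte out
        "LIFO": fo - li,  # last byte in  -> first byte out
        "LILO": lo - li,  # last byte in  -> last byte out
    }
```

For a 64-byte frame, the FILO figure exceeds the FIFO figure by exactly one serialization time (5.12 microseconds at 100 Mbit/s), which is why the same box can look fast or slow depending on which definition the datasheet quotes.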

FWIW

Scott


 

BespinReactorShaft

Diamond Member
Jun 9, 2004
3,190
0
0
Here's an observation from an Eth guy I know (credits where due of course):

"The diagram looks like output from a SmartBits measurement unit, so I can give you an answer.

The device under test (DUT) is the same for the CT and the S&F columns in the diagram. Both columns are just two ways to look at the same measurement result. Your assumption was that the diagram shows a measurement of a DUT with an internal S&F architecture compared with measurements from another DUT with a cut-through architecture. That assumption is wrong.

The SmartBits device just has two definitions of latency.
The first definition is from the beginning of the ingress frame (entering the DUT) to the end of the egress frame (leaving the DUT).
The second definition is from the beginning of the ingress frame (entering the DUT) to the beginning of the egress frame (leaving the DUT).

The first definition is called CT by SmartBits and the second one S&F (don't ask me why). The difference between the two values can be calculated as follows: Difference = FrameLength / Bandwidth.

If you look at the values in the table, the difference is exactly the frame length (assuming a bandwidth of Fast Ethernet)."
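If that explanation holds, the gap between the two columns should equal each frame's serialization time. A quick sanity check of the FrameLength / Bandwidth formula (I'm assuming the test ran at Fast Ethernet's 100 Mbit/s; the frame sizes are the standard RFC 2544 set, not necessarily the ones in the table):

```python
# Serialization time per frame at Fast Ethernet speed: the expected gap
# between SmartBits' "CT" and "S&F" latency columns for the same DUT.

FAST_ETHERNET_BPS = 100_000_000

def serialization_time_us(frame_bytes):
    """Microseconds needed to clock one frame onto a 100 Mbit/s link."""
    return frame_bytes * 8 / FAST_ETHERNET_BPS * 1e6

for size in (64, 128, 256, 512, 1024, 1280, 1518):
    print(f"{size:5d} bytes -> {serialization_time_us(size):7.2f} us")
```

So a 64-byte frame should show a gap of 5.12 microseconds and a 1518-byte frame a gap of 121.44 microseconds between the two columns.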


I guess we owe this bit of confusion to the liberal (mis)use of terminology by industry experts. Sheesh.