Is any amount of Packet Loss on a LAN Normal?

mud

Member
Oct 24, 2000
49
0
0
Our Access 2000 MDB back-end file on a File Server is getting corrupted on a daily basis after a client gets a "Network Error".

So, I thought I would test the network the following way: while copying a large file over the network to and from my file server, I ran a continuous ping and found a small amount of packet loss during the transfers.

Is packet loss under these circumstances expected and acceptable? Would it cause Access database corruption?

Oh yeah, I would like to thank you guys in advance for taking the time to even read this post! All comments and suggestions are welcome. :)

Hardware: I chose two clients to test with the file server.
1. File Server - NT4, Linksys Etherfast 100Mb card
2. Client A - Win2k, Linksys Etherfast 100Mb card
3. Client B - Win98se, 3Com Etherlink 3 card, 10Mb

Client A (w2k) is connected to the File Server like so
Client A---100Mb Dlink 16port Switch---10 Mb Hub--10Mb Hub---100Mb Linksys 5port switch--- File Server.
Client B (win98) is connected through 2 cascaded Linksys 100Mb 5port switches, like so:
client B ---
All systems have TCPIP, IPX, and NETBeui installed.

Method:
From each of the two clients I was testing from, I copied a 640MB zip file TO the File Server, and a second time FROM the file server separately.
During all file transfers, I was continuously pinging the File Server from each client machine at all times.
Only one Client was copying to/from the File Server at a time.

Results:
Both machines showed packet loss whether it was the one copying the file or not.
The amount of packet loss was variable, ranging from as few as 1 or 2 lost pings to as many as 12 during the file transfers.
When there was no network activity or file copying occurring, pinging was fine and there was zero packet loss.
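To put a number on results like these, the loss can be expressed as a percentage of pings sent. A minimal Python sketch; the counts below are hypothetical stand-ins for what ping reports at the end of a run:

```python
def loss_percent(sent: int, lost: int) -> float:
    """Packet loss as a percentage of pings sent."""
    if sent <= 0:
        raise ValueError("sent must be positive")
    return 100.0 * lost / sent

# e.g. a long copy during which 12 of 600 continuous pings went unanswered:
print(f"{loss_percent(600, 12):.1f}% loss")  # prints "2.0% loss"
```

Even a couple of percent on a quiet switched LAN is worth chasing down.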

Conclusion and suggestions:
fill in here. :)
 

mud

Member
Oct 24, 2000
49
0
0
What's a really dependable and cheap way to test the cable? Replace it? How do I know the one I replaced it with is any good to begin with? :)
 

BoberFett

Lifer
Oct 9, 1999
37,562
9
81
Probably the cable. Just find a known good cable, swap it out, and see if you still have the problem.

Without spending money on a CAT5 tester, trial and error works.

Edit: A friend of mine had made some cables and was getting dropped packets on his LAN. It turned out that although the wire layout on each end matched, he hadn't kept the twisted pairs together, and the dropped packets were the result of crosstalk. I told him to make the cable the correct way and the problem vanished.
 

mud

Member
Oct 24, 2000
49
0
0
Thanks for all the tips on wiring. I'm gonna go change the cables with some properly made home grown ones. :)

How much would it be for that cat5 tester equipment and would it be worth it?

*****
Also, it's still not clear to me. When copying such a huge file to the server, is it normal to see a few dropped packets now and then? Or is any loss unacceptable when you're on a local switched LAN at 100Mbps?
****

btw, what is the definition of crosstalk?

THANKS! :)
 

R0b0tN1k

Senior member
Jun 14, 2000
308
0
0
Gah! Don't use homemade cables! You're almost guaranteed to have problems with those. For server use, definitely use some certified pre-made cables.
 

BreakApart

Golden Member
Nov 15, 2000
1,313
0
0
Few questions:

-How many clients are on this network? (just curious)
-What's the total run distance between Client A and the Server? (just curious; with all those repeaters it looks like you've got about 400+ meters?) Are these computers in the same city? lol
-Do you actually "need" all (3) of those protocols? (Multiple protocols multiply the number of packets being sent; in your case, for every (1) needed packet it has to send (3) packets, basically increasing your likelihood of a collision by 3X. In long-distance communications it is even MORE important to decrease the number of packets sent, which is why we use only (1) protocol in WAN situations.)
-Pinging while you were sending packets may have actually caused your packet loss: on top of the 3X packets already being sent, you added a 4th stream, so you actually simulated how underpowered your network is in its current config.
-The 10Mb hubs are choking your network right in the middle; there's no sense having 10/100 switches if the choke point runs at 10Mb half-duplex. (So you have a 400+ meter signal traveling at half-duplex, with 3X as much traffic as needed, multiplied by X number of clients. The problem is easy to spot.)


My suggestion:
-Reduce the number of protocols you are running down to TCP/IP only, if possible (it's more universal in nature).
-The server needs a higher-quality network card that is certified for 100Mb Full Duplex operation (3Com is the way to go). (This is not required, but it won't hurt.)
-Either remove/replace the 10Mb hubs, or move them closer to the clients that have 10Mb network cards (no sense choking the clients that have 100Mb full-duplex cards). Better to have them at the slow clients than choking the center of a fast network.

Hope this helps...
Long day... time to go home...

If you know how to make a cable, then making your own is the way to go for sure... Don't let anyone fool you, "certified" cables go bad also, and they are generally made from cheap materials... Worst case, you have to redo your cables if they go bad, which is better than buying all new ones just to repair the ends... That's just silly...
 

LordOfAll

Senior member
Nov 24, 1999
838
0
0
Well, using all those hubs and switches, you will violate the 5-4-3 rule transferring from one client to another. I would highly recommend you try to minimize the number of switches/hubs. Those two 10Mbit hubs would be a good place to start.
 

LordOfAll

Senior member
Nov 24, 1999
838
0
0
Also, as a temp solution till you get it working right, you can lock everything downstream of the 10Mbit hubs down to 10Mbit half duplex if your clients mainly communicate with only the server. You aren't getting any performance out of the 100Mbit runs anyway.
 

mud

Member
Oct 24, 2000
49
0
0
Wow. Quick question: if I have 3 protocols bound to my Linksys adapter, am I guaranteed that it's sending out 1 packet of each type, totalling 3 packets? Does it matter whether the other computer I'm connected to has 1 or all three protocols bound to its adapter?

For instance, if I transfer a 100MB file with 1 protocol bound, I'm sending out 100MB of packets (excluding packet overhead).

If I have 3 protocols bound, is it sending out 300MB of packets?

100 TCP/IP + 100 IPX + 100 NetBEUI = 300MB?!

Is this regardless of whether the other computer has all 3 protocols bound or not?

When I get more time, I'll give you more details about our screwy setup here. Gotta run now, sorry.

but you guys f'n rule!!! Thanks for the great tips!!! :)
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
About the multiple protocol thing.

Having three protocols will not cause 3 frames to be sent (one for each protocol). Once the session layer is established via name resolution and then a NetBIOS session start, all communication between these two machines uses that protocol. The protocol priority comes from your binding order.

With that said, the year is almost 2001 --- RUN AN IP-ONLY NETWORK.
-----------
Agree with the other guys, those two hubs are causing you severe collisions. That will happen when you have hubs between switches. Try to isolate the network down to just one or two switches and run the test again.

Fluke has a decent cheap tester called a one touch. Pretty nice for the price.

spidey
 

BreakApart

Golden Member
Nov 15, 2000
1,313
0
0
Spidey, I am afraid you are mistaken... (though I am not saying you are wrong; in many cases what you describe will happen)

(figured you wanted some more proof so i gathered this from the microsoft/technet web site)

"Choosing a protocol.
Windows 95 can support multiple network protocols, and can share a protocol among the network providers that are installed. You might choose more than one protocol to ensure communication compatibility with all systems in the enterprise. However, choosing multiple protocols can cause more network traffic, more memory used on the local workstations, and more network delays. You probably want to choose a single protocol wherever possible. The following briefly presents some issues for each Windows 95 protocol."

(Then it goes on to detail the protocols; you're welcome to read the entire section here --> Network Technical Discussion)


(The following describes how the binding order works. Basically it says the highest one has priority; however, it does NOT turn off the other protocols. They are simply disregarded if they are not needed--wasted packets.)

"One common method for setting up a network is to use NetBEUI plus a protocol such as TCP/IP on each computer that needs to access computers across a router. If you set NetBEUI as the default protocol, Windows 95 uses NetBEUI for communication within the LAN segment and uses TCP/IP for communication across routers to other parts of the LAN."

MUD---The 1-to-3 ratio I used doesn't happen the entire time the network is in use (unless something is broken). However, any time your computer sends the initial few packets, they ARE sent at the 1-to-3 ratio I described, and any time an error or delay occurs this will happen again and again to re-establish the connection. So I doubt 300MB would be sent; in your network I would estimate 15-20% more data was sent than needed, easily. Combine all this with those hubs, which, as everyone has agreed, "need to go".

MUD--Please post back your results during your network reorganization. There are sure to be others that would like to see your progress...
 

shadow

Golden Member
Oct 13, 1999
1,503
0
0
Client A---100Mb Dlink 16port Switch---10 Mb Hub--10Mb Hub---100Mb Linksys 5port switch--- File Server

that's four networking devices.

802.3 specifies a maximum of three (don't know why exactly); that might be another factor leading to lost packets (besides those damn hubs).
 

BreakApart

Golden Member
Nov 15, 2000
1,313
0
0
Actually...
(taken from the cisco.com site...)

Max segment length(UTP) = 100meters
10baseT (# of hops/repeaters) = 4
100baseT (# of hops/repeaters) = 2 <-- this gets tricky due to the max network diameter of 205 meters (recommended).

Table B-4
(hope this formats correctly --- it didn't, so I removed it)
Use the link above to find this table---> Table B-4: 10BaseT and 100BaseTX Guidelines


(something you may find interesting)
Ethernet

100BaseT Operation

100BaseT and 10BaseT use the same IEEE 802.3 MAC access and collision detection methods, and they also have the same frame format and length requirements. The main difference between 100BaseT and 10BaseT (other than the obvious speed differential) is the network diameter. The 100BaseT maximum network diameter is 205 meters, which is approximately 10 times less than 10-Mbps Ethernet.

Reducing the 100BaseT network diameter is necessary because 100BaseT uses the same collision-detection mechanism as 10BaseT. With 10BaseT, distance limitations are defined so that a station knows while transmitting the smallest legal frame size (64 bytes) that a collision has taken place with another sending station that is located at the farthest point of the domain.

To achieve the increased throughput of 100BaseT, the size of the collision domain had to shrink. This is because the propagation speed of the medium has not changed, so a station transmitting 10 times faster must have a maximum distance that is 10 times less. As a result, any station knows within the first 64 bytes whether
a collision has occurred with any other station.

Basically your network on the clientA to server segment was too large to operate at 100mb reliably, even if those hubs were not there. This is not to say it won't operate at 100mb, the key word is reliably...
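The arithmetic behind that 205-meter ceiling can be sketched from the numbers quoted above. A back-of-the-envelope Python sketch; the propagation speed is my assumption (roughly 2e8 m/s in twisted pair), not something from the quote:

```python
MIN_FRAME_BITS = 64 * 8   # 512 bits, the smallest legal Ethernet frame
BIT_RATE = 100e6          # 100BaseT, bits per second
PROP_SPEED = 2.0e8        # m/s in copper (assumed, roughly 0.66c)

# A sender must still be transmitting when a collision signal returns,
# so the round trip across the collision domain must fit in one slot time:
slot_time = MIN_FRAME_BITS / BIT_RATE        # 5.12 microseconds
cable_budget = slot_time * PROP_SPEED / 2    # max diameter, cable delay only

print(f"slot time: {slot_time * 1e6:.2f} us, cable-only diameter: {cable_budget:.0f} m")
```

The cable-only figure comes out around 512 m; the standard's 205 m diameter is far tighter because repeater and NIC latency consume most of that round-trip budget.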

Some people find this stuff boring... Not me :)


 

mud

Member
Oct 24, 2000
49
0
0
Wow! I'm gushing from all these great replies and warm community support! Thanks! :eek:

I realized I left out the configuration of Client B. MY APOLOGIES TO ALL!!!
It looks like this...
Client B <--> Switch_1 <--> *Switch_2* <--> Server (cascaded)
Client A <--> Switch_3 <--> hub_1 <---> hub_2 <--> *Switch_2* <--> Server

notice that Switch_1 and hub_2 are connected to the same *Switch_2*
All switches are 100Mbps (1 and 2 are Linksys, 3 is Dlink; 1 and 2 are cascaded)
The hubs are 10Mbps
I was still receiving some lost packets from Client B while copying with Client A or B to the server.

I guess I have a little explaining to do with Client A. (And no, they aren't in different cities.) ;)

I work in a warehouse; Client A is located in the office and Client B is in Production. There is a cat5 run of at least 120 meters (I can't follow the wires completely to figure out the true length, but it is well past the 100m limit). We have at least 20 clients throughout the building.

When connecting the office Dlink switch to the production Linksys switch, I get nothing or an intermittent light. Only when I put 2 hubs between the switches will they see each other; one hub doesn't do it. :(

Also, I don't see how much of a difference Client A can make to Client B, since B is cascaded directly through 2 fast switches to the file server. Unless Client A is somehow tying up the File Server and causing it to drop packets from Client B (which I hope can't happen :) ).

Since I now realize, thanks to all your helpful comments, that a combination of things may affect network stability, I'm not certain I'll be able to identify precisely what was wrong. It may be 1 thing, or it may be a few things working in conjunction. I'm just going to go ahead and implement all these ideas as best I can in the meantime (e.g.: change wires, swap NICs, reconfigure network protocols, etc.). You guys have given me some solid direction to follow.

Oh yeah, that info regarding multiple protocols is great, guys! Interesting stuff. Now I have to figure out what to do... we've got a Novell file server and some HP JetDirect print servers running on IPX, but I guess I can uninstall IPX from everyone who doesn't use the Novell file server, and I guess I can configure TCP/IP on the printers and reconfigure all the clients... *groan* ;)

But once again, it's still not clear to me: is any packet loss unacceptable under such a copy-and-ping process, even if it's just a few packets? :confused:

Are you guys able to cause any lost packets while copying files back and forth? How about simultaneously?

Uhm, what's the 5-4-3 rule? :)

 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
Breakapart - I agree 100%. I've analyzed WAY TOO MANY captures of Windows with multiple protocols; it is not pretty. How easily poor little Windows boxes get confused.

About the 5-4-3 rule: that only applies to a single collision domain. A switch is a bridge and therefore ends the collision domain, so I don't think this is his problem. Also, his network diameter should be fine because of those switches.

Check distances, cables, and switch/hub settings on ports. Or better yet, quit stringing that stuff together like that; it only makes things complicated and runs you into problems. Make sure the switch ports are set to 10/half on the hub connections.

To answer the original question - is any amount of packet loss on a LAN normal? NO!!!!!!!!
A "properly running LAN" will have no dropped frames. Sure, we could go into collisions and their effect on throughput, but in this day and age you shouldn't have any collisions anywhere on your networks. Shared media died about two years ago.

I know I'm being harsh here, just trying to relay good design principles.
cheers!


BREAKAPART - MARK MY E-MAIL...I LOVE THIS SH!T TOO
 

BreakApart

Golden Member
Nov 15, 2000
1,313
0
0
Found a link you should review while planning your network fixes... (it explains what spidey mentioned about collision domains)

Network Planning


The most important thing in designing/redesigning your network is PLANNING. Take your time gathering all the required information (distances, protocols, requirements) so you are not simply patching the problems again. :)

(I'm trying to avoid reading this UNIX book and the Adobe Acrobat bible - can you tell I'm bored?) :)
 

mud

Member
Oct 24, 2000
49
0
0
A friend of mine emailed me with some advice. Thought that some of you might find it useful.

-------------------------------------------------------------------
> Both machines showed packet loss whether it was the one copying the file or not.
> The amount of packet loss was variable, ranging from as little as 1 or 2 or as much as 12 during the file transfers.

Packet loss can occur when you have:
faulty equipment (cables, switches, hubs)
overloaded network (which we will assume not)
too much electrical interference (again assume not, unless you know of wires passing near a generator of sorts...)
improper network specs (cables too long, not enough signal boosts)
bad operating system network stacks

I think a lot of people pointed to bad equipment as the culprit, which it usually is. This is tough to test since you have so much equipment, but I guess you have to try isolating each individual piece and testing the load across it. I would test in the order of: cables, network cards, hubs, switches... as this goes from the most troublesome item to the least.

But also, make sure your network cables aren't too long...


> If I have 3 protocols bound, is it sending out 300MBs of packets?
> 100 TCPIP + 100 IPX + 100 NetBeui = 300MBs?!

When a server wants to broadcast a packet, it will send out a request to the machine it wants to send it to. The request is sent in ALL protocols it knows. Hence, if you have TCP/IP, IPX, and NetBEUI, three different request packets are sent. (Request packets are really small.) The target machine will receive one of the packets (the protocol depending on the binding order of the network card) and send an answer in the same protocol. From there, the two machines do handshakes in the agreed-upon protocol and share things like MAC address, IP address (if TCP/IP), etc. Therefore, after the initial request packets, the rest of the protocols are not really used. Since request packets are very small, this does not overload your network very much unless you have many machines on your network, all with multiple protocols, and all making a lot of requests on the network... which would be a big load even if you had one protocol...

PS... this is really simplified from how it actually works, so don't trash me if it's not EXACTLY right...

Therefore, having multiple protocols on your network is not THAT detrimental. If you really feel the need to remove protocols, I would remove the IPX. Keep the NetBEUI, since Windows machines talk much faster via NetBEUI.


> Is packet loss under these circumstances expected and acceptable? Would it
cause Access database corruption?

All this assumes that the packet loss is causing the corruption. Packet loss just means that a data packet was not received; the target machine SHOULD send another request to resend the packet. So, packet loss does not necessarily mean data loss, although it does mean that the network will be slower because the sender has to resend the packet.
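To make that cost concrete, here is a simplified editorial sketch (not from the email): if every send is lost independently with some probability, the expected number of sends per delivered packet follows a geometric distribution.

```python
def transmissions_per_packet(loss_rate: float) -> float:
    """Expected sends per delivered packet when each attempt is lost
    independently with probability loss_rate (geometric distribution)."""
    if not 0.0 <= loss_rate < 1.0:
        raise ValueError("loss_rate must be in [0, 1)")
    return 1.0 / (1.0 - loss_rate)

# e.g. 12 lost out of 600 pings is about 2% loss:
print(f"{transmissions_per_packet(0.02):.3f} sends per packet")  # prints "1.020 sends per packet"
```

So a few percent loss wastes only a few percent of the wire, but it is still a symptom worth fixing.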

Packet loss could be an indicator of faulty equipment, and faulty equipment could mean packet corruption... Usually network cards contain error correction, and if small corruptions occur, the error correction is able to handle it. But if the corruption gets really bad, the network card can become bogged down trying to fix the errors; packets become backlogged, timeouts occur, and packets get lost... hence packet loss.


Finally,
Going straight to the problem of Access database corruption: Access databases can become corrupted in the following ways:
multiple transactions occurring on the same data
closing the database while transactions are occurring
closing the database during a save or file access

Check to make sure your application is not prematurely closing the Access
database before all data can be written to it.


> btw, what is the definition of crosstalk?

I don't think anyone answered this one. Crosstalk is what happens when you get electromagnetic interference in a cable. The interference can be external (cables running near an electric generator) or internal (magnetic interference from the other wires in the same cable). In Ethernet cables, crosstalk is minimized by twisting the wires in the cable; the more twists per foot, the better the protection from crosstalk. Hence the designation Category 5, which indicates X number of twists per foot (I forget the number). A lower category would have fewer twists per foot, while a higher category would have more.


Good Luck!

Ray
 

mud

Member
Oct 24, 2000
49
0
0
OK. This is a follow up to my original post.

The network is now stable, and the Access file on the server is not getting corrupted on a daily basis anymore. It's been about 2 weeks since our last crash. However, I haven't tested for lost packets like I mentioned previously.

There were two things done at around the same time that seem to have fixed it, although I can't prove which one was the problem (nor do I have the time to rigorously investigate it).

1. The NT file server had a borked installation of Office 2000. Office 2000 would not install, nor would it remove itself completely, giving cryptic errors. Very weird stuff. Someone here eventually downloaded a Microsoft exe that repaired the installation, and everything ran smoothly.

2. The switch that was connected to the server was placed very close to the monitor, so I moved the monitor away.


My guess is that the problem came from the monitor being too close to the cables and switch. One day recently, someone had unknowingly placed the server monitor on top of the cables, and we experienced an Access corruption later that afternoon. When I moved the monitor back, the database corruptions went away.

So, everything is going smoothly now. If I ever get time to kill, I'll try the packet loss test again... anywayz...

I learned a lot from you guys!!! And I am eternally grateful!!!!
THANK YOU ALL!!!! YOU GUYS RULE!!!!!!!!!!!!!!!!!!!!!!!!!!!