
Are all 10/100 LAN cards the same speed?

Muse

Lifer
I bought a cheap 10/100 LAN card (8139) at a computer show a few years ago and got the idea that it was slow, so I bought a Linksys LNE100tx to replace it. The Linksys seems to have died today, so I put the cheapie back in. Do some of these cards work faster? Thanks for clarifications.
 
Yes, I hear the Intel 10/100 cards get really good throughput, but if it's on PCI it doesn't really matter anyway, except that the better cards usually use less CPU. If it's on PCIe or PCI-X, it might be worth it for throughput.
 
The cheapie NICs have to do scatter-gather operations in software, which requires more CPU cycles. Given a sufficiently advanced CPU, there will be no difference in throughput.
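To make the scatter-gather point concrete, here's a rough illustration of what "gather" means: a packet's data often lives in several separate buffers (headers plus payload), and a NIC without hardware scatter-gather forces the driver to copy them into one contiguous buffer first, burning CPU cycles. This is only a sketch of the idea using POSIX gather-write (`os.writev`, Unix-only) as the analogue, not actual driver code:

```python
# What "scatter-gather" means in practice: one logical write whose data
# lives in several separate buffers. A NIC with hardware scatter-gather
# DMAs each piece directly; without it, the driver must first copy
# everything into one contiguous buffer, which costs CPU cycles.
import os

header = b"HDR:"
payload = b"some packet payload"
trailer = b":CRC"

r, w = os.pipe()

# Software "gather": an extra pass over the data to build a contiguous copy.
contiguous = b"".join([header, payload, trailer])  # this copy is the CPU cost
os.write(w, contiguous)

# Gather I/O: the kernel walks the buffer list itself, no user-space copy.
os.writev(w, [header, payload, trailer])

os.close(w)
data = os.read(r, 1024)
assert data == contiguous * 2
print(len(data), "bytes written both ways")
```

The same idea applies one level down: a well-designed NIC takes the buffer list directly via DMA descriptors, so the driver never has to assemble a contiguous frame at all.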
 
Originally posted by: Odeen
The cheapie NICs have to do scatter-gather operations in software, which requires more CPU cycles. Given a sufficiently advanced CPU, there will be no difference in throughput.

My CPU isn't the latest or greatest in this system: AMD Athlon XP 1700+ CPU

Would that be powerful enough to compensate for a cheapie 10/100 PCI NIC without compromising the power of the system?
 
No, it won't compromise anything, given that today's computers have enough horsepower not to be crippled by the additional CPU utilization.

With gigabit cards there are substantial differences.
 
Muse, the RTL8139 has a brain damaged architecture that requires a lot more CPU overhead and delivers poor performance. They really suck, and you should avoid them.

The LNE100TX might use a RealTek in its current form. Linksys likes to switch chips without changing the part number.

A well architected NIC will reduce the CPU overhead and will deliver better network throughput than a poorly architected NIC. A faster CPU will not necessarily help a poorly architected NIC not suck.

I recently saw Intel Pro/1000MT NICs for $21 online, plus a few bucks shipping. That's a well performing and reasonably architected NIC, and pretty darn cheap. Given that, why people insist on using the $5 RealTek NICs is beyond me.
 
Originally posted by: cmetz
Muse, the RTL8139 has a brain damaged architecture that requires a lot more CPU overhead and delivers poor performance. They really suck, and you should avoid them.

The LNE100TX might use a RealTek in its current form. Linksys likes to switch chips without changing the part number.

A well architected NIC will reduce the CPU overhead and will deliver better network throughput than a poorly architected NIC. A faster CPU will not necessarily help a poorly architected NIC not suck.

I recently saw Intel Pro/1000MT NICs for $21 online, plus a few bucks shipping. That's a well performing and reasonably architected NIC, and pretty darn cheap. Given that, why people insist on using the $5 RealTek NICs is beyond me.
Thanks for the tips. I'm just using the RealTek because it's the only one I have at the moment. I did once replace it with the Linksys, but as noted the Linksys died.
 
I'm not sure about speeds, but a decent NIC > a cheap NIC. I've had enough messing around with craptastic stuff, and I try to stick with Intel Pro series gear (Gig and 100). They "just work" and you don't have to fight nearly as much. That's IMHO, of course.
 
Originally posted by: nweaver
I'm not sure about speeds, but a decent NIC > a cheap NIC. I've had enough messing around with craptastic stuff, and I try to stick with Intel Pro series gear (Gig and 100). They "just work" and you don't have to fight nearly as much. That's IMHO, of course.

Well, in my other PC I have this:

Kingston EtheRx PCI Fast Ethernet Adapter (DS21143, based on Intel 21143) PCI

Windows 2000 sees it as an Intel 21143. I've had the card since I bought it from Pacbell when I got set up with DSL around 5-6 years ago. How good is that card? I could put the el cheapo card in my other box - it's seldom used. Thanks.

PS My everyday box is Beauty in my sig. The seldom used box is Truth.
 
Originally posted by: Muse
Originally posted by: nweaver
I'm not sure about speeds, but a decent NIC > a cheap NIC. I've had enough messing around with craptastic stuff, and I try to stick with Intel Pro series gear (Gig and 100). They "just work" and you don't have to fight nearly as much. That's IMHO, of course.

Well, in my other PC I have this:

Kingston EtheRx PCI Fast Ethernet Adapter (DS21143, based on Intel 21143) PCI

Windows 2000 sees it as an Intel 21143. I've had the card since I bought it from Pacbell when I got set up with DSL around 5-6 years ago. How good is that card? I could put the el cheapo card in my other box - it's seldom used. Thanks.

PS My everyday box is Beauty in my sig. The seldom used box is Truth.

Intel 10/100 NICs were generally unmatched in quality. If the Kingston is using an Intel chipset, it's probably pretty damn good. 🙂
 
Muse, the Kingston uses a Digital Semiconductor "Tulip" chip; those were very good designs, better than the Intel. As part of the Compaq buyout of DEC, DS got divested and bought up by Intel. Intel then killed the DS network chips - which is why they went from popular to gone very quickly.

The Intel chips perform well in normal operation, but are kinda brain damaged about how you push configuration changes into the controller chip. In particular, if you're doing a lot of work with multicast where you'd join and leave groups often, that could be a problem. For most people, it's not a big deal.

The Tulip chips inspired a long list of buggy Taiwanese clones. For a while, most cheap NICs used one or another Tulip clone: Macronix, Davicom, ASIX, ADMTek, and Lite-On, for example. Your Linksys may be one of these clones. The clones all had various bugs that made them more or less faithful to the original. Some of their bugs made them little better than the RealTek. Some of them were basically as good as the real thing.

Then RealTek came along, and they were simply cheaper than everyone else. Plus, the margin was quickly falling out of 10/100 chips as gigabit became affordable to mere mortals. The Tulip clones gave way. It got to the point where anyone interested in making a quality part was making a 10/100/1000 part; if you only wanted 10/100, it was because you were super cheap, and the RealTek was the cheapest of the cheap.

Then RealTek slapped a gigabit PHY on their sucky architecture and released a gigabit chip. It sucks just as much as their 10/100 chip that can hardly do 100Mb/s. But it allows cheap motherboard manufacturers to claim to have gigabit on-board, and most customers don't look at exactly what the chip is - most buyers don't know what to look for. Similarly, the $10 gigabit PCI NICs are usually the RealTek chip. It's gigabit, right? It must be good 😉

If you're more technically inclined, I strongly suggest reading the *BSD / Linux driver source code for various parts when you're making purchase decisions, especially when you're buying many of them for your company. You'll find that free-software folks aren't shy about commenting on what's good and what's bad. And you'll notice that some NICs have clean and relatively straightforward drivers, and some have a *LOT* of bug workarounds, special cases, waits, etc.
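That source-reading heuristic can even be roughly automated: count how much erratum/workaround chatter a driver contains. This is only an illustrative sketch; real use would run it over an actual driver file (e.g. the Linux 8139too driver in a kernel source tree), and the snippet below uses a stand-in string so the example is self-contained:

```python
# Illustrative sketch of the heuristic above: gauge how "special-cased" a
# NIC driver is by counting workaround-related keywords in its source.
# The driver text here is a made-up stand-in, not real kernel code.
fake_driver_source = """
/* Workaround for a chip erratum: restart RX on ring overflow. */
static void rtl_rx_reset(void) { /* ... */ }
/* Another workaround: some chip revisions need a delay after reset. */
"""

KEYWORDS = ("workaround", "errata", "erratum", "quirk", "hack")
hits = {k: fake_driver_source.lower().count(k) for k in KEYWORDS}
print(hits)
```

A crude count is no substitute for actually reading the driver, but a file full of "workaround" and "erratum" comments next to a clean one for a competing chip tells you something quickly.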
 
Excellent read, cmetz, love the history. I don't dispute anything you say, but I would like to add the following counterpoint.

First, the impact of CPU utilization is going to be extremely different when you compare a production server vs. a home computer. In a production server, you probably need to optimize every cycle, and tossing them away on networking is going to hurt. In a home computer, you may also want to save CPU cycles, and I know some enthusiasts will have cases where it really matters, but in many cases people will not see a significant difference. So on a production server, it probably makes sense to study NICs and drivers in minute detail to get the best performance per CPU cycle and not worry about even hundreds of dollars, whereas in a consumer setup, generally the reverse will be true.

Second, CPU utilization is going to scale with load. If nothing's coming through the pipe, then the CPU will be mostly idle. So in a case such as internet downloads at typical consumer rates, CPUs are going to be mostly idle. Even when you stress a 100 Mb/s line, a modern CPU is probably not going to see much load from networking.

With gigabit, things change further, but I contend that for the typical consumer, it's still not a huge issue, whereas the reverse is true with production servers.

I have on hand 3 budget consumer gigabit NICs, which I've measured. (Dirt cheap with rebates is fairly accurate. No slight is intended to the $20 Intel -- if I'd seen one at the time, I'd probably have bought it.)

1. MachSpeed / SysKonnect SK-9521 using a Marvell chip.
2. TrendNet using a RealTek 8169_8000 family chip.
3. Built-in NVidia NForce 430 gigabit MAC with Marvell PHY.

Throughput measured using TTCP benchmark.

1. 89 MB/s
2. 98 MB/s
3. 113 MB/s

CPU utilization for above transfer measured via PerfMon.

1. 19%
2. 52%
3. 38%

4.5 GB file transfer to RAID timed in script.

1. 53 MB/s
2. 60 MB/s
3. 62 MB/s

4.5 GB file transfer to single IDE timed in script.

1. 44 MB/s
2. 44 MB/s
3. 45 MB/s

(There is some thrashing in this case, waiting for the drive / flushing the cache, I presume. These numbers were observed, but there was variability -- these are representative.)
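For anyone who wants to reproduce this kind of measurement, here's a minimal TTCP-style memory-to-memory throughput test, sketched over TCP loopback so it's self-contained. (The figures above came from the real TTCP tool across gigabit Ethernet, with CPU load read from PerfMon; a loopback run only exercises the TCP stack, not the NIC.)

```python
# Minimal TTCP-style throughput test over TCP loopback: blast a fixed
# number of bytes through a socket and time it. Sketch only; real NIC
# benchmarking needs two machines and the actual ttcp/iperf tools.
import socket
import threading
import time

CHUNK = b"x" * 65536
TOTAL_BYTES = 16 * 1024 * 1024  # keep the run short

def drain(listener, received):
    # Receiver side: accept one connection and count bytes until EOF.
    conn, _ = listener.accept()
    while True:
        data = conn.recv(65536)
        if not data:
            break
        received[0] += len(data)
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # any free port
listener.listen(1)
received = [0]
t = threading.Thread(target=drain, args=(listener, received))
t.start()

sender = socket.create_connection(listener.getsockname())
start = time.perf_counter()
sent = 0
while sent < TOTAL_BYTES:
    sender.sendall(CHUNK)
    sent += len(CHUNK)
sender.close()  # EOF tells the receiver we're done
t.join()
elapsed = time.perf_counter() - start
listener.close()

print(f"{received[0] / elapsed / 1e6:.0f} MB/s over loopback")
```

Pair a run like this with a CPU monitor (PerfMon on Windows, `top` elsewhere) on both ends and you get exactly the two columns measured above: throughput and utilization.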

So what do we conclude? From a CPU utilization / server perspective, the Marvell/SysKonnect card is best, the NVidia comes in second, and the RealTek/TrendNet last. (NVidia 100% worse, RealTek 174% worse than the Marvell.)
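The relative-overhead figures in that conclusion can be re-derived straight from the measured utilizations (Marvell 19%, NVidia 38%, RealTek 52%); the short calculation below also adds a throughput-per-CPU-percent view of the same data, which is my own framing rather than anything from the post:

```python
# Re-deriving the relative CPU-overhead figures from the measurements above.
cpu = {"Marvell": 19, "NVidia": 38, "RealTek": 52}    # % CPU during TTCP run
mbps = {"Marvell": 89, "NVidia": 113, "RealTek": 98}  # TTCP throughput, MB/s

baseline = cpu["Marvell"]  # the most CPU-efficient card as the reference
worse = {k: (v - baseline) / baseline * 100 for k, v in cpu.items()}
print(f"NVidia {worse['NVidia']:.0f}% worse, "
      f"RealTek {worse['RealTek']:.0f}% worse")  # 100% and 174%

# Another way to slice the same data: throughput delivered per CPU percent.
efficiency = {k: mbps[k] / cpu[k] for k in cpu}
print({k: round(v, 1) for k, v in efficiency.items()})
```

On the efficiency view the ranking matches the CPU-utilization conclusion: the Marvell delivers the most throughput per cycle spent, the RealTek the least.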

From a throughput perspective, NVidia comes first, RealTek second, and the Marvell last.

NVidia comes first from one "balanced" perspective -- highest throughput with middle CPU utilization, and also first when CPU utilization does not matter, as is the case often for a consumer, e.g. during simple file transfers.

Now when you consider the IDE transfer case, which is more typical, all of the network cards come out roughly the same in throughput, and the CPU utilization impact will also be lowered due to reduced throughput. In this case, the choice of cards doesn't matter at all, except for extra dimensions such as compatibility, and more importantly consideration about potential saturation of the PCI bus in some circumstances.

I don't present these numbers and conclusions as definitive. There are lots of other choices and variations out there, and we haven't seen what the Intels can do for example, and details in hardware, drivers, or software or test data might skew things differently.

However, in this case, for me, the conclusions are different from what you might advise from a server/commercial perspective. If I didn't have built-in gigabit, and cared mostly about large file transfers, I'd take the RealTek over the Marvell, despite its poor CPU utilization. I find it strange defending an apparently poor implementation, but that's where my logic takes me...

- with apologies to the OP if he finds this uninteresting.
 
Madwand1, be careful. Doing *anything* with the network (or the disk) inherently uses the CPU. Some, but not all, of the increased CPU load of the RealTek and NVidia NICs is just because they're moving the bits faster. It's not a linear relationship in real-world systems, so typically you need a lot of data points and a graph, and then look at who's noticeably above or below the curve.

The difference between the RealTek numbers and everyone else pretty well illustrates the RealTek's problem - it uses more than double the CPU of the Marvell, and well more than the NVidia. Benchmarks aside, I know the programming interface of the chip, and it just isn't as good as the others. The Yukon is a very good architecture that Marvell crippled to save money, but it still performs rather well. That NIC is quite sensitive to drivers. The NVidia is a reasonable architecture that they licensed (from Conexant?), and where it sits in the system also gives it a performance boost.

I have done fairly extensive benchmarks using commonly available gigabit NICs, but my numbers are now dated - OSs and drivers have evolved enough since the time I did them that I need to retest. I believe the rank from best to worst throughput was Intel, Alteon Tigon2, Broadcom Tigon3, Marvell Yukon, and RealTek. I have a Via to benchmark, too, but don't expect much from it. The Yukon has a new Linux driver that's supposed to deliver a whole lot better performance than the one I tested with.

In a true server environment, I don't worry about CPU usage that much, if the throughput is there. In many server applications, I have more CPU capacity than I know what to do with, and it's I/O that's restraining me (what's 10% extra CPU on a 2x dual core Opteron? I've got plenty of CPU, but I have to keep 'em fed). That isn't an excuse to waste cycles, though.
 
Thanks for the reply and further info on these NICs.

I know that servers can vary as much as the differences between commercial usage and home usage. My own experience includes servers which were very much CPU-bottlenecked, so I will assert that CPU utilization can be critical in some cases, but of course some servers will be I/O bound, at least some of the time, maybe often. In that case, perhaps inefficient CPU utilization for more throughput could win, but I'm not going to press this further -- a poor implementation could be out of line for a commercial installation for many reasons other than just throughput performance, and if you have the money, why not get a NIC that's decent in all respects.

My basic point was that a NIC which is clearly inferior in some server installations can be superior (RAID file transfer, not counting the NVidia) or indistinguishable (IDE file transfer) in a home installation.
 
Another strange wrinkle in my saga of the NICs:

I've been reading this thread with interest and decided I'd put my Kingston KNE100TX card (seen as an Intel 21143 by Windows 2000) back in my primary machine. I did this an hour ago. I put it in the slot adjacent to where my NIC (whichever one) was previously, because that slot is right next to my video card and I want that card to have maximum ventilation (the Kingston card stands tall). I put my lower-profile FireWire card where the NIC used to go, next to the video card. The Kingston NIC worked fine, as did the 8139 card when it was in the machine the last few days.

However, 20 minutes ago, Windows stopped detecting a connection. It said the card was working fine, but there was no cable connected. I power-cycled the modem and router; still no connection. Rebooted, same thing. Booted to my alternate Windows 2000 partition, also no connection detected. I turned on my other machine (I'm using it now), which is connected to the same router, and it's connected. This is very much like what happened with my Linksys NIC a few days ago, which started this whole thing. The only thing I can think of right now is a mainboard problem, but I'm no expert in these things. Does anyone have a take on this? Thanks!

Edit: I'm using Truth right now, the problem machine being Beauty - specs to both are linked in my sig.
 
Originally posted by: Madwand1
One possibility to try and eliminate is a cable or port on the router going bad.

Right. I thought the next step might be to swap the cables - plug the one going to Truth into Beauty and vice versa and see if the problem with Beauty goes away. If so, the cable or port on the router would be suspect. Thanks.
 
Originally posted by: Muse
Originally posted by: Madwand1
One possibility to try and eliminate is a cable or port on the router going bad.

Right. I thought the next step might be to swap the cables - plug the one going to Truth into Beauty and vice versa and see if the problem with Beauty goes away. If so, the cable or port on the router would be suspect. Thanks.

I shut down both machines, swapped cables as described above, and now both machines are connected. Maybe an intermittent cable problem. An intermittent router port problem is also conceivable. 😕

I can, of course, swap out the cable, however I only have one router. But I'm only using two of the connections on the router, so I could change that. Likely I put the connectors on the Cat5 cable myself, so that could be the problem.
 
Just to throw some more experience into this mess, I have to disagree with all the applause about Intel NICs.

Yes, Intel's higher-end or performance-class gigabit NICs are nice cards. Last I checked they were still stomping the competition on server benchmarks. Otherwise, I can't stand 10/100 Intel cards because they use a bajillion different chipsets and require a lawyer and a forensic expert to determine what driver to load. 3Com's much-overrated 905 series is the same way. The only ones that seemed to work are the B series.

I've had surprisingly little hassle with Netgear's FA series.

Otherwise, for 10/100 use I keep a stash of 3c980s, because even while the 905s are a pain in the neck, the higher-end 980 is a much more stable card with a consistent driver set for Windows.

I don't mind the integrated NVidia NICs, and prefer them if I need to go that route.

 
Originally posted by: Muse
I can, of course, swap out the cable, however I only have one router. But I'm only using two of the connections on the router, so I could change that. Likely I put the connectors on the Cat5 cable myself, so that could be the problem.

That will bite you every time (homemade cables). It can cause very unusual problems that just don't make any sense.

And in that respect, some NICs are better than others at being able to deal with an out-of-spec/homemade cable.
 
spikespiegal, I don't know what you're talking about. On every OS I've seen, the Intel 10/100 PCI NICs all use the same driver. The Intel 10/100/1000 NICs all use the same (but different from the 10/100) driver. Under *BSD, it's fxp (10/100) or em (10/100/1000). Under Linux, it's eepro100 (10/100; there's another driver, but don't use it!) or e1000 (10/100/1000). Under Windows, well, I guess it's the same driver, that's what Intel advertises, ask a Windows person 😉

Intel has gone to good lengths to make their chips rather uniform, so one driver can support them all. They are probably the single best vendor in the marketplace about NOT making you use a bunch of different drivers.

I simply don't know where or how your experience is so different. Now, their old ISA NICs were a whole different story. But all around those were the bad old days.

3Com is one of the naughty vendors when it comes to chip revisions that require new drivers every time. Netgear too, but at least they have the excuse that they're changing Taiwanese vendors whenever they can save a penny. Oh, you have the FA311TX version 3? That's a different driver! Blech. 3Com made their own chips with a few exceptions (the Tigon based gigabit ones).
 
Before the question comes up... my personal favorite is the Tigon 2. They're discontinued and replaced with the Tigon 3, which are improved in some ways and not as good in others. I only push the Intels because they're readily available for cheap. I mean, come on, $21 for a good 10/100/1000 PCI NIC - it's just not worth it to buy a cheaper NIC. Tigon3 NICs are harder to come by and more expensive retail, they tend more to be in OEM applications (on-board NICs in servers).
 
Where do the Broadcom NICs stand in this? I suspect crap, but curious as all of our Dells have them (except for mine, I insisted on an Intel chip)
 