NIC supporting Jumbo Frames 9000 MTU

trueimage

Senior member
Nov 14, 2000
971
0
0
I'm looking to pick up 3 identical NICs and use 2 in Win XP and eventually Vista boxes, and one on my NAS (unRAID) in Linux.

My current setup involves a Cable Modem into my WRT150N router running DD-WRT, and then Cat 6 running out to a Gig-E switch (Netgear GS108 v2) which connects via Cat 6 to the 3 PCs. The router also connects directly to a vonage box, PAP2 I think.

Now, the GS108 switch supports Jumbo Frames with a 9000 MTU AFAIK.

This is a list of NICs known to work well with the unRAID software:

# Intel PRO/1000 Gigabit Ethernet (At least one report of this being broken)
# Marvell Yukon Gigabit Ethernet
# Netgear GA311 Gigabit Ethernet
# Realtek Gbit - RTL8169S-32 chip
# D-Link DGE-528T Gigabit Ethernet

I was leaning towards the Netgear GA311NA, because I like Netgear products and I assumed they would work well together easily.

However, I can't find anything on Netgear's website, Newegg, NCIX, etc. that says it supports Jumbo Frames.

This really is a must, as I'm serving large video files and copying stuff around a lot (4.5GB - 15GB files).

I want to get the best performance out of my network so that I'm not swearing at it (as I am right now, it is too slow.)

So, does the GA311 support Jumbo Frames with an MTU of 9000?

If so, is it a decent NIC?

If not what is my best choice?

Thanks!
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
You typically have to look at the underlying chipset, not the brand name of inexpensive NICs. In this case, the Netgear GA311 appears to be based on a Realtek PCI chipset, judging by the following images:

http://www.newegg.com/Product/...x?Item=N82E16833122133

Realtek PCI is a particularly bad choice for gigabit NICs IMO, having high CPU utilization and max jumbo frame size around 7K.

Marvell PCI will also support 9K jumbo frames, but this is an old chipset with so-so performance.

The D-Link appears to be Realtek-based, but this information is obscured and there are different revisions of this NIC. Some other D-Links are Marvell-based.

The Netgear likely also has different hardware revisions. Sometimes these use different chipsets.

Intel is generally regarded as being best and generally supports 9K jumbo frames (and higher).

Further, if you can afford to, getting off PCI onto a PCIe NIC is generally recommended for two reasons: (1) avoiding PCI bus and overhead issues, and (2) benefiting from newer designs/chips (not a strict guarantee, since sometimes things get cheaper/worse, but Marvell PCIe is better than Marvell PCI, for example).

Getting off PCI is most beneficial when you also have other high bandwidth devices on the same PCI bus (e.g. a storage controller).
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,529
416
126
I doubt that any of the brand names check all their product lines together in order to provide overall brand compatibility.

As Madwand1 said, it is all about the chipset.

In reality you can find components sold by the same brand under the same model name where the chipset was changed at some point between versions.

That shows that brand and model numbers are not very relevant, since it is actually a different product. I.e. forget about overall compatibility; it can happen that Brand 1 Model 2 v1 is not the same as Brand 1 Model 2 v2.

That said, if you are using the NICs on a small peer-to-peer Network it probably would not matter which one you use.
 

robmurphy

Senior member
Feb 16, 2007
376
0
0
I had a D-Link gigabit PCI NIC for a short time. I borrowed it from a client. When I compared it to the PCI Intel Pro GT I found that it only gave about 2/3 of the throughput.

This was on an S939-based HP machine using an MSI-7184 motherboard. The Intel card would get to about 330 Mb/s, the D-Link about 200 Mb/s. These figures were obtained using iperf. The MSI motherboards do have a problem with PCI bandwidth so you may get better results. I know Madwand has seen much better than I get.

I think you will find that the D-Link uses the Realtek chipset so will have a limit on jumbo frames of about 7k.

I myself would stick to the Intel. I bought mine in OEM form, i.e. no driver disk, box, etc. Like this they cost less than the D-Link, Netgear, or other named brands. The drivers for the Intel card were a quick download from Intel's website.

Rob
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,529
416
126
Do you really believe that on an end-user peer-to-peer network with client OSes the user would see a difference between MTU=7000 and MTU=9000?
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: robmurphy
This was on an S939-based HP machine using an MSI-7184 motherboard. The Intel card would get to about 330 Mb/s, the D-Link about 200 Mb/s. These figures were obtained using iperf. The MSI motherboards do have a problem with PCI bandwidth so you may get better results. I know Madwand has seen much better than I get.

Rob's measurements make me sad :(.. FYI, it's not MSI per se, and perhaps related to the old ATI chipsets being used. FWIW, I have an older nForce3/939 MSI motherboard which performs well.

I agree that in many cases, the performance differences between Realtek and Intel NICs will be negligible in practice to the user (*), and think there are at least two valid approaches here:

1. Get the cheapest known-to-work NIC which will do the job. This generally means off-brand Realtek-based NICs.

2. Get a NIC which is widely recommended, generally known to perform well, has a good feature set and drivers, etc., but is slightly pricey. I.e. an Intel NIC.

(*) These performance differences are easily demonstrated if you know what you're doing and set out to optimize performance, but the question is -- for a random semi-typical machine, will the difference be significant? Given average drive state/speed, tuning issues / OS limits, PCI bus limits, etc., odds are that most NICs will give around 30 MB/s actual file transfer performance under Windows, so spending a lot of money on the networking gear alone won't help much.
 

robmurphy

Senior member
Feb 16, 2007
376
0
0
Hi Trueimage

What network connections are you using on the machines at present?

Rob


 

trueimage

Senior member
Nov 14, 2000
971
0
0
Right now I'm using the onboard NICs.

Two are on Asus P5B mobos, and are Realtek 8111B / 8168. The latest driver supports 7k MTU.
The server is running on an old MSI mATX board I had laying around (part of the appeal of this setup). I forget the chipset, but this only supports 1500 MTU, nothing higher.

I want to use MTU 9000 because I thought you had to use the same spec for every piece of hardware in the setup. I was under the impression my switch is MTU 9000; I don't know if I can set/change it or if it really is automatic, up to 9000.
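For reference, I believe I can at least check and try to change the MTU on the Linux box with something like this (just a sketch, assuming the interface is eth0; the onboard chipset still limits what will actually stick):

ifconfig eth0 | grep -i mtu    # show the interface's current MTU
ifconfig eth0 mtu 7000         # try to raise it; the driver should reject values the chipset can't handle

On the Windows side it seems to be set per NIC in the driver's Advanced properties, if the driver exposes a Jumbo Frame option at all.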

What I do know, is that the performance I get now needs improvement.

The server, running linux, writes to a drive and a parity drive on write.

I mount a windows share like this: mount -t smbfs //machine/share /mnt/foldername -o username=username

Then I copy a file with dd if=/mnt/foldername/file.ext of=/mnt/disk1/folder

The results I'm getting are 13-15 MB/s. Reads are probably 2x but I haven't tested it. At this rate a ripped DVD9 image takes over 600 seconds. I have 100s of DVDs. This is the problem. Also I may start the same practice with HD DVDs and/or Blu-ray once I get my dual drive. Again, this is an HTPC ripping to local disk and then copying OR ripping directly to the mounted server share.

I'd like to be up around 35-40MB/s, and I think that is attainable.

I was looking at the Intel Pro/1000 PCIe 1x version, and I do have at least one 1x slot open on all three motherboards, so I may go that route. Another thing I have to do is swap my parity drive from IDE to SATA, but I had one in there until it failed and I had similar performance, so it is the network. The swap will improve my top end, but the parity drive is not what is limiting me to the current 13-15 MB/s.
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,529
416
126
Let's say that all the computers have gigabit cards and they are set to MTU 1500, using peer-to-peer with client OSes.

The result might be a transfer of 20 to 30 MB/sec.

Raising all the MTUs from 1500 to 9000 will not result in a boost to 40MB/sec.
 

trueimage

Senior member
Nov 14, 2000
971
0
0
What do you think an attainable goal is then?
I'm willing to replace almost every part in the setup. I'd rather spend a few more dollars to have it work right than to have it work crappily (I don't think that is a word) and have spent 75% of that.

My drives do a parity sync at 65 MB/s.
Doing a raw throughput test right now using dd if=blah of=/dev/null, I get 74 MB/s.
I have cat6 cabling that I didn't make, molded boot.
I have a GigE switch.

It seems that the NICs are the weakest link?

As a side note, I've found an interesting link that may or may not be fully complete or up to date, regarding the actual MTU of certain parts. http://darkwing.uoregon.edu/~joe/jumbo-clean-gear.html

Any feedback is greatly appreciated. I'm not very knowledgeable in this area (more than the general populace obviously), so I'm just trying to find a solution that works.

The PCIe 1x Intel Pro/1000 NICs I'm talking about are here: http://www.ncix.com/products/i...TBLK&manufacture=Intel
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Before buying new network hardware, you might try to find out how well the current stuff is performing, and try to tune that higher if needed. "35 MB/s" is within the reach of any gigabit NIC. (However, that's likely to be faster than what a DVD is capable of.)

E.g. using iperf version 1.7:

server: iperf -s
client: iperf -c server -l 64k -t 15 -i 3 -r

Those are the parameters I suggest for Windows machines. You might need to try some different ones for *nix.
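For the unRAID box, something along these lines should be a reasonable starting point, assuming iperf is installed there ("server" stands in for the server's hostname or IP; the -w option lets you experiment with the TCP window size):

server (unRAID box): iperf -s -w 256k
client (Windows): iperf -c server -l 64k -w 256k -t 15 -i 3 -r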

You could also try ftp as an alternative to SMB once you're OK with the performance of the underlying hardware.
 

robmurphy

Senior member
Feb 16, 2007
376
0
0
Hi Trueimage.

It may be that you are hitting limits of the onboard NICs, but I would not expect a gigabit PCI NIC based on the Realtek chipset to do any better.

One test that you could do is to use iperf between the 2 Windows machines. Try the test both ways, i.e. A -> B and B -> A. You can change the TCP window size and see what difference it makes. Try 64K, as far as I know that is the default TCP window size on XP for gigabit connections. You can then adjust the MTU size and see what difference it makes. You can also try various options on the cards, e.g. RX buffers, TX buffers, TCP checksum offload on/off, etc. This should give an idea of what the NICs and the switch are capable of. You may find some changes, like the number of buffers, can make quite a difference. Please note I'm basing this on the Intel NICs and I'm assuming the onboard NICs have similar options.

You can use an MTU lower than the switch supports without any problems; it's only when you go above it that it will cause problems.

If you are still getting poor performance between the 2 Windows machines then try connecting the machines directly using one of your patch leads. You do not need a crossover cable for gigabit. Personally I doubt the switch will make any difference, as I have a Netgear GS608 and the switch gave the same results as connecting the machines directly.

If the performance between the 2 Windows machines is good, then you really need to get iperf on the server machine. Another check to make is the TCP window size on the server machine. If that's only 8k it may explain some of the problems. This is where iperf is good as it allows you to try different window sizes. How you will get iperf for the server, and how you check the TCP window size there, I'm not sure.
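For what it's worth, on most Linux systems you can at least read the kernel's default and maximum TCP buffer sizes from /proc, something like this (assuming a 2.6-series kernel; the middle number is the default, in bytes):

cat /proc/sys/net/ipv4/tcp_rmem    # receive buffer: min, default, max
cat /proc/sys/net/ipv4/tcp_wmem    # send buffer: min, default, max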

If you are going to get separate NICs then, as Madwand has said, get PCIe and avoid the normal PCI slots.

Rob
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,529
416
126
Expanding on Madwand1's post above.

Before you even look at your network, run a DiskMark bench and make sure that the components that are supposed to read and write the files can work at the speed that you expect from the network.

I.e. if your hard drive or DVD cannot deal with reads and writes above 30MB or 40MB/sec, do not waste your time or money on the network.

The general tendency among the crowd in the USA is to run out and buy, and in the process badmouth brand A, or B, or both, or the whole world.

If an alien from another planet were to look at Planet Terra's technical forums, he would think that "WTF" is some kind of mysterious technical term unknown in his own super-advanced culture. :shocked: ;)

Everyone pretends that this phenomenon is technology. In reality it is a social-emotional trend and has very little to do with technology.

In most cases it is BS. I get a transfer of 30MB/sec from the Realtek onboard gigabit NICs. I have a few PCI gigabit NICs too, and besides some minute difference there is no reason to buy "this or that"; before you buy, take the preliminary step to really get the best of what you have.

That said, it is a different story when dealing with large business networks based on real server OSes and multiple WAN systems.

But the issue at hand here is a Peer-to-peer Home Network.
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
Yeah, something isn't right if you're getting those speeds. These days gig ethernet is faster than what consumer level hardware can push. So it's not the network.

If I were to take a guess in order...

1) Cabling, although you say you're using store bought cat6. iperf can help rule this out.
2) Possible slow disk-to-NIC path - likely. iperf can help rule this out.
3) Drivers/interrupt problems/bus - likely.
4) Poor SMB implementation/bugs - VERY likely; use a different protocol like FTP. You could have "ping-pong" going on where you're sending a lot of acks/requests but not actually moving data. (See the sketch after this list.)
5) TCP stack isn't correct - likely only if you've been messing with it, otherwise should be fine.
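As a rough way to take SMB out of the picture, you can push a raw TCP stream with netcat. Just a sketch - nc flags differ between versions, and "server" stands in for the server's hostname or IP:

on the server: nc -l -p 5001 > /dev/null
on the client: dd if=/dev/zero bs=1M count=1000 | nc server 5001

dd prints the transfer rate when it finishes (you may have to Ctrl-C the sender, or add -q 1, depending on the netcat version). If that number is near wire speed, the network and NICs are fine and the problem is SMB or the disks.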

It all boils down to your software/drivers I think. Don't know what "unRaid" is. The switch shouldn't care nor influence your throughput.

-edit-
A quick google search tells me what unRaid is. That's your problem.
 

trueimage

Senior member
Nov 14, 2000
971
0
0
Originally posted by: spidey07
A quick google search tells me what unRaid is. That's your problem.

I don't think that is fair. Other users are getting double or triple my speeds.

GigE at 1500 MTU should be pushing what, real world average?
 

Fardringle

Diamond Member
Oct 23, 2000
9,200
765
126
If you are using any IDE drives (particularly older, slow ones) in your unRaid setup, then Spidey's assessment is probably correct.

As others have suggested, use iperf to test your network performance without using the hard drives on your PCs or Linux box at all. If you get good speeds in iperf then the problem is almost certainly caused by the (lack of) speed of the drives in your NAS box. If your results are slow in iperf then you have a problem with your network configuration/hardware and you need to follow the troubleshooting steps that Spidey listed.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
I agree with drive testing, particularly write performance testing on the destination drive. I don't know the detailed characteristics of unRAID, but parity RAID can certainly have issues with write performance, and as unRAID borrows from RAID...

dd can be used for write performance testing, creating a file with random contents, to factor out the source file read and network transfer.
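A minimal sketch of such a test on the unRAID box, with the path and size as placeholders (using /dev/zero for convenience; a pre-generated file of random contents is more realistic if the source can keep up):

dd if=/dev/zero of=/mnt/disk1/ddtest.bin bs=1M count=2048   # write a ~2 GB test file straight to the array
sync
rm /mnt/disk1/ddtest.bin

Use a test file larger than the machine's RAM, otherwise the kernel's write cache will inflate the reported rate.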

Here's an example showing a hit from parity in unRAID:

http://www.smallnetbuilder.com.../performance_write.jpg

Standard caveats apply -- just because someone else's setup showed such performance characteristics doesn't mean that your different system will behave the same way, which is why we measure...