Gigabit Reviews & Benchmarks


gunrunnerjohn

Golden Member
Nov 2, 2002
1,360
0
0
That's what this entire thread is devoted to! :D We're trying to determine how to wring max performance out of gigabit links...
 

Kilrsat

Golden Member
Jul 16, 2001
1,072
0
0
A test changing the Receive/Transmit Descriptors @ 128, 256, and 512

Pulling a 4.12GB folder from Server 1 to Client 1 and Client 2 at the same time:

Both at 512 on all 3 machines:
Peak: 58.4MB/s
Avg: 38MB/s

Both at 256 (default) on all 3 machines:
Peak: 64.1MB/s
Avg: 48MB/s

Both at 128 on all 3 machines:
Peak: 60.7MB/s
Avg: 42MB/s

I did an informal test @ 2048 yesterday and the results were similar to 512 (I wasn't recording explicit numbers at that time).
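
For anyone who wants to reproduce this kind of peak/average number, here is a minimal Python sketch, assuming the test folder is reachable through a mapped or UNC share; the SRC/DST paths and the 1-second sampling interval are placeholder assumptions, not part of Kilrsat's setup. It copies the folder while a second thread samples the byte counter to report peak and overall average MB/s.

```python
# Minimal sketch of a peak/average transfer measurement over a share.
# SRC and DST are placeholders; the 1-second sampling interval is arbitrary.
import os
import threading
import time

SRC = r"\\server1\share\testfolder"   # hypothetical source share
DST = r"C:\temp\testfolder"           # local destination

copied = 0      # bytes copied so far, updated by the copy thread
done = False

def copy_tree():
    """Copy every file under SRC to DST, counting bytes as we go."""
    global copied, done
    for root, _, files in os.walk(SRC):
        outdir = os.path.join(DST, os.path.relpath(root, SRC))
        os.makedirs(outdir, exist_ok=True)
        for name in files:
            with open(os.path.join(root, name), "rb") as fin, \
                 open(os.path.join(outdir, name), "wb") as fout:
                while chunk := fin.read(1 << 20):   # 1 MB reads
                    fout.write(chunk)
                    copied += len(chunk)
    done = True

t = threading.Thread(target=copy_tree)
start = time.time()
t.start()

peak, last = 0.0, 0
while not done:
    time.sleep(1.0)
    rate = (copied - last) / 1e6      # MB transferred in the last second
    peak = max(peak, rate)
    last = copied

t.join()
elapsed = time.time() - start
print(f"Peak: {peak:.1f} MB/s   Avg: {copied / 1e6 / elapsed:.1f} MB/s")
```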
 

gunrunnerjohn

Golden Member
Nov 2, 2002
1,360
0
0
I should be getting my MSDN subscription in a couple of weeks, I'll have several server versions to tinker with at that point. I think I'll bring up Server 2003 and see if it makes a significant difference in the benchmarks. :)
 

MikeDub83

Member
Apr 6, 2003
96
0
0
Originally posted by: Kilrsat
A test changing the Receive/Transmit Descriptors @ 128, 256, and 512

Pulling a 4.12GB folder from Server 1 to Client 1 and Client 2 at the same time:

Both at 512 on all 3 machines:
Peak: 58.4MB/s
Avg: 38MB/s

Both at 256 (default) on all 3 machines:
Peak: 64.1MB/s
Avg: 48MB/s

Both at 128 on all 3 machines:
Peak: 60.7MB/s
Avg: 42MB/s

I did an informal test @ 2048 yesterday and the results were similar to 512 (I wasn't recording explicit numbers at that time).


Can you tell us a little about the hard disk setup on the client and server?
 

Kilrsat

Golden Member
Jul 16, 2001
1,072
0
0
Originally posted by: MikeDub83
Originally posted by: Kilrsat
A test changing the Receive/Transmit Descriptors @ 128, 256, and 512

Pulling a 4.12GB folder from Server 1 to Client 1 and Client 2 at the same time:

Both at 512 on all 3 machines:
Peak: 58.4MB/s
Avg: 38MB/s

Both at 256 (default) on all 3 machines:
Peak: 64.1MB/s
Avg: 48MB/s

Both at 128 on all 3 machines:
Peak: 60.7MB/s
Avg: 42MB/s

I did an informal test @ 2048 yesterday and the results were similar to 512 (I wasn't recording explicit numbers at that time).


Can you tell us a little about the hard disk setup on the client and server?

Go back about 10 posts. But I'll list it all here again.

Server 1:
P3-1000
1GB pc133
Intel Pro/1000 MT server adapter in a 64bit 66mhz pci slot.
4x36GB scsi drives in Raid-5
Windows Server 2003

Client 1:
P4-2.8Ghz
2GB pc2100
Integrated Intel gigabit adapter
80GB ide harddrive
Windows XP Pro

Client 2:
Dual Xeon 2.8Ghz
2GB pc3200
Integrated Intel gigabit adapter
3x36GB scsi drives in Raid-0
Windows XP Pro

Client 3:
Dual Xeon 1.4Ghz
2GB pc133
Onboard Broadcom gigabit adapter
4x72GB scsi drives in Raid-5
Windows Server 2000

Client 4:
Dual Xeon 2.8Ghz
1GB pc3200
Integrated Intel gigabit adapter
80GB IDE harddrive
Windows XP Pro

Client 5:
P3-500
384mb pc100
Intel Pro/1000 MT Server Adapter (in 32bit pci slot)
10GB ide harddrive
Windows 2000
 

MikeDub83

Member
Apr 6, 2003
96
0
0
Originally posted by: Kilrsat
A test changing the Receive/Transmit Descriptors @ 128, 256, and 512

Pulling a 4.12GB folder from Server 1 to Client 1 and Client 2 at the same time:

Both at 512 on all 3 machines:
Peak: 58.4MB/s
Avg: 38MB/s

Both at 256 (default) on all 3 machines:
Peak: 64.1MB/s
Avg: 48MB/s

Both at 128 on all 3 machines:
Peak: 60.7MB/s
Avg: 42MB/s

I did an informal test @ 2048 yesterday and the results were similar to 512 (I wasn't recording explicit numbers at that time).



Okay, I wanted to establish the hard drives used in this test to make sure they are not the limiting factor:

Server-
4x36GB scsi drives in Raid-5
Client 1-
80GB ide harddrive
Client 2-
3x36GB scsi drives in Raid-0

As you can see, a non-issue.

One other question... You say you had an average of 48 MB/s. Does that mean 48 MB/s output from the server or input at each client?
 

Kilrsat

Golden Member
Jul 16, 2001
1,072
0
0
Originally posted by: MikeDub83
Originally posted by: Kilrsat
A test changing the Receive/Transmit Descriptors @ 128, 256, and 512

Pulling a 4.12GB folder from Server 1 to Client 1 and Client 2 at the same time:

Both at 512 on all 3 machines:
Peak: 58.4MB/s
Avg: 38MB/s

Both at 256 (default) on all 3 machines:
Peak: 64.1MB/s
Avg: 48MB/s

Both at 128 on all 3 machines:
Peak: 60.7MB/s
Avg: 42MB/s

I did an informal test @ 2048 yesterday and the results were similar to 512 (I wasn't recording explicit numbers at that time).



Okay, I wanted to establish the hard drives used in this test to make sure they are not the limiting factor:

Server-
4x36GB scsi drives in Raid-5
Client 1-
80GB ide harddrive
Client 2-
3x36GB scsi drives in Raid-0

As you can see, a non-issue.

One other question... You say you had an average of 48 MB/s. Does that mean 48 MB/s output from the server or input at each client?

That's 48MB/s leaving the server.
 

MikeDub83

Member
Apr 6, 2003
96
0
0
For those interested in seeing my benchmark comparing 10 Mbps hub vs 100 Mbps switch vs 1 Gbps switch, I have completed 2/3 of it. I'm waiting for at least another paycheck for the gigabit gear.

Something very unexpected happened while testing the 10 Mbps hub: Windows File Sharing (SMB) is faster than FTP.

--------- 10 Mbps Hub--------------
FTP Average Speed: 999.4 KB/s
SMB Average Speed: 1.05 MB/s

EDIT: The difference might seem insignificant. What makes it so interesting, though, is that FTP is 25% faster than SMB on a 100 Mbps switch.
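
If anyone wants to repeat the comparison, here is a rough Python sketch, assuming the same test file is reachable both from an FTP server and via a mapped Windows share; the host, login, and both paths below are placeholders, not MikeDub83's actual setup.

```python
# Rough sketch of an FTP-vs-SMB timing comparison on the same file.
# The host, login, and both paths are placeholders.
import time
from ftplib import FTP

HOST = "192.168.1.10"                       # hypothetical server address
FTP_PATH = "testfile.bin"                   # file as seen by the FTP server
SMB_PATH = r"\\server\share\testfile.bin"   # same file via file sharing

def time_ftp():
    ftp = FTP(HOST)
    ftp.login()                             # anonymous login for the test
    received = 0
    def count(chunk):
        nonlocal received
        received += len(chunk)
    start = time.time()
    ftp.retrbinary(f"RETR {FTP_PATH}", count, blocksize=64 * 1024)
    elapsed = time.time() - start
    ftp.quit()
    return received / 1024 / elapsed        # KB/s

def time_smb():
    received = 0
    start = time.time()
    with open(SMB_PATH, "rb") as f:
        while chunk := f.read(64 * 1024):
            received += len(chunk)
    return received / 1024 / (time.time() - start)   # KB/s

print(f"FTP: {time_ftp():.1f} KB/s   SMB: {time_smb():.1f} KB/s")
```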
 

andy0310

Junior Member
Feb 24, 2004
1
0
0
Originally posted by: foshizzle
so has anyone tried a crossover vs hub/switch benchmark?

I have been using a Cat5 crossover cable between an AMD 1600+ and my old Celeron 366 with Intel Pro/1000 MT desktop adapters. A Sandra test shows 32MB/s. Both are on 32-bit/33MHz PCI. I have done another test with a Dell PowerEdge 2600 server and a Dell Precision 650 workstation with Intel Pro/1000 MT server adapters. Both are on 64-bit/66MHz PCI. I was getting low 60MB/s. Link to my test result.

I just got a Dell 2616 16-port gigabit switch. Tried it yesterday. I was only getting 25-26MB/s between the same AMD and Celeron with Sandra. I will spend some time testing with 64-bit/66MHz PCI later.

Found an interesting article here.

Another roundup.
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,539
418
126
Welcome to the Network Forum, Andy. I must say you are one of the very few who start here with a contribution rather than a complaint. Keep it up. :D

I never tried a crossover connection. With a switch I can get similar results. Unfortunately, a crossover solution is not very practical when you have a network.

It seems to me at this point that the switch issue has to be investigated further.

 

Fiveohhh

Diamond Member
Jan 18, 2002
3,776
0
0
Anyone have any results with some Linux boxes? I'll be able to get some numbers in a week or so between Debian and XP Pro. Will get some specs/numbers up when I get my switch.
 

Haden

Senior member
Nov 21, 2001
578
0
0
Host A: AMD 1800+/768 DDR266/SMC 9452TX V2.0/Linux 2.4.20
Host B: AMD 2200+/512 DDR266/3C940 integrated (A7V600)/Linux 2.6.3
(crossover)

Netperf from A to B:
Throughput = 37.18 MBytes/s
Utilization A = 98.09%
Utilization B = 18.01%

Netperf from B to A:
Throughput = 32.25 MBytes/s
Utilization A = 27.50%
Utilization B = 11.10%

However, I can't draw any conclusions about the NICs; the hosts differ too much (even the kernel can have an impact).
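
Netperf is the right tool for this kind of test, but for anyone without it handy, here is a rough Python stand-in for a TCP_STREAM-style run using plain sockets. The address, port, and 10-second duration are placeholder assumptions, and it measures application-to-application throughput only, not CPU utilization.

```python
# Crude stand-in for a netperf TCP_STREAM run: the client blasts zeros at
# the server for a fixed time, the server counts what arrives. Address,
# port, and duration are placeholders. Run "python bench.py server" on one
# host and "python bench.py" on the other.
import socket
import sys
import time

ADDR = ("192.168.0.2", 5001)     # hypothetical receiver address
DURATION = 10.0                  # seconds to transmit
BUF = b"\x00" * (256 * 1024)     # 256 KB send buffer

def server():
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", ADDR[1]))
        srv.listen(1)
        conn, _ = srv.accept()
        total, start = 0, time.time()
        with conn:
            while data := conn.recv(256 * 1024):
                total += len(data)
        elapsed = time.time() - start
        print(f"{total / 1e6 / elapsed:.1f} MB/s "
              f"({total * 8 / 1e6 / elapsed:.0f} Mb/s)")

def client():
    with socket.create_connection(ADDR) as conn:
        end = time.time() + DURATION
        while time.time() < end:
            conn.sendall(BUF)      # closing the socket ends the test

if __name__ == "__main__":
    server() if sys.argv[1:] == ["server"] else client()
```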
 

cmetz

Platinum Member
Nov 13, 2001
2,296
0
0
Fiveohhh, I've been doing a lot of gigabit testing in my lab for a project. I can give you an executive summary:

Linux kernel 2.6.4 performs better than 2.4.25 by a small margin.

The Intel 82547EI CSA LOM gigE is the best-performing interface, but you have to have it on the motherboard. The Netgear GA302T / BCM5701 gigE is the best-performing PCI NIC, with the Pro/1000 MT following closely. The RTL8169 causes system lock-ups when you try to drive it hard; I hope that's a driver problem.

A P4 with an 800MHz FSB + 875P + dual-channel PC3200 is the best-performing CPU/chipset configuration. In all cases so far, the P4 platform beats the Athlon platform solidly in I/O performance. I don't know whether this is the CPU, the FSB, or the chipsets; probably a little of each. I have not tested the A64 yet; I think it's not yet a mature platform anyway, so any tests would be unfavorable.

Best sustained result I have is 956Mb/s UDP, 891Mb/s TCP, sustained netperf application to application: 82547EI CSA P4 2.8 -> 82540EM PCI P4 2.4, kernel 2.6.4, MTU=1500, Hawking 4-port switch in between. For reference, once framing and protocol overhead are accounted for, moving >900Mb/s of payload is generally considered "line rate" for gigE (see the quick calculation below). Though I want to see more like 990Mb/s before I call it line rate ;)
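
As a sanity check on those figures, the framing overhead alone at a 1500-byte MTU puts the goodput ceiling very close to what was measured. This is a back-of-the-envelope sketch using the standard Ethernet/IP header sizes and no TCP options; it is not cmetz's actual test method.

```python
# Back-of-the-envelope ceiling for gigE goodput at MTU 1500. The header
# sizes are the standard ones; TCP options (e.g. timestamps) would shave
# a little more off the TCP figure.
LINE_RATE = 1_000_000_000          # bits/s
MTU = 1500                         # IP packet size, bytes
WIRE_OVERHEAD = 8 + 14 + 4 + 12    # preamble+SFD, Ethernet header, FCS, IFG
IP_HDR, TCP_HDR, UDP_HDR = 20, 20, 8

wire_bytes = MTU + WIRE_OVERHEAD           # 1538 bytes on the wire per frame
udp_payload = MTU - IP_HDR - UDP_HDR       # 1472 bytes of UDP payload
tcp_payload = MTU - IP_HDR - TCP_HDR       # 1460 bytes of TCP payload

print(f"UDP ceiling: {LINE_RATE * udp_payload / wire_bytes / 1e6:.0f} Mb/s")  # ~957
print(f"TCP ceiling: {LINE_RATE * tcp_payload / wire_bytes / 1e6:.0f} Mb/s")  # ~949
```

By that arithmetic, 956Mb/s of UDP is essentially the protocol ceiling at MTU 1500, and 891Mb/s of TCP is within several percent of it once TCP options and ACK traffic are factored in.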
 

Fiveohhh

Diamond Member
Jan 18, 2002
3,776
0
0
Thanks for the info what are you using to measure the throughput? My switch will be here in a week so hopefully I can get some numbers up after that.
 

Devistater

Diamond Member
Sep 9, 2001
3,180
0
0
Originally posted by: Link19
Does Gigabit Ethernet give any performance improvement for online activities with a cable connection?

No. Since 99.99% of all cable/DSL connections run at under 6Mbit speeds, which don't strain 10Mbps or even 100Mbps networks, you will see absolutely no difference online. Even if it responds a tiny bit faster in latency rather than throughput, say half a millisecond due to better quality hardware, you wouldn't even notice that online.

Not only that, but 99.99% of all cable/DSL connections have only 10Mbps network hardware in them to save money, so it wouldn't do anything throughput-wise anyway.

If your ONLY concern is online activities with cable or DSL and you never transfer files locally across a network/LAN, a 10Mbps network is plenty. However, it's actually cheaper nowadays to go with 10/100 gear, and you might want to transfer files locally anyway, so there's no point in buying 10Mbps-only stuff anymore.

 

Thoreau

Golden Member
Jan 11, 2003
1,441
0
76
Apologies if this has already been mentioned or replied to, but it seems to me that the PCI bus would be more of a limiting factor than the hard drive. Basically, let's say you have a drive that is capable of 45MB/sec rates in real-world applications. You take that data and shove it through the PCI bus once to get it to memory, then, from memory, you shove it through again to get it to a PCI NIC. I would expect that having more memory would not on its own make the difference in performance, but it would help relieve some of the PCI congestion in the first place by letting cached data avoid crossing the PCI bus twice.

On that note, I wonder how well a ramdisk setup would work for gig transfer speeds?

(Please correct me if I'm off on the above theories, but they seem solid in my little mind. =) )

Edit: I wonder whether that Maxtor MaxBoost software that was in beta a few months back could help at all?
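
Thoreau's point checks out on paper: a shared 32-bit/33MHz PCI bus tops out around 133 MB/s in theory, and if the file data has to cross it twice the ceiling is roughly halved before any protocol overhead. A quick sketch of the arithmetic, using theoretical burst figures only:

```python
# Rough upper bounds for a shared 32-bit/33MHz PCI bus when both the disk
# controller and the NIC sit on it, so file data crosses the bus twice
# (disk -> memory, then memory -> NIC). These are theoretical burst
# figures; real buses deliver noticeably less.
bus_mb_per_sec = 33.33e6 * 32 / 8 / 1e6      # ~133 MB/s theoretical burst
two_crossings = bus_mb_per_sec / 2           # disk traffic and NIC traffic share it

print(f"PCI 32-bit/33MHz burst ceiling:  {bus_mb_per_sec:.0f} MB/s")
print(f"Ceiling when data crosses twice: {two_crossings:.0f} MB/s")
# 64-bit/66MHz PCI (~533 MB/s) or a CSA-attached NIC takes the gigE traffic
# off that shared 133 MB/s bus, which is why those setups fare better here.
```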
 

Kilrsat

Golden Member
Jul 16, 2001
1,072
0
0
Originally posted by: Thoreau
Apologies if this has already been mentioned or replied to, but it seems to me that the PCI bus would be more of a limiting factor than the hard drive. Basically, let's say you have a drive that is capable of 45MB/sec rates in real-world applications. You take that data and shove it through the PCI bus once to get it to memory, then, from memory, you shove it through again to get it to a PCI NIC. I would expect that having more memory would not on its own make the difference in performance, but it would help relieve some of the PCI congestion in the first place by letting cached data avoid crossing the PCI bus twice.

On that note, I wonder how well a ramdisk setup would work for gig transfer speeds?

(Please correct me if I'm off on the above theories, but they seem solid in my little mind. =) )

Edit: I wonder whether that Maxtor MaxBoost software that was in beta a few months back could help at all?
Limitations of the PCI bus are why I was testing machines with 64-bit/66MHz PCI slots, or, in the case of the two Xeon workstations, integrated gigabit adapters on Intel's CSA bus.

Even from one CSA-connected adapter to another, with both machines on SCSI RAID-0 arrays, the SMB transfers were still only 55MB/s.
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
New to the network forum, I just wanted to say that my experiences are similar to most of yours:

Server:
Linux 2.6.3 kernel (Mandrake 10 Community, I never bothered to upgrade to the full release version)
Intel 32 bit 66MHz GbE card
4x Seagate 7200.7 drives on 3Ware 64 bit/66MHz in hardware RAID 5 config
Only driver options used: RX/TX buffers both maxed

Client (my primary computer):
Win 2000 Pro
Realtek GbE built into ASUS A7N8x-E
1x Seagate 7200.7
only driver options used: RX/TX buffers both maxed

The server computer has shown sustained read speeds of over 130 MB/sec in benchmarks. Write speeds are considerably lower in the hardware RAID 5 config, in the 30 MB/sec range. This seems limited by the 3Ware card's ability to calculate parity, since software RAID 5 write speeds of 80+ MB/sec are possible. I'm mostly using the array for reads, so I chose the low CPU usage of hardware RAID 5 over the high write speed but high CPU usage of software RAID 5.

Samba transfers by dragging and dropping on the Windows box are around 25-30 MB/sec (using a ~3.5GB zip file).
FTP transfers from server to client are 35-40 MB/sec (using the same ~3.5GB zip file).

In my case, I tried the following:
Jumbo frames lower CPU usage, but do not affect transfer rates.
Monitoring the network connection from the server side showed alternating spikes to 60-80 MB/sec and periods of no transfer.
Putting a server-oriented GbE card in the client didn't change transfer rates.
Transfer rates are the same with or without a switch. I also tried Cat5, Cat5e, Cat6, crossover, straight-through, store-bought, and hand-made cables; no difference with any of them. (For those who don't know, the GbE spec requires auto-negotiation of the crossover, so crossover cables are never required, but they can be used anywhere if you want.)

I think it's pretty clear in this case that my client write speed is limiting performance. I was going to set up a client-side RAM drive to see what was possible, and try it with my backup computer using a 73GB 15k Seagate Cheetah drive, but I got bored with the network testing and got caught up moving into my new house; I'm currently installing network/phone/cable ports in every room.
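
One way to get roughly the same answer as the RAM-drive test without building one, assuming the server's existing FTP service is used: pull the file with a small Python script that discards the data instead of writing it, so the client's disk is out of the loop. The host, login, and file name below are placeholders.

```python
# Pull a file over FTP but discard the data instead of writing it, so only
# the network path and client CPU are measured. Host, login, and file name
# are placeholders.
import time
from ftplib import FTP

HOST = "192.168.1.10"     # hypothetical server address
PATH = "bigfile.zip"      # the ~3.5GB test file on the server

received = 0
def discard(chunk):
    global received
    received += len(chunk)   # count the bytes, never touch the disk

ftp = FTP(HOST)
ftp.login()                  # adjust credentials as needed
start = time.time()
ftp.retrbinary(f"RETR {PATH}", discard, blocksize=256 * 1024)
elapsed = time.time() - start
ftp.quit()

print(f"{received / 1e6 / elapsed:.1f} MB/s with no disk writes on the client")
```

If that number jumps well above the 35-40 MB/sec seen with real FTP downloads, the client's single drive is the bottleneck; if it doesn't, the limit is somewhere else on the path.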