gunrunnerjohn
- Nov 2, 2002
That's what this entire thread is devoted to!
We're trying to determine how to wring max performance out of gigabit links...
Originally posted by: Kilrsat
A test changing the Receive/Transmit Descriptors @ 128, 256, and 512
Pulling 4.12GB folder from Server 1 to Client 1 and Client 2 at the same time -
Both at 512 on all 3 machines:
Peak: 58.4 MB/s
Avg: 38 MB/s
Both at 256 (default) on all 3 machines:
Peak: 64.1 MB/s
Avg: 48 MB/s
Both at 128 on all 3 machines:
Peak: 60.7 MB/s
Avg: 42 MB/s
I did an informal test @ 2048 yesterday and the results were similar to 512 (I wasn't recording explicit numbers at that time).
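For scale, the averages above translate into rough wall-clock times for the 4.12 GB pull. A quick sketch (not from the thread; it assumes "GB" here means 1024 MB):

```python
# Rough wall-clock time for the 4.12 GB test folder at each measured average rate.
# Assumes 1 GB = 1024 MB; the descriptor counts and averages come from the post above.
FOLDER_MB = 4.12 * 1024

for descriptors, avg_mb_s in [(512, 38), (256, 48), (128, 42)]:
    seconds = FOLDER_MB / avg_mb_s
    print(f"{descriptors} descriptors: {avg_mb_s} MB/s -> ~{seconds:.0f} s")
```

The whole run takes only a minute or two either way, so differences in disk caching between runs could easily blur a comparison this close.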
Originally posted by: MikeDub83
Can you tell us a little about the hard disk setup on the client and server?
Originally posted by: MikeDub83
Okay, I wanted to establish the hard drives used in this test to make sure they are not the limiting factor:
Server: 4x 36 GB SCSI drives in RAID-5
Client 1: 80 GB IDE hard drive
Client 2: 3x 36 GB SCSI drives in RAID-0
As you can see, a non-issue.
One other question... You say you had an average of 48 MB/s. Does that mean 48 MB/s of output from the server, or 48 MB/s of input at each client?
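The answer changes the picture quite a bit. A back-of-the-envelope sketch of the two readings against gigabit's theoretical payload ceiling (only the 48 MB/s average is from the thread; the rest is simple arithmetic):

```python
# Gigabit Ethernet carries 1 Gb/s on the wire; ignoring Ethernet/IP/TCP framing
# overhead, that caps payload at roughly 119 MB/s (1e9 bits / 8 / 2**20).
WIRE_CEILING_MB_S = 1_000_000_000 / 8 / 2**20  # ~119.2 MB/s

avg_mb_s = 48  # reported average from the test above

# Reading A: 48 MB/s is the server's total output, shared by two clients.
per_client_a = avg_mb_s / 2   # each client saw ~24 MB/s
# Reading B: 48 MB/s is what each client received.
aggregate_b = avg_mb_s * 2    # the server pushed ~96 MB/s total

print(f"wire ceiling: ~{WIRE_CEILING_MB_S:.1f} MB/s")
print(f"reading A: {per_client_a:.0f} MB/s per client")
print(f"reading B: {aggregate_b:.0f} MB/s aggregate ({aggregate_b / WIRE_CEILING_MB_S:.0%} of ceiling)")
```

Under reading B the server would already be pushing over 80% of the theoretical wire rate, leaving little headroom for further tuning.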
Originally posted by: foshizzle
so has anyone tried a crossover vs hub/switch benchmark?
Originally posted by: Link19
Does Gigabit Ethernet give any performance improvement for online activities with a cable connection?
Limitations of the PCI bus are why I was testing machines with 64-bit/66 MHz PCI slots, or, in the case of the two Xeon workstations, the integrated gigabit adapters on Intel's CSA bus.
Originally posted by: Thoreau
Apologies if this has already been mentioned or replied to, but it seems to me that the PCI bus would be more of a limiting factor than the hard drive. Say you have a drive capable of 45 MB/s in real-world use: that data crosses the PCI bus once to get into memory, then crosses it again on the way from memory to a PCI NIC. I wouldn't expect more memory on its own to make the difference in performance, but it could help by keeping some of that data from crossing the PCI bus twice and congesting it in the first place.
On that note, I wonder how well a ramdisk setup would work for gigabit transfer speeds?
(Please correct me if I'm off on the above theories, but they seem solid in my little mind. =) )
Edit: I'd wonder if that Maxtor MaxBoost software that was out in beta a few months back would be something that could help at all?
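Thoreau's point about data crossing the bus twice is easy to put in rough numbers (a sketch; these are theoretical bus peaks, and sustained rates on a shared PCI bus are considerably lower):

```python
# Theoretical peak bandwidth of a PCI bus: bytes per transfer * clock rate.
def pci_peak_mb_s(width_bits: int, clock_mhz: int) -> float:
    return width_bits / 8 * clock_mhz  # e.g. 4 bytes * 33 MHz = 132 MB/s

classic_pci = pci_peak_mb_s(32, 33)  # 132 MB/s, shared by every device on the bus
wide_pci = pci_peak_mb_s(64, 66)     # 528 MB/s

# If the disk controller and NIC sit on the same bus, payload crosses it twice
# (disk -> memory, then memory -> NIC), halving the usable ceiling.
print(f"32-bit/33 MHz: {classic_pci:.0f} MB/s peak, ~{classic_pci / 2:.0f} MB/s for disk->NIC")
print(f"64-bit/66 MHz: {wide_pci:.0f} MB/s peak, ~{wide_pci / 2:.0f} MB/s for disk->NIC")
```

Even halved, 64-bit/66 MHz PCI stays well above gigabit wire speed, which lines up with the choice above to test only 64-bit slots or CSA-attached adapters.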