Gigabit Feasibility

gutharius

Golden Member
May 26, 2004
1,965
0
0
If I build a 32-bit / 33 MHz bus system with an Ultra320 SCSI drive, an Athlon Mobile 2600+, and Crucial memory, would I be able to get anywhere near or past 750 Mbps with a gigabit NIC? I have seen reviews that pretty much show you need a 64-bit / 66 MHz system to accomplish this, with a switch that supports adjustable MTU sizes. Basically, would having an Ultra320 drive, with the system above, solve the problem of the hard drive not being able to spit the data out fast enough?
 

cmetz

Platinum Member
Nov 13, 2001
2,296
0
0
Doing what? What software? What chipset? What gigabit NIC? What drive? What's the other end?

If you're talking about Windows SMB/CIFS file sharing, you might get over 100 Mb/s, but likely not over 200 Mb/s. A better OS + protocol + filesystem can do better. Given that good gigabit NICs (Intel Pro/1000 MT) are about $35, gigabit is still a worthwhile upgrade over 100 Mb/s: you're already spending all that money, and for a small amount more you get more network performance.
 

nightowl

Golden Member
Oct 12, 2000
1,935
0
0
If I remember correctly, the maximum that even high-end servers can push out is about 750 Mbps (I think I read this in Network World). That's on high-end hardware with no limitations on the PCI bus or from the hard drives. Also, even with an Ultra320 drive, the most you will be able to push with one drive is about 600 Mbps, and that assumes a sustained transfer rate of over 70 MBps.
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
NO.

32-bit / 33 MHz PCI is going to be limited to about 50-60 MB/sec maximum.

Every piece of data needs to travel across the 133 MB/sec PCI bus twice, once for the network card and once for the hard drive. The best you can hope for is half of that after overhead, or around 50-55 MB/sec.

It simply cannot be done until you move either the hard drive or the network controller off the PCI bus, or put them on a bus with larger throughput.

Even then, single-drive throughput is a problem: even the fastest SCSI drive isn't fast enough to always write at those speeds. If you look at the spec sheets of 15k RPM drives, you'll see that MEDIA transfer rates are around 600-900 Mbit per second. Those figures don't include overhead, and are likely READ speeds only; write speeds are typically slightly slower. A single drive could only maintain 750 Mbit over less than half of the platter.

To sustain 750 Mbit you need at least two drives in a RAID array AND a bus wider than 32-bit / 33 MHz.

THEN, and only then, will you have a CHANCE at high throughput... after you tackle any OS issues with transferring data that quickly.
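
Here's a rough back-of-the-envelope sketch of the math above. The 133 MB/sec figure is the theoretical 32-bit / 33 MHz PCI limit; the 80% bus efficiency and the ~70 MB/sec per-drive sustained figure are ballpark assumptions on my part, not measurements:

```python
# Back-of-the-envelope numbers for the argument above.
# The efficiency and per-drive figures are rough assumptions, not measurements.

PCI_32_33_MBPS = 133      # theoretical 32-bit / 33 MHz PCI bandwidth, MB/sec
PCI_EFFICIENCY = 0.80     # assume ~20% lost to arbitration and overhead

# Data crosses the PCI bus twice (disk controller -> RAM -> NIC), so the
# usable file-transfer rate is roughly half the effective bus rate.
effective_bus = PCI_32_33_MBPS * PCI_EFFICIENCY
pci_limited = effective_bus / 2
print(f"PCI-limited transfer: ~{pci_limited:.0f} MB/sec (~{pci_limited * 8:.0f} Mbit/sec)")

# The 750 Mbit/sec target, expressed in MB/sec
target = 750 / 8
print(f"750 Mbit/sec target:  ~{target:.0f} MB/sec")

# A fast 15k RPM drive quotes ~600-900 Mbit/sec media rate (reads, no overhead);
# call it ~70 MB/sec sustained to be optimistic.
per_drive = 70
drives_needed = int(-(-target // per_drive))  # ceiling division
print(f"Drives needed at ~{per_drive} MB/sec each: {drives_needed}")
```

Running that gives roughly 53 MB/sec (~426 Mbit) as the PCI ceiling versus a ~94 MB/sec target, and at least two drives striped together on the storage side.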
 

gutharius

Golden Member
May 26, 2004
1,965
0
0
So:

My Samba server would need to be Linux. I am cool with that.
How many SATA drives would I need in a RAID array, and what type of array would provide the best data-pumping power?
I see I will need a 64-bit / 66 MHz bus. Looks good.
Would a mobo supporting PCI Express be a better choice from a data throughput standpoint? i.e., would it let me get maximum output and input?
I have seen specs for 32-bit / 33 MHz mobos where the gigabit Ethernet interface and controller chip are wired directly to the southbridge, thus avoiding the PCI bus.
What are your thoughts on this?
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
Yes, getting the gigabit controller off the PCI bus is a solution. This is the case with some Intel and nForce3 motherboards, as well as most PCI Express motherboards. Then the 32-bit / 33 MHz bus can mostly be dedicated to the storage subsystem.

RAID 0 would provide the most raw throughput, but you lose reliability big time. I use a RAID 5 array in a 64-bit / 66 MHz arrangement, and I have still only gotten ~45 MB/sec sustained through the gigabit link, presumably because of my CLIENT's hard drive write speeds. My CPU usage was pretty high as well; you may need to ensure the entire network is running jumbo frames to increase efficiency and keep the CPU usage down.
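
For what it's worth, here is a quick sketch of why jumbo frames help, using the textbook Ethernet/IP/TCP header sizes and the usual 1500-byte vs 9000-byte MTUs (it ignores TCP options, VLAN tags, and so on):

```python
# Wire efficiency and packet rate for standard vs jumbo frames on gigabit.
# Header sizes are the textbook values; TCP options and VLAN tags are ignored.

ETH_OVERHEAD = 14 + 4 + 8 + 12   # Ethernet header + FCS + preamble + inter-frame gap
IP_TCP = 20 + 20                 # IPv4 header + TCP header

def frame_stats(mtu, line_rate_mbps=1000):
    payload = mtu - IP_TCP                        # TCP payload bytes per frame
    wire_bytes = mtu + ETH_OVERHEAD               # bytes on the wire per frame
    efficiency = payload / wire_bytes
    packets_per_sec = (line_rate_mbps * 1_000_000 / 8) / wire_bytes
    return efficiency, packets_per_sec

for mtu in (1500, 9000):
    eff, pps = frame_stats(mtu)
    print(f"MTU {mtu}: {eff:.1%} payload efficiency, ~{pps:,.0f} frames/sec at line rate")
```

The efficiency gain is modest (roughly 95% to 99%), but the ~6x drop in frames, and therefore interrupts, per second is what really keeps the CPU usage down.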

FYI, I get 45 MB/sec when I use FTP to transfer. Using Samba and Windows drag-and-drop knocks me down about 10 MB/sec, to closer to 35 MB/sec. This is measured by timing the transfer of a 3.8 GB compressed file. Rather than OS issues, I should have said transfer protocol issues.
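
If anyone wants to reproduce that kind of measurement, a sketch like this does the timing for you; the file and mount paths are placeholders, not my actual setup:

```python
# Time the copy of a large file to a network-mounted share and report MB/sec.
# SRC and DST are placeholders -- point them at your own test file and mount.
# Use a file much larger than RAM so write buffering doesn't flatter the number.
import os, shutil, time

SRC = "/tmp/testfile.bin"             # e.g. a ~3.8 GB compressed file
DST = "/mnt/fileserver/testfile.bin"  # destination on the mounted share

size_mb = os.path.getsize(SRC) / (1024 * 1024)
start = time.time()
shutil.copyfile(SRC, DST)
elapsed = time.time() - start
print(f"{size_mb:.0f} MB in {elapsed:.1f} s = {size_mb / elapsed:.1f} MB/sec")
```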
 

gutharius

Golden Member
May 26, 2004
1,965
0
0
Hmm... I wonder if a TCP/IP-based file server would do better as far as transfer rate goes?

I thought RAID allowed you to combine the total data output and input so you can increase the data throughput. Doesn't mirroring accomplish this by allowing the hard drives to spit out the data as one?
 

cmetz

Platinum Member
Nov 13, 2001
2,296
0
0
gutharius, consider the Intel 875 + 82541GI CSA solution with some flavor of hardware RAID. For example, an ABit IC7-G and dual 73GB Raptors in RAID 0. Get an 800 MHz FSB P4 and a dual-channel memory kit; 512MB is fine. In theory, the new Intel 925 would be even better at this, but it sounds like that chipset has problems right now.

The nForce3 250Gb claims to do the same thing on an A64, and there are claims of an nForce2 spin with the same capabilities. I'm really skeptical about NVIDIA and networking, though. At the very least, I would never want to be the early adopter of an NVIDIA product - give them 6+ months to bake the drivers.

Samba is pretty good and Linux is good, but the SMB protocol and the Windows client side are not so good. Keep that in mind when building.

For amusement, you might consider downloading Services for UNIX 3.5 and trying both NFSv3 and SMB between your clients and servers - if both will functionally do all you need, go with whichever solution gives you the best performance. I believe the Linux kernel-space NFSv3 server will give much better performance on that end, but then, Windows clients are optimized for SMB... so it's an interesting experiment for someone to run.
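
If someone does run that experiment, one straightforward way to compare the two is to time a sequential read of the same large file through each mount. A minimal sketch, with hypothetical mount points (use a file bigger than the client's RAM, or drop caches between runs, so local caching doesn't skew the numbers):

```python
# Compare read throughput through an NFS mount and an SMB mount by timing
# a sequential read of the same large file. Mount points are hypothetical.
import time

MOUNTS = {
    "NFSv3": "/mnt/nfs/bigfile.bin",   # hypothetical NFS mount
    "SMB":   "/mnt/smb/bigfile.bin",   # hypothetical SMB mount
}
CHUNK = 1024 * 1024  # read in 1 MB chunks

for name, path in MOUNTS.items():
    total = 0
    start = time.time()
    with open(path, "rb") as f:
        while data := f.read(CHUNK):
            total += len(data)
    rate = total / (1024 * 1024) / (time.time() - start)
    print(f"{name}: {rate:.1f} MB/sec")
```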
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
RAID on one end isn't enough; to sustain 750 Mbit, you need RAID on BOTH ends of the transfer. You can't deliver 750 Mbit if the client cannot receive it. Large square peg, small round hole.

RAID 0 with two drives should be adequate from a sustained throughput standpoint. RAID 0 is not mirrored, it's striped. RAID 1 is mirrored, and it isn't any faster than a single drive for writes.
Check here for information on different RAID levels:
http://www.integratedsolutions.org/raid_ov.htm
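
To make the striping-vs-mirroring difference concrete, here's a toy model of where each block of a file ends up on a two-drive array (purely illustrative; real controllers do this at the block level in hardware or the driver):

```python
# Toy illustration of RAID 0 (striping) vs RAID 1 (mirroring) with two drives.
blocks = [f"block{i}" for i in range(6)]

# RAID 0: blocks alternate between the drives, so sequential transfers can use
# both drives at once (up to ~2x one drive's throughput, but no redundancy).
raid0 = {"drive0": blocks[0::2], "drive1": blocks[1::2]}

# RAID 1: every block goes to both drives, so writes are no faster than one
# drive, but either drive can die without losing data.
raid1 = {"drive0": list(blocks), "drive1": list(blocks)}

print("RAID 0:", raid0)
print("RAID 1:", raid1)
```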

As far as solving OS/protocol issues, I'm out of my league there.
 

gutharius

Golden Member
May 26, 2004
1,965
0
0
Thanks for all your help, guys! I really appreciate it. I am beginning to think the invention of Gigabit was a case of building the cart before inventing the horse. But I will trudge on and see what happens.
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
Originally posted by: Concillian
RAID on one end isn't enough; to sustain 750 Mbit, you need RAID on BOTH ends of the transfer. You can't deliver 750 Mbit if the client cannot receive it. Large square peg, small round hole.

RAID 0 with two drives should be adequate from a sustained throughput standpoint. RAID 0 is not mirrored, it's striped. RAID 1 is mirrored, and it isn't any faster than a single drive for writes.
Check here for information on different RAID levels:
http://www.integratedsolutions.org/raid_ov.htm

As far as solving OS/protocol issues, I'm out of my league there.

Well, actually it's best to keep all of this stuff in memory if you can, so you eliminate the hard drive bottleneck. Smart controllers with loads of memory are pretty good at this, and at pre-fetching the data.
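
You can see the same idea on the host side with the OS file cache: read a big file twice and the second pass comes out of RAM instead of the disk. A quick sketch (the path is a placeholder, and the file needs to fit in memory for the warm read to be cached):

```python
# Read the same file twice and compare throughput. The second pass is normally
# served from the OS page cache (RAM) rather than the disk -- the same idea as
# a caching controller, as long as the data fits in memory.
import time

PATH = "/tmp/testfile.bin"  # placeholder; pick a file that fits in RAM
CHUNK = 1024 * 1024

def read_rate(path):
    total = 0
    start = time.time()
    with open(path, "rb") as f:
        while data := f.read(CHUNK):
            total += len(data)
    return total / (1024 * 1024) / (time.time() - start)

print(f"cold read: {read_rate(PATH):.0f} MB/sec")
print(f"warm read: {read_rate(PATH):.0f} MB/sec  (page cache)")
```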
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
Yeah, of course. But in reality you'll find it's rarely in memory... especially on the client side. What's it going to do, pre-write the data it thinks it's going to need to write?

Realistically, how many client controllers have a significant amount of cache?

On the 'cart before the horse' comment, I wouldn't think of it that way, but more in the way that drive transfer protocols are made: making sure the expensive hardware isn't subject to a bottleneck. IDE controllers were 66 MB/sec well before drives could even do 33 MB/sec. Why? Because the cheap part of the system shouldn't limit the expensive part. You want the expensive part to work to its fullest.

In the case of gigabit, at this point it's a cheap upgrade from 100BaseT. Cards are like $10-15 more and switches are less than $100 more.

So for a nominal investment you can get 3-5x the transfer rate of 100BaseT. You can't MAX OUT gigabit at this time, but that's the way infrastructure should work: avoid traffic jams by growing the roads faster than the number of cars on them.

Most pieces of infrastructure in the PC have a hard time being used to the max:
AGP
USB2.0
SATA
Parallel UATA
PCI to some extent; it's limiting in some cases, but not in the majority of cases.

That's the way it should be. The advent of Gigabit over Cat5e is the paradigm shift for networking. From this point forward, we'll see the network as a non-throughput-limiting device for regular users, where before, the network was the limit and could potentially choke off the more expensive storage system.
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
Originally posted by: Concillian
Yeah, of course. But in reality you'll find it's rarely in memory... especially on the client side. What's it going to do, pre-write the data it thinks it's going to need to write?

Realistically, how many client controllers have a significant amount of cache?

On the 'cart before the horse' comment, I wouldn't think of it that way, but more in the way that drive transfer protocols are made: making sure the expensive hardware isn't subject to a bottleneck. IDE controllers were 66 MB/sec well before drives could even do 33 MB/sec. Why? Because the cheap part of the system shouldn't limit the expensive part. You want the expensive part to work to its fullest.

In the case of gigabit, at this point it's a cheap upgrade from 100BaseT. Cards are like $10-15 more and switches are less than $100 more.

So for a nominal investment you can get 3-5x the transfer rate of 100BaseT. You can't MAX OUT gigabit at this time, but that's the way infrastructure should work: avoid traffic jams by growing the roads faster than the number of cars on them.

Most pieces of infrastructure in the PC have a hard time being used to the max:
AGP
USB2.0
SATA
Parallel UATA
PCI to some extent; it's limiting in some cases, but not in the majority of cases.

That's the way it should be. The advent of Gigabit over Cat5e is the paradigm shift for networking. From this point forward, we'll see the network as a non-throughput-limiting device for regular users, where before, the network was the limit and could potentially choke off the more expensive storage system.

And these jumps always happen.

It was the same with 100Base-T. The network was faster than the hosts.

Then for a while RAM was the bottleneck, then storage, then bus, then processor, then network, and so on.

So the only real bottleneck nowadays should be processor/disk I/O.