Is a gigabit network the answer?

Jwyatt

Golden Member
Mar 22, 2000
1,961
0
76
Hi folks.
A customer of mine runs an insurance office in a small town in Virginia. They have three computers set up as follows:
Dell PIII 600 with a 100Mb NIC. This computer acts as both server and workstation.
Dell PIII 450 or 500? 100Mb. Workstation.
Dell Optiplex 900MHz. 100Mb. Workstation.

All of these are desktop machines. The 600 holds all the databases; the other two computers access them through a simple shared drive mapped to the folders they need.

Some of the databases for the 10 or so programs they run are in the 40-80MB range. The problem is that when either of the two workstations accesses the needed files, there's a delay of ~30 seconds or more opening them. While this doesn't bother me, it does bother them.
The main computer, which is acting as the server, is usually in use, but it's rare that both workstations are used at the same time. It might happen occasionally, but my brother is the owner and I know he's not doing any work ;)

The whole network runs through a Linksys 100Mb switch. There's also a networked printer in there as well.

I would have put a server in for them months ago, but there are a couple of software packages they must run that will NOT run on Win2K or NT servers. I could possibly build a server and just use simple file sharing, but I'm trying to figure out the best bang for the buck.

Will gigabit help them much? I haven't used it yet, so my opinion doesn't carry much weight. Would a server be the best cure for these blues? I don't favor a server at this point because I feel the bottleneck right now is the network.

Any help is appreciated.

Thanks,

Jerry





 

nightowl

Golden Member
Oct 12, 2000
1,935
0
0
Gigabit is most likely not the answer by itself. First, if you move to gigabit Ethernet you'll run into other limiting factors, such as the hard drive. In fact, the hard drive is probably what's limiting you in this situation, so a faster drive would likely help more than a faster network. Lastly, the Windows 9x networking implementation is not the best. I've seen a WinMe machine barely push 10Mbps on a 100Mbps connection; the same computer with XP installed now does 70+Mbps on the same network.
 

cmetz

Platinum Member
Nov 13, 2001
2,296
0
0
Pulling very large files over the network constantly is just going to be bad for performance. A gigabit network will help, but not by a factor of ten, and it will not fundamentally change the problem (so maybe it takes 15 seconds instead of 30...).

This sounds like a job for Terminal Services, except you say that the applications won't run on 2K/XP? Are you absolutely sure it's impossible, even with M$'s application compatibility tools? If you're very sure, then one approach might be a dedicated server running 98 instances under VMware, with VNC or X for remote display. But that would be a tad kludgy.

You say databases. Is there a way to get this to be client-server, like SQL?

Another option, possibly very limited on Windows, would be to find a more advanced network filesystem that supports local caching and partial updates; Coda, for example. Windows is notoriously only-what-MS-gives-you about filesystems, so you might not have this option, but maybe something could be built (e.g., a script that locks the file on the server, rsyncs it to the client, runs the program, and when the program exits, rsyncs the file back to the server and removes the lock).
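Something like this, roughly (Python purely for illustration; the paths, the share name, the app name, and the assumption that rsync is available on the clients, say via Cygwin, are all mine, not anything tested):

    import os
    import subprocess
    import time

    # All of these names are placeholders for whatever the real setup uses.
    SERVER_DB = "//server/share/policies.db"          # shared database file on the server
    LOCK_FILE = SERVER_DB + ".lock"                    # crude advisory lock next to it
    LOCAL_DB = r"C:\work\policies.db"                  # local working copy on the client
    PROGRAM = [r"C:\apps\legacyapp.exe", LOCAL_DB]     # the legacy app, run against the copy

    def acquire_lock():
        # O_CREAT|O_EXCL fails if the lock file already exists, so only one client wins.
        while True:
            try:
                os.close(os.open(LOCK_FILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY))
                return
            except OSError:
                time.sleep(5)  # someone else has the database; wait and retry

    def run_with_db():
        acquire_lock()
        try:
            subprocess.check_call(["rsync", "-t", SERVER_DB, LOCAL_DB])   # pull the latest copy
            subprocess.check_call(PROGRAM)                                # work on it locally
            subprocess.check_call(["rsync", "-t", LOCAL_DB, SERVER_DB])   # push changes back
        finally:
            os.remove(LOCK_FILE)  # always release the lock, even if the app crashes

    if __name__ == "__main__":
        run_with_db()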
 

amdskip

Lifer
Jan 6, 2001
22,530
13
81
I would build another computer to act as the server. How much RAM does the server have right now? What hard drive is in it? IMHO, the hard drive and the RAM seem to be the bottleneck here. Gigabit is not really a solution.
 

randal

Golden Member
Jun 3, 2001
1,890
0
71
A good solution is to load up the server with RAM and, on boot, create a big ramdrive and copy the databases into it. Then, using Windows Task Scheduler, copy the DBs back to the hard drive every 5-10 minutes. It's not ideal from a stability/crash-recovery standpoint, but you can partly mitigate that by using incrementing file names and storing the copies in separate places on the disk. Performance will be really high, since serving files is then as fast as the machine can move data across the PCI bus and out the NIC.

We've done this before when we were a tiny company and had a database that everybody was working with (nothing was distributed at the time) ... loaded the box up with 512MB of RAM, made a 384MB ramdrive, copied everything to that and shared it ... synced everything back once every 15 minutes ... like I said, it was really quick. We had it go down once, so everybody lost about 10 minutes of work; no biggie.

edit: if you have 100Mb NICs, they should be able to hit around 9MB/sec ... so loading the 80MB database, if going full blast, could take as little as 9 seconds ... you'll be hard pressed to beat that with cheap commodity hardware.
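For what it's worth, the scheduled copy-back job can be a tiny script that Task Scheduler fires every few minutes. A rough sketch (Python here purely for illustration; the drive letters, folder names, and snapshot count are made up):

    import glob
    import os
    import shutil
    import time

    RAM_DRIVE = r"R:\databases"      # hypothetical ramdrive folder holding the DBs
    BACKUP_ROOT = r"D:\db_backups"   # on the physical disk
    KEEP_COPIES = 6                  # rotate a handful of snapshots

    def snapshot():
        stamp = time.strftime("%Y%m%d_%H%M%S")
        dest = os.path.join(BACKUP_ROOT, stamp)
        shutil.copytree(RAM_DRIVE, dest)  # full copy of the ramdrive contents

        # Prune old snapshots so the disk doesn't fill up.
        snaps = sorted(glob.glob(os.path.join(BACKUP_ROOT, "*")))
        for old in snaps[:-KEEP_COPIES]:
            shutil.rmtree(old)

    if __name__ == "__main__":
        snapshot()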

$.02
randal
 

Jwyatt

Golden Member
Mar 22, 2000
1,961
0
76
Wow. I'm so glad I didn't have them run out and spend $500-1000 on network gear only to find it's no faster than before!

Thanks for the advice, guys. I'll definitely use it.

 

Jwyatt

Golden Member
Mar 22, 2000
1,961
0
76
Originally posted by: cmetz
Pulling very large files over the network constantly is just going to be bad for performance. A gigabit network will help, but not by a factor of ten, and it will not fundamentally change the problem (so maybe it takes 15 seconds instead of 30...).

This sounds like a job for Terminal Services, except you say that the applications won't run on 2K/XP? Are you absolutely sure it's impossible, even with M$'s application compatibility tools? If you're very sure, then one approach might be a dedicated server running 98 instances under VMware, with VNC or X for remote display. But that would be a tad kludgy.

You say databases. Is there a way to get this to be client-server, like SQL?

Another option, possibly very limited on Windows, would be to find a more advanced network filesystem that supports local caching and partial updates; Coda, for example. Windows is notoriously only-what-MS-gives-you about filesystems, so you might not have this option, but maybe something could be built (e.g., a script that locks the file on the server, rsyncs it to the client, runs the program, and when the program exits, rsyncs the file back to the server and removes the lock).

I'm not well versed in databases yet. Is there a way to tell whether I can get this to be client-server based? Should I call the software providers?

Originally posted by: amdskip
I would build another computer to act as the server. How much RAM does the server have right now? What hard drive is in it? IMHO, the hard drive and the RAM seem to be the bottleneck here. Gigabit is not really a solution.

The server/workstation has, I think, 128MB of RAM. I believe it has a 5400RPM drive as well.

One thing to note: I just installed the 900MHz machine the other day. Picked it up cheap and replaced the 166MHz box they were using. I think it has a 7200RPM drive, and I know it has 256MB of RAM. I could upgrade that to 512MB and use it as the server.

I think what I'll do is take one of my computers, load Win2K Server on it with a RAID setup, take it up there, run some tests to see how much faster it is, and see if I can get this software to actually run on 2K.

Question: which server software should I try? There are several different ones out there. Should I put them on a domain? I downloaded the new Server 2003 and it looks good, but it seems to have a lot in it that they really don't need. I'd also hate to install it and have it go down on them 12 months later!

Once again, any help is appreciated.

Thanks for the replies so far.

 

Oaf357

Senior member
Sep 2, 2001
956
0
0
What you need is more RAM and a faster hard drive, with a decent proc (1GHz should be good enough).

Make that a dedicated server (Windows 2000 Server, or even Linux) and you should be okay. If the network is switched rather than hubbed (forgive me, I've only skimmed the thread), your users should be much happier. Remember, when using Windows, disable any service that isn't needed.
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
I've seen 9x kill network performance too. I don't think a faster processor will help you much; it sounds like you're entirely network or disk bound, and my guess is network bound. A 60MB file transferred over a network that's only delivering 30Mbps takes about 16 seconds. You'd get much better performance with a client/server system, but it sounds like that's just not an option with the software you're running.

I wouldn't rule out a gigabit network, because it's actually not that expensive a solution. Other solutions may require a lot of setup time or expensive MS server licenses. You might also consider a FireWire network; that would give you 400Mbps and be the cheapest option, though it only works over short distances, of course.

What are these programs that you can't run on 2K/XP? I don't understand your situation, because if you're only using the server for simple file sharing, you shouldn't have to run any of the programs on it at all. Or do you mean the server needs to run some 2K/XP-incompatible software while it's serving in its workstation role?
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
One more thing. Get and run NetIQ Qcheck so you can benchmark your network. That way you'll know for sure what's going on.
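If you can't get hold of Qcheck, even a crude two-machine TCP push will give you a ballpark number. A quick sketch (Python, nothing to do with Qcheck itself; the port number and transfer size are arbitrary):

    # Crude TCP throughput check. Run "python nettest.py server" on one box,
    # then "python nettest.py client <server-ip>" on another.
    import socket
    import sys
    import time

    PORT = 5001                # arbitrary test port
    TOTAL = 64 * 1024 * 1024   # push 64 MB
    CHUNK = b"\0" * 65536

    def server():
        srv = socket.socket()
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        received = 0
        while True:
            data = conn.recv(65536)
            if not data:
                break
            received += len(data)
        conn.close()
        print("received %d MB" % (received // (1024 * 1024)))

    def client(host):
        sock = socket.create_connection((host, PORT))
        start = time.time()
        sent = 0
        while sent < TOTAL:
            sock.sendall(CHUNK)
            sent += len(CHUNK)
        sock.close()
        print("%.1f Mbit/s" % (sent * 8 / (time.time() - start) / 1e6))

    if __name__ == "__main__":
        if sys.argv[1] == "server":
            server()
        else:
            client(sys.argv[2])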
 

cmetz

Platinum Member
Nov 13, 2001
2,296
0
0
>Should I call the software providers?

YES! Several things about your description of how things work here (for example, not running under 2K/XP) sound kinda odd, and the whole pulling-the-database-file-over-the-network-to-clients thing isn't a happy place. It wouldn't surprise me if you could call up the vendor and they'd be happy to help you fix the problem. Of course, they'll probably try to sell you the latest version in the process, but at least you'd learn more about what could be done.

zephyrprime, I disagree that he's network bound. ~80MB file in ~30s? 80*8=640Mb/30s = 21.3Mb/s. Even allowing for overhead, there's plenty of capacity at 100Mb/s going unused.

Disk... 80MB/30s = 2.66MB/s. I don't know what kind of disks he's got, but a 5400RPM EIDE of the era related to the processors in those boxes should be able to sustain ca. 20MB/s easily. Again, plenty of headroom.
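The same arithmetic as a snippet, in case anyone wants to plug in their own file size and timing:

    # Rough implied-throughput numbers for an 80MB file that takes 30 seconds to open.
    size_mb = 80.0
    seconds = 30.0
    print("network: %.1f Mbit/s used (out of ~100 available)" % (size_mb * 8 / seconds))
    print("disk:    %.1f MB/s used (a 5400RPM drive can sustain ~20)" % (size_mb / seconds))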

Getting some diagnostic tools like the one zephyrprime suggests would be a good idea. Why speculate when you can measure? I'm sure there are free disk/net utilization tools floating around.

There are three things I'd blame for the bad file share/copy performance:
1. Slow server. Win9x as a file server? Yeah, it does work, but...
2. Slow client. Win9x isn't much better as a client.
3. The SMB file sharing protocol really sucks, and never gives excellent performance.

So, for example, if the poster were to set up a dedicated Linux Samba file server, I think things would speed up. The same goes if the client machines were running a newer version of Windows. I believe at its core this is a software problem, not a hardware problem. Throwing better hardware at the problem (a faster disk, a faster network) might help a little, but it's not going to help a lot, IMO.

zephyrprime, FireWire has multiple speeds, 100Mb/s and 400Mb/s being common, and it turns out that a lot of adapters and drivers really run at 100Mb/s. Also, FireWire really isn't made to be a network technology; though it works, it's not as good as a real Ethernet adapter for moving IP packets. The poster would almost certainly be better off with GigE than FireWire.

If the original poster did want to do GigE, Intel Pro/1000MT PCI boards can be had for $41 each from Newegg, and with three hosts you could just get four adapters, put two in the server and one in each client, and run it point-to-point. While a kludge, this would let you avoid buying a GigE switch, which is still a bit expensive.
 

ivwshane

Lifer
May 15, 2000
33,387
16,787
136
Couldn't you just put two NICs in the server and run them independently to each of the machines?
And you should add more RAM as well. Those two upgrades could be done for under $100!

Problem solved;)
 

cmetz

Platinum Member
Nov 13, 2001
2,296
0
0
ivwshane, yes you can, and at this size it might be by far the most cost-effective solution.
 

Oaf357

Senior member
Sep 2, 2001
956
0
0
Originally posted by: ivwshane
Couldn't you just put two NICs in the server and run them independently to each of the machines?
And you should add more RAM as well. Those two upgrades could be done for under $100!

Problem solved;)

Not a bad idea, until someone has to restart or the hard drive crashes; then everything is FUBARed. But it's definitely a good option if you're looking for cost effectiveness.
 

Jwyatt

Golden Member
Mar 22, 2000
1,961
0
76
Cost is a concern, but not the most important factor. If I could get the systems running faster, $1000 isn't much.

I don't feel two NICs will help much. It isn't often that all three machines are accessing anything at once; usually it's the server/desktop and one other workstation.

I would love to install a Linux or BSD file server, but with my limited experience with Linux and zero experience with BSD, I feel it would be more bother than help if problems arose. The office is 75 miles from me, and I usually can't drop everything and run up there when needed.

I'm going to install Linux on a spare computer I have here tonight. I might try some others.

Any suggestions on which file server software would be the most user-friendly or easiest to learn?

Thanks again for all of your comments.

Jerry
 

cmetz

Platinum Member
Nov 13, 2001
2,296
0
0
Jwyatt, Red Hat Linux is probably your best bet, for the practical reasons that it's easily available (free download, or buy it in B&M stores), and you can buy it with a decent manual and tech support, or buy third-party books on how to use it. Random Linux how-to books are more often than not really written for Red Hat.

As for the support, you can put a modem on the server and get in from remote in a pinch, or you can log in over the 'net if they have an Internet connection. That's the nice thing about UNIX ;)

There is a learning curve, but it'll serve you well in your travels to put in some time and learn it. Red Hat has done a pretty decent job of packaging things up and making them more GUI-driven and approachable for people converting over from Windows. Do a Red Hat install, select the server option, make sure to deselect all the stuff like mail/news/SQL that you aren't using, and you should be in okay shape. I think (?) there's a GUI tool to configure Samba, the SMB server.
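For reference, the share side of smb.conf is only a few lines. Something like this (the share name and path are placeholders, not taken from the poster's setup):

    [global]
        workgroup = OFFICE
        server string = Office file server
        security = user

    [insurance]
        comment = Shared database folder
        path = /srv/insurance
        read only = no
        browseable = yes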

I would definitely suggest you work it all out at home FIRST.

The two gigabit NIC suggestion was basically a cost optimization to avoid a gig switch. D-Link has an 8-port gig switch for about $300. Intel Pro/1000MT boards are $41 from Newegg. You'd have to buy a NIC per system anyway to connect everything gigabit, so the idea was that you could avoid the cost of a switch by putting more NICs into the server and directly connecting everything to that box using crossover cables (actually, 1000BaseT is auto-crossover, so a plain cable will work too). The right way to do it is to use a switch, though.

A possibly lower cost approach would be to get a switch with a bunch of 10/100 ports and one or two 10/100/1000 ports; there are many of those available. Run the gig port to the server, and the 100 ports to the clients. Figure the server is going to hit the network speed bottleneck before the clients, since it has to send out data for all the clients.

But still, I don't think the network is your main problem, as you're not exceeding the potential throughput of 100BaseTX.
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
Originally posted by: cmetz
zephyrprime, I disagree that he's network bound. ~80MB file in ~30s? 80*8=640Mb/30s = 21.3Mb/s.
I know such bad performance would seem impossible, but I've seen Win9x's horrible network performance and the effect that crummy NICs can have, and believe me, there are many setups that can't get anywhere near the 100Mbps promised by the hardware. The network problem in 9x/Me is a known problem, and Microsoft only promised to fix it in the next version of NT (which was 4.0 or 2K, I can't remember).

And I agree with you on Firewire not being a very good solution. I only mentioned it so every option would be on the table.
 

TW2002

Member
Jan 9, 2002
57
0
0
Getting mighty technical for a 3 node network with a couple of DB apps. :)

The problem is definitely in the app configuration. If these are MS Access-based applications, they are unfortunately just like this. You may need to run a compact/repair on them (and do this regularly). I've also found that creating a mapped drive to the share hosting the DB files (and reconnecting it at login) can overcome the delay. It's hard to say without knowing specifically what kind of application we're talking about, but with what you've described I doubt your network is saturated or the PCs are bottlenecks. What is the latency for file transfers on the network? What is the local performance of the applications?
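If you go the mapped-drive route, the mapping can be made persistent so it reconnects at every login, e.g. (the share name here is just a placeholder for whatever the real share is called):

    net use X: \\server\databases /persistent:yes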
 

chsh1ca

Golden Member
Feb 17, 2003
1,179
0
0
Originally posted by: cmetz
>Should i call the software providers?
YES! Several things about your description of how things work here (for example, not running under 2K/XP) sound kinda odd.

Just as an aside: working for a company in the financial services business, I can vouch for the fact that it's one of the most backward industries out there. It wouldn't surprise me if this software were required for their insurance business, and it also wouldn't shock me if it were something like a FoxPro 2.x app that needs mapped drives to function. :)

In this instance you're likely dealing with legacy software, and if that's the case, a faster processor et al. shouldn't cause any problems. From the sound of the setup, you could install Win2K on the file server, since I'd bet it's ONLY serving files and not actively running any of the software itself. Then again, not knowing the specifics, I can't really say for sure either way.

Being in a position of supporting legacy software is time consuming and annoying to say the least.
 

Jwyatt

Golden Member
Mar 22, 2000
1,961
0
76
A few of the apps are older legacy software. It's funny: when he first installed the software, he was wondering where the mouse went! hehe

They're all text based. A couple of the newer companies, such as Progressive and another smaller one, have decent software, but Erie(sp?) has some of the oldest, most bass-ackwards software I have dealt with.

I downloaded a BSD distro last night. I'm going to load it on a spare machine tonight or tomorrow. I'll try Red Hat next on another machine, but until I find one I'm comfortable maintaining, I'll probably hold off on installing anything at the office.

All your help has been appreciated. My next thread might be something along the lines of "WTH is )(#%)$ in BSD?" or "Why in the hell would anyone want to use this Red Hat crap!" But I'm open minded and will give any other OS a chance. God knows my bro isn't going to pay $2K for an operating system just yet.

Thanks again..


 

chsh1ca

Golden Member
Feb 17, 2003
1,179
0
0
Well, for server use I'd recommend going with Slackware over Red Hat. Don't install a GUI either; it defeats the purpose of using a Unix. :)

At any rate, I'm glad you understand you shouldn't put something into production that you're not comfortable maintaining. Your clients are pretty lucky; I know someone who sold a client a copy of Win2K Pro without first learning its ins and outs ("it's just a more stable Windows 95, right?"), managed to mess up pretty badly, didn't update it regularly, and just made himself look like a fool. :)

Anyhow, if you need any help, I'd be willing to help you out. Email me at my profile's address, or visit irc.loonyservices.com #general and ask there during the day; there are a couple of knowledgeable types who hang out there. :)