copying files over network problems (advanced)

TSCrv

Senior member
Jul 11, 2005
568
0
0
here's the problem: while copying files over the network from my pc to my server, the server hangs big time.

network: gigabit lan, intel gigabit adapter in the server, onboard marvell adapter in my machine, using a netgear 5-port switch. all ips are set up static by me because my server refuses to cooperate with me setting up dhcp (the static ips have been up and unchanged for about 2 years, so that's not the problem)

my pc has its hard drives in a hardware raid-0 config (2 drives)... my server has an OS drive and a software (through windows) raid-5 array (5 drives, 4 of which are scsi, 1 is ide). the server has 512 mb of ram

my pc is windows 2000, the server is win 2000 server edition.

transferring a folder containing backups of my stuff from my pc to the server goes just fine; files range from 1kb to about 50mb in size, the whole folder is about 2-4 gb

the problem arises when i try to transfer isos. a 700mb transfer gets about halfway and then comes to a dead halt. the server's hds are going nuts, and during this time the server can't come out of the screensaver, can't move the mouse, can't do ANYTHING. if i have taskmgr up on the performance page, wherever the graph was when the file transfer halted is where the graph stays until the transfer is canceled on my pc's end. a good description: when the transfer halts, it's as if a screenshot of my server was taken, and that's what you'd see for the next 10 minutes.

this happens with any iso; the bigger the file, the sooner it happens

i have zipped the isos on my pc with 200mb spanning, and it can copy about 2 files and then halt on the 3rd. so i cancel and try again

if i try doing 1 file at a time, a 200mb file transfers to the server in a little more than 10 seconds, and then the server's hard drives are busy for the next 5 minutes

what i think: as i copy the files over, they fill up the ram in the server, and it stops accepting more data when the ram is full, which locks the server and halts the transfer... because software raid-5 is one of the slowest raid configs out there, the data takes FOREVER to get written out of ram.

why would copying over the network store all the data in ram instead of writing it directly to the drive? i have no clue, but i can't proceed with what i'm doing until i get my data onto the server.

to answer some questions that might come up: i MUST run software raid-5 on my server for data redundancy. i have an old dell (1999), and the hard drives aren't even worth their weight because they aren't reliable. i don't have the cash to run raid 0+1, the hard drives fail too often to run raid-0, and making backups on dvds is out of the question (if i don't have cash to buy hard drives, i don't have cash to buy dvds every month)

also, i have never had any problems transferring stuff to the drives in the past. yes, it's slow because my array gets like 1.5mb per second write, but i want to know why it would cache the data instead of writing it directly, and why the system becomes completely unstable while writing...

any help? i need to get past this so i can move on to what i actually need to do. -thanks in advance
 

boomerang

Lifer
Jun 19, 2000
18,883
641
126
I shouldn't really comment because this is out of the realm of my experience, but the first thing that comes to my mind is AV software.

Screensavers have always caused me problems. I absolutely will not use them.

If nothing else, you'll at least get a bump out of this.
 

TSCrv

Senior member
Jul 11, 2005
568
0
0
no AV, norton REFUSES to install on server-based platforms

i don't use screensavers, but the default screensaver before you log in (the ctrl+alt+del screen) can't be turned off...

here's a little more info: when the ram is filled or whatever, you try to move the mouse and it doesn't move; even the clock doesn't change time after 5+ minutes.

bumps appreciated..
 

supagold

Member
Jun 21, 2005
60
0
0
Are you using Jumbo Frames on your gigabit network? If so, you need Jumbo Frame support on every device the frame touches. It's a long shot, but I thought I'd ask.
 

blemoine

Senior member
Jul 20, 2005
312
0
0
why do you have 4 scsi drives in an array with an ide drive? also, if your drives are always crashing, doesn't that sound like your problem right there? bad hard drives = constant headaches. i would at least eliminate any and all "bad" drives from my system.
 

TSCrv

Senior member
Jul 11, 2005
568
0
0
jumbo frames? in all my networking work i've never heard of it,
could this be it?

ahh, i see my intel adapters support jumbo frames, and i don't feel like going out to buy even more gigabit cards

from reading up on it, i doubt that's the problem. i think i have to find a setting to turn write caching off or something...
 

blemoine

Senior member
Jul 20, 2005
312
0
0
i hate to say this, but Jumbo Frames are not the problem. the problem is those crappy hard drives. if you don't have money to get new hard drives, i would suggest selling your Gigabit NICs and switch, buying 10/100 NICs and a switch, and using the money left over to buy 2 decent hard drives and mirror them. that should solve the problem
 

gsaldivar

Diamond Member
Apr 30, 2001
8,691
1
81
Check your Event Viewer logs for any signs of hardware/software malfunction.

I've never seen someone RAID-5 mixed-interface drives like that (SCSI + IDE)... definitely pick one interface and stick with it across your entire RAID array. 512MB seems a bit low for a server that's expected to handle large network file transfers; consider increasing that if possible. Make sure you're using all the latest driver versions, including those for your data bus chipsets and RAID controller chipset. It sounds like you've pieced these systems together out of spare parts - on the chance the problem is RAM-compatibility related, try swapping out the RAM and/or testing with only a single stick of RAM installed. Try swapping out your network cables in case you have a bad crimp somewhere. Post your results back to this thread.

Good luck! :beer::D
 

TSCrv

Senior member
Jul 11, 2005
568
0
0
tested the cable, it's fine there. well, the server hasn't been formatted in over 2 years, and it's been needing one...

as for mixed interfaces in raid-5: from my benchmarks a LONG time ago, the scsi drives are the limiting factor, sadly enough.

as for ram, i don't have any ECC sdram, so i'm forced to use a single stick of my best non-ecc memory, and the server complains about it. (i don't think that should matter tho)

selling the gigabit NICs and switch would yield me like $35... assuming i sold them for half of retail price... anyway, i can't do that because i also transfer from other computers running gigabit.

reading from my raid array is reasonably fast; writing is what totally kills it...

i would buy 2 decent hard drives, but i'm trying to make the best of what i've got. i don't have cash to buy even 20gb hard drives, let alone recreate an array of over 100gb of stuff. money is the limiting factor; performance comes second

here's a question, assuming i have some money to throw around (bday cash coming soon): what would be the best approach to making a NAS out of what i have (either in this old dell or another machine)? i have 2 200gb sata drives in my main machine; should i buy a sata raid card and put it in the server? buy some awfully expensive scsi drives and throw them into the server? buy a few ide drives and rig them to the server? (the server has no ide; i have an add-in card already, with no hd mounting, the drives just lying there slightly secured)

i wouldn't mind going raid-0/JBOD now that i have a feasible way to back up data, but it's all dependent on what i decide to do. truthfully, i'm starting to show my noobishness, meaning i'm starting to run out of ideas and i don't know the PROPER way to approach this situation...

i'm outta here, have to back up the array and recreate it as raid-0 instead of raid-5 to test if calculating the parity is the reason
 

dclive

Elite Member
Oct 23, 2003
5,626
2
81
Originally posted by: TSCrv
as for ram, i don't have any ECC sdram, so i'm forced to use a single stick of my best non-ecc memory, and the server complains about it. (i don't think that should matter tho)

reading from my raid array is reasonably fast; writing is what totally kills it...

what would be the best approach to making a NAS out of what i have (either in this old dell or another machine)?

i'm outta here, have to back up the array and recreate it as raid-0 instead of raid-5 to test if calculating the parity is the reason

Writing is where software RAID5 performance is typically poor; sounds like you've got a few problems, including:
1. Inherently poor RAID5 performance, particularly with an old (1999) CPU (Pentium II/450?) - that one CPU must handle all the XOR-ing.
2. Poor/unknown-quality hard drives (1.5MB/s is very low - typical for old, slow drives might be 20-30MB/s). Fix the drives - find a Fatwallet/AT Hot Deals thread and get one of those 120GB IDE drives for $9 AR, YMMV, PM, etc.
3. Your RAM type doesn't matter. 512MB is low, and more is better, but the core problem is that the drives' write performance can't keep up with the incoming data stream, so you get exactly the problem you're having. I suggest a single new IDE drive; forget about RAID for now. Or, if you want to get spendy, a single new SCSI drive, but that's a lot more $.
4. Confirm you have no issues with hardware interrupt sharing; old machines may have only marginal ACPI support and so may have an issue. Try a new IDE hard drive with no additional PCI cards in the machine, then try that same drive with a borrowed add-in PCI IDE controller.

In short, you're seeing exactly what I'd expect when the drive is junk and can't write fast enough.

You might look in the event log (system and applications) to see if anything (any errors or warnings) is happening.

I wouldn't bother with a NAS. If you have a new, fast, modern machine, cram the hard drives in there, or else sell them all and buy a single new drive that's bigger. 400GB drives are out and about these days; 160GBs are going out at firesale prices.

If you had 2003, I'd suggest Microsoft's Server Performance analysis tool, but it won't run on anything but 2003 (sneaky!), and really won't tell you anything you don't already know.
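To put a finer point on #1: software RAID5 means the host CPU computes a parity block, as the XOR of the data blocks, for every stripe it writes. A toy sketch in Python (purely illustrative - this is not how the NT volume manager is implemented, and the tiny 4-byte "blocks" are made up for the demo):

```python
# Toy RAID-5 parity demo: the parity block is the XOR of the data blocks.
# Real software RAID does this per stripe, on the CPU, for every write -
# which is where the CPU cost of software RAID5 comes from.

def parity(blocks):
    """XOR a list of equal-sized byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# One stripe across four "data drives" (the 5th drive holds the parity).
stripe = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
p = parity(stripe)

# Lose any one block: XOR-ing the survivors with the parity recovers it.
recovered = parity([stripe[0], stripe[1], stripe[3], p])
print(recovered == stripe[2])  # True - this is the redundancy RAID5 buys you
```

That per-byte XOR loop runs for every stripe written, which is why an old CPU drags software RAID5 write speed down so badly.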
 

TSCrv

Senior member
Jul 11, 2005
568
0
0
what you listed is pretty much the problem. actually it's a dell poweredge 2300, basically 2 500mhz p3s... i'm looking for more ram but atm can't find any...

when i said NAS, i meant a pc with hds crammed into it... >.<.... i would NOT go out and buy a box where that is its sole purpose...

in the event log, the only problems are dhcp errors, no write or array errors, which was what i was hoping for... after i get home from school i'll be throwing the thing at the wall and starting from scratch, because i spent last night copying everything on the server to my main pc... so format and remake time...

here's a question... if the theoretical max speed per drive in my scsi setup is 80mb per second, can the bus support 80mb/sec PER DRIVE simultaneously, or only 80mb/sec for all the drives combined?
 

dclive

Elite Member
Oct 23, 2003
5,626
2
81
It isn't 80MB/s per drive - it's 80MB/s per channel. Presumably you have one channel, hence 80MB/s total.

...which is plenty fast. If that's what you're getting, that isn't the slowdown. But I'm sure that's not what you're getting.
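The back-of-envelope arithmetic, using the numbers from this thread (Python, just for illustration):

```python
# Bus bandwidth is shared per channel, not per drive - but even shared,
# it's nowhere near the bottleneck here. Figures are from this thread.
bus_mb_s = 80.0        # one SCSI channel: 80 MB/s total
drives = 5             # all drives on the one channel share that
observed_write = 1.5   # MB/s the array is actually writing at

per_drive = bus_mb_s / drives
print(per_drive)                   # 16.0 MB/s each, even with all 5 streaming at once
print(per_drive / observed_write)  # >10x headroom over the observed write speed
```

So even in the worst case the channel gives each drive roughly ten times the bandwidth the array is actually using; the bus isn't the slowdown.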
 

TSCrv

Senior member
Jul 11, 2005
568
0
0
so is it 80mb per 16 IDs (15 drives plus the controller), or is a channel only 8?

i'm getting like 1.5mb write speed for the WHOLE thing. expect an update with format results (no net with the server down)
 

TSCrv

Senior member
Jul 11, 2005
568
0
0
OUCH.... just started to format, and before doing so i think i found the reason: a drive that was "going" bad is now bad....

i looked at the timestamps, and the FIRST problem is AFTER my first post (by 4 hours), which is very strange...

THIS is why i used RAID-5... the event log shows failed redundancy

hmm, the drive is missing atm, so that explains the bad performance now, but it doesn't explain the bad performance BEFORE i was getting errors, and i did look through about 2 weeks of the event log and found nothing but dhcp errors

hopefully solved; if not, i'll be back
 

dclive

Elite Member
Oct 23, 2003
5,626
2
81
Originally posted by: TSCrv
so is it 80mb per 16 IDs (15 drives plus the controller), or is a channel only 8?

i'm getting like 1.5mb write speed for the WHOLE thing. expect an update with format results (no net with the server down)

Channel = one physical string of disks attached to a parallel cable going to the SCSI controller's ...erm...channel input pins.
 

TSCrv

Senior member
Jul 11, 2005
568
0
0
stupid me, i see.... well, i put the last 4 drives in a raid-0 config and it STILL pukes when i'm transferring files, so it's not an array problem.... should i use the usual "blame it on M$" excuse??? looks like it's a good time to port to linux >.<

suggestions?
 

dclive

Elite Member
Oct 23, 2003
5,626
2
81
How is this an MS problem? You'd have the same problem in Linux! The issue is that your drive can't write fast enough to keep up with your network, so the OS buffers all it can to RAM and then goes nuts when it runs out of RAM.
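The numbers you posted earlier fit that picture, by the way. A rough back-of-envelope in Python (all figures are the ones reported in this thread, not measurements):

```python
# Back-of-envelope: the network fills RAM far faster than the array drains it.
file_mb = 200.0     # the 200MB spanned zip file
net_mb_s = 20.0     # rough effective network rate implied by "10 seconds" for 200MB
disk_mb_s = 1.5     # observed array write speed
ram_mb = 512.0      # server RAM (some of it used by the OS, so less is free)
iso_mb = 700.0      # the ISOs that stall halfway

transfer_s = file_mb / net_mb_s   # how long the copy *appears* to take
flush_s = file_mb / disk_mb_s     # how long the drives grind afterwards

print(transfer_s)    # 10.0 -> "transferred in a little more than 10 seconds"
print(flush_s / 60)  # ~2.2 minutes of disk activity after the copy "finishes"
print(iso_mb > ram_mb)  # True - a 700MB ISO can't all fit in RAM, so it stalls mid-copy
```

A 200MB file fits in the cache, so the copy looks instant and the drives grind for minutes afterwards; a 700MB ISO overflows whatever RAM is free roughly halfway through, which is exactly where your transfers die.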

Take each drive individually and run a performance monitoring program of some sort on it. Then add another drive to the chain and test it and the previous drives. Eventually you'll figure out which is your bottleneck.
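If you can't find a benchmarking tool, even a crude timed sequential write will expose a bad drive. A minimal sketch in Python (the file name is hypothetical - point TARGET at the drive under test, and bump SIZE_MB well past your RAM size for an honest number):

```python
# Crude sequential-write benchmark: time writing SIZE_MB of zeros, report MB/s.
# Run it once per drive (change TARGET) and compare; a failing drive stands out.
import os
import time

TARGET = "testfile.bin"   # hypothetical path - put this on the drive to test
SIZE_MB = 64              # increase well past your RAM size to defeat caching
CHUNK = b"\x00" * (1024 * 1024)  # 1 MB per write

start = time.time()
with open(TARGET, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())  # force data to disk so we time the drive, not the cache
elapsed = time.time() - start

rate = SIZE_MB / elapsed
print("%.1f MB/s" % rate)
os.remove(TARGET)         # clean up the test file
```

It's not a substitute for a real benchmark (no seek or read tests), but it's enough to tell a 1.5MB/s drive from a 20MB/s one.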
 

TSCrv

Senior member
Jul 11, 2005
568
0
0
rgr that, i was hoping to use the blame-M$ excuse... i'll look around for some hd benches. does anyone have any that would get the job done?