
CDR server NIC bandwidth slow

Dorkchop

Junior Member
Hey all,

Apologies for the length -- figure more descriptive is better than less.

I have a CDR server I'm working on, and have most bugs worked out, but am still fighting one that is somewhat odd.

I have a P4 2.4GHz Celeron, 1 Gig (2x512) of memory, on an Asus P4PE-type board running Win2k SP3. There are 15 Lite-On CDR IDE drives connected via FireWire-IDE adapters to 4 PCI FireWire cards (4 drives on 3 of the boards, 3 drives on the other). I have 6 WD 40 Gig UDMA5 hard drives connected in master/slave fashion to a HighPoint RocketRAID 404 quad-channel PCI controller (six drives across the first 3 channels). Note, I'm **NOT** using any RAID capabilities of the card. They are all independent -- think of it like a plain quad-channel IDE controller. I also have a single drive connected to the onboard primary IDE controller, and a DVD-ROM connected to the onboard secondary IDE. I have an ATI AGP video card (Rage something). Finally, I have an Intel Pro/100 100Mb Ethernet adapter. This whole thing is currently driven by 3 300-watt power supplies, evenly distributing the load, all in one monster case.

Yup, pretty cool system 🙂

The system is networked through a 100Mb switch.

Using Padus DiscJuggler, I'm able to burn 15 DIFFERENT CD images simultaneously at 8x speed. I do this by putting 5 images on each WD drive on each channel of the HP controller. So, maybe channel 1 primary will have 5, then channel 2 primary will have 5, etc. I tried a big-ol RAID 0 with 4 drives, and it couldn't supply data fast enough for 15 different images, I guess... it would bottom out at approx 2-4x average burn speed for each of the 15 CDR drives.

Anyway, during this full burn time, the system reports about 500-600 Megs of RAM free, and CPU utilization is right around 20-30%. Virtual memory (page file) use is right around 2.5% (it's located on that single boot drive; size is 1.5 Gig).

So, here is my question/frustration. Whenever I attempt to use a network station to push files onto any unused drive on my CDR server while doing a full burn (15 drives, all with different source image files, at 8x), I get pitiful transfer rates -- like 128KB-512KB/second -- that's a maximum of 1/2 Meg per second?!? Note that during that time, system resources really don't change -- CPU stays about the same, memory about the same, and the burn speeds don't change -- they stay at 8x. Also note that no other system was using either my workstation's or the CDR server's network bandwidth.

My initial thought was that maybe the PCI bus was simply saturated. I'm not sure WHY that would be, when it's supposed to handle up to 133MB/sec, and I'm pushing 0.15 x 8 x 15 = 18MB/sec of writes to the CDR drives, plus roughly the same again for the reads, so 36MB/sec total. Figure a little more for other overhead, and maybe 40MB/sec max.
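For what it's worth, that back-of-the-envelope math checks out; here is the same arithmetic as a quick sketch (the 150 KB/s figure is the standard 1x CD-R rate, and doubling for the read side assumes the data crosses the bus once on the way in and once on the way out):

```python
# Rough PCI bandwidth budget for the burn workload described above.
# 1x CD speed is the standard 150 KB/s; drive counts come from the post.

def burn_bandwidth(drives, speed_x):
    """Aggregate write bandwidth in MB/s for `drives` burners at `speed_x`."""
    return drives * speed_x * 150 / 1000  # 150 KB/s per 1x, expressed in MB/s

writes_8x = burn_bandwidth(15, 8)    # 18.0 MB/s to the burners
total_8x = writes_8x * 2             # 36.0 MB/s once the reads are counted

writes_12x = burn_bandwidth(15, 12)  # 27.0 MB/s
total_12x = writes_12x * 2           # 54.0 MB/s

# Both totals sit well under the ~133 MB/s shared 32-bit/33 MHz PCI ceiling.
print(total_8x, total_12x)
```

So raw bus bandwidth alone doesn't explain the slowdown, at either burn speed.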

So, while still running a full burn, I thought I'd just take a spare image file I had stored on my primary boot drive and copy it right back to that same drive. Well, wouldn't you know, that copied at a full rate of approx 25MB/sec (figure 14-15 for the read and 10-11 for the write). I'm guessing that's full blast for that drive's capabilities.

And that is still well within reason for the PCI bus. Oh, and just for fun, I clocked 105MB/sec to 110MB/sec back when I did have that RAID 0 using those drives. So I know the bus can at least partially handle those rates.

Soooooo.... why would my net bandwidth suck while my file bandwidth is fine during the same state of processes? I did note that my net bandwidth increased roughly in inverse proportion to the number of CDR jobs going -- ie, the more jobs I aborted, the better my rate got, until I had the full 10MB/sec.

I suppose the answer could somehow be related to "why can I do 15 jobs at 8x, and not 15 jobs at 12x?" -- that's only 27MB/sec one way, so 54MB/sec for the read and write; figure 60MB/sec max for the entire operation. But I think this latter question is due to limitations of the individual hard disks.

My other thought leads me to believe that perhaps I'm running into some sort of PCI bus interrupt saturation... it seems like copying files locally requires less interrupt overhead than network access...
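A rough sketch of why network I/O would be so much more interrupt-heavy than a big local file copy. The per-transfer sizes here are assumptions (one interrupt per ~1500-byte Ethernet frame with no coalescing, one per ~64 KB DMA transfer for a UDMA disk), but they're in the right ballpark for hardware of this era:

```python
# Estimate interrupt rates for the two workloads.
# Assumptions (not measured): one NIC interrupt per 1500-byte frame,
# one disk interrupt per 64 KB DMA transfer.

def interrupts_per_sec(bytes_per_sec, bytes_per_interrupt):
    return bytes_per_sec / bytes_per_interrupt

nic_ints = interrupts_per_sec(100e6 / 8, 1500)    # full 100 Mb/s line rate
disk_ints = interrupts_per_sec(25e6, 64 * 1024)   # the 25 MB/s local copy

print(round(nic_ints), round(disk_ints))  # ~8333 vs ~381
```

If those ~8,000 per-packet interrupts per second have to compete with the interrupt streams of 15 FireWire burners, it's plausible the NIC gets serviced late and TCP throughput collapses -- which would also fit the observation that killing burn jobs freed up network bandwidth roughly in proportion.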

I'd appreciate any info on this...Hopefully I was descriptive enough 🙂

Thanks!
Jon
 
sorry for the short post...

check your interrupts per second and your work queue.

With no SCSI (quick read, so tell me if I'm missing something) I'm guessing you are interrupting the proc like crazy.
 
spidey, I'll take a look at the interrupts and queue... it does seem like this has something to do with processor interrupts/CPU time slices.

n0cmonkey, speeds are great -- 10MB/sec throughput (full 100Mb) on the NIC when copying during non-burn time.

Thanks!
Jon
 