Odd LVD SCSI performance [EDIT: the plot thickens...]

Sir Fredrick

Guest
Oct 14, 1999
4,375
0
0
I have an ASUS P2B-DS which has integrated SCSI based on the Adaptec 789x chip.

I have:
IBM 18ES DNES 9.1 GB 7200RPM
Quantum Atlas 10K 18GB 10000 RPM
Both of these drives are LVD, properly terminated on an LVD cable, and recognized by the Adaptec controller as LVD at bootup.
I'm running Windows 2000.
Also on this system, on the narrow 50pin channel are:
Plextor Ultraplex 40MAX
Plexwriter 820

The problem:

Using Adaptec's EZ-SCSI 5.01a SCSIbench, each drive gets about 26MB/sec transfer speed when tested separately, but if they are tested simultaneously, their speeds drop nearly in half to about 15MB/sec.

Now perhaps I am under the wrong impression here, but I thought that SCSI devices could share bandwidth nicely. LVD should give these drives 80MB/sec of bandwidth to share, but it seems like they're sharing only 30MB/sec.

Any ideas why this might be? Thanks in advance :)
 

Radboy

Golden Member
Oct 11, 1999
1,812
0
0
The 80MB/s number is the theoretical maximum. In reality, you have certain "overhead" requirements, which lower the potential data transfer to .. I'm not sure .. but 65MB/s might be a more realistic number.

Then, if you look up each individual drive at the manufacturer's web site, you might find that their respective max STRs are lower than what EZ-SCSI shows. For example, I don't think the ES drive series is capable of sustaining anything close to 26MB/s. Okay, just checked IBM's site .. they say the 18ES sustains 12.7-20.2 MB/s, so 16MB/s would be roughly average. Benched an 18LZX here for comparison.
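
For what it's worth, that 16MB/s figure is just the midpoint of IBM's quoted range (a trivial check, nothing more):

```python
# Midpoint of IBM's quoted sustained-transfer range for the 18ES (MB/s):
low, high = 12.7, 20.2
print((low + high) / 2)  # 16.45 -- roughly the 16MB/s average cited above
```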

Maybe you should try HD Tach for more accurate HD benching.
 

Sir Fredrick

You're right, the 18ES is not that fast for normal transfers, but I was specifically trying to see what the combined max transfer speed would be, so I set it to the maximum transfer size (64KBytes) and chose "same sector I/O".

I would bench with HDtach, but it won't test the drives simultaneously.
 

Radboy

If each drive can sustain an average of 15MB/s, then it seems reasonable that both, together, can sustain 30-ish.
 

Sir Fredrick

Yes, but with this very same test, each drive individually sustains 26MB/sec. When combined, that's cut in half.

See, what the test is doing is reading the same sector over and over, so I believe the drive puts this data in its buffer. This makes it a test not of how fast the drives can read data, but of how fast they can transmit data from their cache, which is exactly why they are on par. The Atlas 10K is faster in sequential reads, but the two are almost exactly the same when it comes to same-sector reads.

The only reason I can see why their performance on this test would be cut exactly in half while both are being tested simultaneously is if they were limited to about 30MB/sec, which had to be shared between them.
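
That "shared 30MB/sec" hypothesis can be sketched as a toy model (purely illustrative; `shared_throughput` and the bus figures are my own assumption, not anything SCSIbench reports):

```python
def shared_throughput(solo_rates_mb, bus_ceiling_mb):
    """Toy model: each drive gets an even share of the bus ceiling,
    capped at what it could sustain alone."""
    share = bus_ceiling_mb / len(solo_rates_mb)
    return [min(r, share) for r in solo_rates_mb]

# Two drives that each burst ~30MB/sec alone:
print(shared_throughput([30.0, 29.3], 80))  # [30.0, 29.3] -- no contention on a true 80MB/s bus
print(shared_throughput([30.0, 29.3], 30))  # [15.0, 15.0] -- halved, matching the ~15MB/sec observed
```

Only the 30MB/sec ceiling reproduces the halved results; an 80MB/sec bus would leave both drives untouched.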

Here are my test settings:
Transfer Size - 64KBytes
Transfer type - Same Sector I/O

Here are my test results:

IBM:
This time I didn't get anything lower than 30MB/sec; it ranged from about 30MB/sec to 30.6MB/sec

Atlas 10K:
From about 29MB/sec to 29.3MB/sec


IBM & Atlas 10K:
Both getting from 15.2-15.4MB/sec
actual values when I stopped the test:
IBM: 15296
Atlas: 15264

Total Combined Throughput: 30560 KBytes/sec
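
The reported figures are at least internally consistent; a quick sanity check (a throwaway sketch, not part of SCSIbench):

```python
# SCSIbench figures from above, in KBytes/sec.
ibm_alone, atlas_alone = 30600, 29300      # upper end of the solo same-sector runs
ibm_both, atlas_both = 15296, 15264        # simultaneous run

print(ibm_both + atlas_both)               # 30560 -- the reported combined total

# Each drive's fraction of its solo speed when run together:
print(round(ibm_both / ibm_alone, 2))      # 0.5  -- cut in half
print(round(atlas_both / atlas_alone, 2))  # 0.52
```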
 

borealiss

Senior member
Jun 23, 2000
913
0
0
There's going to be some performance degradation when you have 2 drives going at once on the same SCSI channel. I experienced this with my integrated 7890 UW2 SCSI controller on my mainboard, but I have not benchmarked it. I know that when I defrag the IBM hard disk, even though it is separate from the system disk (a Quantum), the entire system slows down when it comes to disk accesses, such as loading images in Photoshop. I think it has to do with the hard disks being on the same cable, or perhaps channel.

So I hooked up one of the hard disks to the second UltraWide channel, and the performance is much better now. Since the hard drives never even reach 40 megs/sec, I don't have a problem. Granted, this is kind of a hack, but I think you can expect some performance degradation when both hard disks are doing simultaneous reads/writes on the same SCSI channel, although to what degree I can't say, because I haven't benched it.

Transfer rates on your disks aren't the most important thing though, as long as access times remain consistent. Try defragging the hard disk without your Windows directory on it, and then perform normal tasks without launching anything on the hard disk being defragged. See if you notice a slowdown; if not, I wouldn't worry about it.
 

Sir Fredrick

Interesting that you have observed the same effects. :) I was under the impression that one of the benefits of SCSI was that devices could share the bandwidth nicely. If they can't do that, then it is indeed impossible to even come close to the theoretical max transfer rate. I'll have to try putting the IBM on a wide channel some time and see what kind of rates I get then. I know it doesn't have much effect on its actual real-world performance, but it just doesn't seem right.

I tried defragging the IBM (which I'm not actually using currently) and there is no noticeable performance decrease on the Quantum. :) I haven't tried loading any huge files though, just starting programs and such.
 

borealiss

If there's no performance decrease that's noticeable, then perhaps there's an inaccuracy within the benchmark itself. I know some benchmarking programs will work for some people and not for others. HD Tach gives me a CPU usage of 1.2% when I bench my drives, but if I have MS Word or even RC5 running in the background, it goes up to 15%, which is totally inaccurate.
 

Sir Fredrick

Well, I have had a chance to do some more testing and the results are very interesting... I'm now testing:

Atlas V
IBM 18ES (same as before)

on:

Athlon 750
Adaptec 29160 SCSI card

test results:

Separate, sequential I/O, 64KBytes:
Quantum: 28820
IBM: 19650

Running together, sequential I/O, 64Kbytes:
Quantum: 28756
IBM: 19650
Combined: 48406

Separate, Same Sector I/O, 64KBytes:
Quantum: 79464 (wow!)
IBM: 67323 (also wow!)

Combined, Same Sector I/O, 64Kbytes:
Quantum: 34955
IBM: 34667
Combined: 69622
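
A quick check on those numbers (again, just arithmetic on the reported figures): the sequential reads are slower than the bus and add up with no loss, while the same-sector cache bursts hit a shared ceiling around 70MB/sec.

```python
# 29160 results from above, in KBytes/sec.
seq = {"quantum": 28756, "ibm": 19650}    # simultaneous sequential I/O
burst = {"quantum": 34955, "ibm": 34667}  # simultaneous same-sector I/O

print(sum(seq.values()))    # 48406 -- essentially the solo numbers, unchanged
print(sum(burst.values()))  # 69622 -- a ~70MB/sec practical ceiling on this card
```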

Why are my scores so much better with this system? Is it the SCSI card?
 

borealiss

Well... I don't know why your combined throughput for the second system might be higher. Maybe DMA is enabled on the one with the card and not on the other, I dunno. That shouldn't really matter much, though. But your same-sector I/O scores are really high because the data is being read from the drive's cache, I think. This is what happens with same-sector I/O on some benches that I know of. It's strange that your combined scores are that much different. Maybe a SCSI card is a better performer than an integrated chipset? Anybody know about this?
 

Xanathar

Golden Member
Oct 14, 1999
1,435
0
0
Have you checked inside the Adaptec's BIOS to see if you are limiting transfers to 40MB/s? Your numbers seem to be telling me that you aren't at 80MB/s, but actually 40. This is commonly a problem with a jumper on one of the drives, which reduces the chain to 40, or with the BIOS on the Adaptec card set to 40 and not 80.
 

Sir Fredrick

There is no DMA setting for SCSI drives :)

No, the BIOS isn't set to 40MB/sec.

I know that the same-sector I/O test measures the burst/transfer speeds from the cache, but why would that be faster on one card?
 

borealiss

Sir Fredrick

There are DMA modes on SCSI cards that support them. I have it on my ISA SCSI-2 card from Adaptec, and Adaptec also has multiple cards that say "SCSI DMA" right in the technical specs on their website. As far as I know, there is absolutely no way to enable this in Windows 2000, but running a DMA check program I have, it can be enabled in NT4. AFAIK, DMA is just a way of transferring data, and not exclusive to EIDE.

The only thing I can guess as to the difference in performance between the integrated chipset and the actual card is that maybe Asus implemented the chip's interface to the BX chipset differently than Adaptec did. I know the drivers on my mainboard for my SCSI chip are different from the ones for the card version of the chip. On Adaptec's site, it says not to download card drivers for integrated chipsets, as doing so will disable the device on your system.
 

monopoly

Senior member
Feb 1, 2000
436
0
0
I've been running SCSI for several years and believe it to be accepted knowledge that a dedicated SCSI card uses its own on-board RISC chip for I/O, whereas integrated SCSI (Symbios or Adaptec on the motherboard chipset) handles the SCSI fetch/put protocols but hands the actual work off to the mainboard CPU. This still allows the multitasking SCSI is known for, but at reduced throughput. Also be aware that integrated SCSI is held to one channel per I/O rollover, where dedicated SCSI cards (and their RISC chips) can handle multiple channels.

Another issue is cabling, as mentioned above. This is especially true if putting SCSI-3 devices on an LVD channel: the whole cable defaults to the lowest (and possibly lower) throughput.

I may not have laid this out in the best of terms, but you get the idea...
 

Vinny N

Platinum Member
Feb 13, 2000
2,277
1
81
Sir Frederick:

What about the jumpers and cables? Are you certain they're all correct?
It's too bad you can't update the SCSI BIOS on the motherboard to the newest one that the controller cards have; the newest BIOS displays, at time of detection, the transfer mode each SCSI device is in.

What about your version of ASPI? Are you running the newest 4.60(1021) version?

What about the onboard SCSI's actual driver?

After I installed the newest ASPI and the newest drivers from Adaptec's site for my 2940U2W, I noticed that HD Tach (2.61 was the version I was running) reported 0% CPU utilization regardless of what programs were running in the background, instead of 4 to 11%. (The 2940U2W's BIOS was the same; only the ASPI and drivers changed.)

Perhaps the other system with the 29160 had newer/better drivers and updated ASPI?

Where could I get EZSCSI SCSIbench? :) I'm curious to see what results I can get.
 

Sir Fredrick

EZ-SCSI comes with the retail version of the Adaptec cards; I don't know where else you could get it, sorry.

I'm using Win2k, so the ASPI layer is different (according to Adaptec). They don't appear to have any updates available for it; do you know where I could get them? I am using the updated drivers for the card, and contrary to what you may have read, I can use the latest updated drivers intended for the PCI card; they include drivers for the AIC-7890/7891, and this one is the 7890. To get them, I simply have to download the AHA-2960U2W drivers for Win2k.

The BIOS does display what mode the drives are in on startup, and it confirms that they are both running at Ultra2/LVD, while the CD-ROM and CD-R are running at Fast/SE.
 

borealiss

You know what, I was thinking about the firmware for PCI vs. integrated, not the drivers. Brain fart.
 

soulm4tter

Senior member
Nov 6, 2000
967
0
0
I would disable that onboard crap and get a good controller card. Your performance will be much better, and you can always use the card in another system when you are ready to upgrade. I hate the recent trend of integrating RAID and SCSI controllers. The performance is usually lackluster, and you can't take the controller with you when you go to upgrade. It's just a cheap gimmick for mobo makers to claim you'll get great performance without wasting money and a PCI slot on an add-in card. I hate it.