modus
give me a break. you came to the conclusion that all hdds are reliable to the same extent, which is crap, because you can't make a reliability judgement based on your experience; you said so yourself. if you're going to eliminate reliability as a factor because you think a reliability judgement can't be made without years of research and massive surveys, then don't make a judgement that all hdds are equally reliable, which is exactly what you're doing. you have no logical way of coming to that conclusion. let me lay it out in a way that's easy to understand. to make that judgement, you would need, say, 2 years or more to conduct a survey, and within that survey you would probably need 10000+ samples, maybe more. if the failure rates came out to, say, 5% ibm, 5.6% seagate, 5.2% maxtor, etc., then you could conclude that they're all about the same. you don't have a massive survey that you spent years on, so how does your experience make up for that? i guess you're special.
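just to put numbers on the sample-size point, here's a rough python sketch of how wide the 95% confidence interval on a measured failure rate is at different survey sizes. the 5% vs. 5.6% rates are the made-up ones above, and the normal-approximation formula is only a back-of-the-envelope estimate:

```python
import math

def ci_half_width(p, n, z=1.96):
    """approx. 95% confidence half-width for an observed failure rate p over n drives."""
    return z * math.sqrt(p * (1 - p) / n)

# hypothetical failure rates from the post: 5.0% vs 5.6%
gap = 0.056 - 0.050
for n in (100, 1000, 10000, 100000):
    hw = ci_half_width(0.05, n)
    # crude rule of thumb: the gap is resolvable once it exceeds the combined uncertainty
    verdict = "distinguishable" if 2 * hw < gap else "not distinguishable"
    print(f"n={n:>6}: +/-{hw:.4f}  ({verdict})")
```

by this rough estimate, even 10000 samples per brand isn't quite enough to separate 5.0% from 5.6%, which is the point: anecdotal experience with a handful of drives tells you nothing either way.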
"in the absence of any conclusive evidence as to the reliability of any brand of hard drive, we must eliminate reliability as a buying criteria and focus solely on the remaining factors: price and performance.
Why is that so hard to accept?"
i never rejected it. you're the one claiming reliability is the same for all hdds, not me.
"You've just given up a loosing argument on the main issue and resorted to petty semantic attacks on supposedly conflicting statements which in fact go hand in hand. That's pretty desperate. "
i just wanted to stay on topic, not start a scsi vs. ide debate that has been beaten into the ground, but since you insist...
first off, not everyone considering scsi is going to have multiple redundant backups of the same information. in a major server environment, most likely, but scsi isn't limited to servers. there are companies that put scsi drives into workstations for graphics and video editing, and those machines aren't going to have multiple backups; they most likely have a single drive. where do you get this info from? also, many of the raid implementations available for scsi don't even exist for eide. so if you're doing serious serving, you're going to want scsi just for the wide availability of raid levels beyond raid 0, raid 1, and the 0/1 combination, not to mention the scalability of those raid solutions. raid 0 and raid 1 just don't cut it in a lot of circumstances.
cpu utilization is a moot point. it isn't a factor, plain and simple. in a heavy multitasking environment it might be more of an issue than in others, but i'm not going to argue with you here.
the one major benefit of scsi is bandwidth. uw2 scsi has been out for more than a year now, and eide is just catching up. u160 has 60% more bandwidth than ata100 (160 MB/s vs. 100 MB/s). strap 10 drives into a server, each running flat out at once, and eide runs into major bandwidth limitations (assuming 10 hdds on one eide bus were even remotely possible); scsi has no problem with it. scsi is also fully backwards compatible, except for the new lvd interfaces, while ata66 has problems coexisting with ata33 devices. scalability is another one: how many devices can you fit on an eide channel? 2, maybe 4 with the newer interfaces that have come out. ever since uw scsi, which was about 2+ years ago, you could put 15 devices on 1 scsi bus. can you say that for eide? nope, not even close. scsi is also a much more flexible interface. name one thing that can be put on an eide interface that you can't put on a scsi interface. there practically isn't anything. also, name how many raid implementations there are for eide. 3 that i can think of: raid 0, raid 1, and the combination of both. scsi has about 6 or more.
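to make the bandwidth math concrete, here's a quick sketch. the bus numbers are the real interface maximums; the 15 MB/s sustained-per-drive figure is just an assumed ballpark for drives of this era:

```python
U160_BW = 160    # MB/s, Ultra160 SCSI bus maximum
ATA100_BW = 100  # MB/s, ATA/100 channel maximum
PER_DRIVE = 15   # MB/s sustained per drive (assumed ballpark, not a spec figure)

drives = 10
demand = drives * PER_DRIVE  # aggregate demand if all 10 stream at once

print(f"u160 vs ata100: {100 * (U160_BW - ATA100_BW) // ATA100_BW}% more bandwidth")
print(f"aggregate demand: {demand} MB/s")
print(f"u160 headroom:    {U160_BW - demand} MB/s")    # still has spare capacity
print(f"ata100 shortfall: {demand - ATA100_BW} MB/s")  # way over the channel limit
```

even with a modest per-drive figure, ten drives saturate an ata100 channel with room to spare, while a u160 bus still has headroom.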
as far as doing multiple reads/writes at once goes, you usually don't have just 1 hard drive in a server environment. like you said, you probably have multiple drives running copies of the same information, so while one drive does a write, another does a read. the claim that the only time saved is the time for the command request going through the interface is bogus. if a write takes 5 seconds and a read takes 5 seconds, you save 5 seconds by doing both at once; you save the time of an entire disk fetch, which is godly slow already. granted, the operations don't actually take that long, but when you've got 1000 or so requests for different files at once, the time saved adds up.
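the "doing both at once" claim is easy to sketch. using made-up 5 ms operations (in milliseconds so the arithmetic stays exact) and 1000 read/write pairs:

```python
def serialized_ms(read_ms, write_ms, pairs):
    """one spindle/channel: each read queues behind the matching write."""
    return pairs * (read_ms + write_ms)

def overlapped_ms(read_ms, write_ms, pairs):
    """two mirrored drives: one reads while the other writes, in parallel."""
    return pairs * max(read_ms, write_ms)

# hypothetical 5 ms per operation, 1000 read/write pairs
print(serialized_ms(5, 5, 1000))  # 10000 ms total
print(overlapped_ms(5, 5, 1000))  # 5000 ms total
```

per pair you save one whole disk operation, and over a thousand requests that halves the total time, which is the point being made above.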
LXi
you know the record of each team, so you should be able to make a judgement on who is more likely to come out on top. let me put it to you another way. say we knew from a survey that maxtor hdds had an enormous failure rate of 50%, and that ibm's was 10%. knowing this, it's safe to say you're less likely to have an ibm fail than a maxtor. that's just raw statistics, but it does not necessarily mean that's what will happen, because inevitably there is some randomness, which i think you're confusing with the statistics. if you had 1000 people buy ibms and 1000 buy maxtors, chances are the ibms would come out more reliable, in line with the statistics. same goes for the teams: if one team has won 70% of the games against another, chances are it's going to win again, but like you said, anything can happen; nothing is for certain. put another way, say companies A and B both manufacture parachutes. A has a failure rate of 2%, B has one of 30%. now, which would you choose? according to your logic, it shouldn't matter; after all, anything can happen, right? 50/50 it opens or it doesn't? if you tell me you would choose B just as readily as A, you're kidding yourself. you're obviously going to want brand A over B, because you don't want to end up a smear mark on the pavement, and you're going to use its reliability record as a criterion of purchase. you're trying to completely eliminate reliability as a factor regardless of whether or not you know the statistics. eliminating reliability is fine if you don't know the statistics, but in your example you do, so you can't eliminate the teams' past records.
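the parachute example in numbers (the 2% and 30% rates are the made-up ones above):

```python
# hypothetical failure rates from the parachute example
p_a, p_b = 0.02, 0.30

buyers = 1000
print(f"expected failures, brand A: {p_a * buyers:.0f}")  # 20 out of 1000
print(f"expected failures, brand B: {p_b * buyers:.0f}")  # 300 out of 1000

# chance a randomly chosen A unit works while a B unit fails
print(f"P(A works and B fails) = {(1 - p_a) * p_b:.3f}")
```

knowing the rates doesn't tell you which individual unit fails, but it absolutely changes which brand a rational buyer picks; "anything can happen" is not the same as "50/50".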