I am not sure where you are going with this. Software RAID is another animal. Hardware controllers typically have a 10-second timeout. Most disks with TLER set to 7 seconds will fail properly within this time, allowing the RAID controller to do its job, i.e. rebuild the sector and take whatever recovery actions it needs to.
1. I am referring to an already degraded array. The controller cannot do squat because it's already on its last copy of all available data. If there is a read error on a degraded array, there is data loss.
2. Hardware RAID is usually anything but, except for the better cards in the $150+ range. But if that's what you're using, those cards are configurable. If they're not configurable *and* you're using consumer disks, then you're not a good sysadmin.
3. Controllers don't rebuild or reallocate sectors. Upon a read error, either software or hardware RAID will find the data on the mirrored copy or reconstruct it from parity, and then write the correct data back to the LBA on the disk that reported the failed read. It is the disk itself that reallocates the sector, and only *IF* there is a persistent write error and there are spare sectors left on the disk. A rough sketch of that division of labor follows below.
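To make the point concrete, here is a minimal, self-contained Python sketch of that flow, modeling a two-way mirror as plain dicts. The FakeDisk/mirror_read names are invented for illustration; this is not any controller's actual firmware, just the shape of the logic described above.

    # Sketch only: a RAID-1 style read repair, with the reallocation
    # decision deliberately left to the "drive".

    class ReadError(Exception):
        """A member disk could not return the requested LBA."""

    class FakeDisk:
        def __init__(self, blocks, bad_lbas=()):
            self.blocks = dict(blocks)
            self.bad_lbas = set(bad_lbas)   # LBAs that fail on read

        def read(self, lba):
            if lba in self.bad_lbas:
                raise ReadError(f"unrecoverable read at LBA {lba}")
            return self.blocks[lba]

        def write(self, lba, data):
            # The drive, not the RAID layer, decides whether this write
            # lands on the original sector or gets remapped to a spare.
            # Here we simply pretend the write succeeds and clears the
            # pending error.
            self.blocks[lba] = data
            self.bad_lbas.discard(lba)

    def mirror_read(disk_a, disk_b, lba):
        """On a read error, fetch the mirrored copy and write it back
        to the disk that failed; the RAID layer never 'reallocates'."""
        try:
            return disk_a.read(lba)
        except ReadError:
            data = disk_b.read(lba)    # surviving copy
            disk_a.write(lba, data)    # rewrite the bad LBA
            return data

    if __name__ == "__main__":
        a = FakeDisk({0: b"hello", 1: b"world"}, bad_lbas={1})
        b = FakeDisk({0: b"hello", 1: b"world"})
        print(mirror_read(a, b, 1))    # b'world', and LBA 1 on a is repaired

On a degraded array (point 1 above) the `disk_b.read()` step has nothing to fall back on, which is exactly where the data loss happens.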
Your jab at me about being a bad sysadmin is misdirected, because it was you who fabricated the idea that I am doing this.
I never said you were doing this. I never called you personally a bad sysadmin. It's a blanket statement that people mismatching their hardware are bad sysadmins. And we see this *all the time* on various forums, where people pair WDC Green drives in RAID 5 configurations and then whine about multiple disk failures. They wanted cheap, they got cheap. And when told these disks were only meant for RAID 0 and 1, not RAID 4 or 5 or 6, they complained more.
I simply said: drives with 90 second TLER values will more than likely get booted from the array.
Untrue statement. In the consumer realm, many of those "hardware RAID" products are actually software RAID on a card or in an enclosure, and they are quite tolerant of high SCT ERC recoveries.
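For anyone who wants to check where their own drives stand, here is a small Python sketch around smartmontools' SCT ERC commands. It assumes smartctl is installed, the script runs as root, and the drive actually honors SCT ERC; plenty of consumer drives don't support it or ignore it, and on many drives the setting does not survive a power cycle. /dev/sda is just a placeholder.

    import subprocess

    def get_scterc(device: str) -> str:
        """Return smartctl's SCT ERC report for a device, e.g. /dev/sda."""
        out = subprocess.run(
            ["smartctl", "-l", "scterc", device],
            capture_output=True, text=True, check=True,
        )
        return out.stdout

    def set_scterc(device: str, read_ds: int = 70, write_ds: int = 70) -> None:
        """Set read/write error recovery time in deciseconds (70 = 7.0 s)."""
        subprocess.run(
            ["smartctl", "-l", f"scterc,{read_ds},{write_ds}", device],
            check=True,
        )

    if __name__ == "__main__":
        print(get_scterc("/dev/sda"))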
I am not as well versed in Linux, but the Windows software RAID will also boot the disk before 90 seconds. It will not wait for a conclusive read error if the disk appears to be hung. If the disk appears hung, it boots it and performs recovery actions.
It's bizarre that a software RAID would attempt fast recovery with slow-recovery drives; I don't see how that's justifiable.
It does not make sense at all for a software RAID to behave this way, so I find it difficult to believe it's true. Software RAID is overwhelmingly paired with consumer disks, which are widely known to have slow error recoveries. For Windows to do this with RAID 0 arrays would cause large numbers of total array failures, yet I haven't seen that at all. And further, it doesn't make sense for the underlying software RAID to be less tolerant than NTFS, which, even when it receives a read error from a disk (not in an array), will insist that the drive retry. So you can get really long recoveries on Windows with NTFS because of this.
Hardware RAID controllers generally perform the same way, except they tend to be even tighter, i.e. the near-standard 10 seconds before booting a non-responsive disk.
That is a piece of hardware that comes from a world that expects to be paired with enterprise disks. Those disks have vastly better ECC than consumer disks. If they haven't corrected the error within even a few seconds, the data isn't recoverable. That's why they fail quickly. That is simply not the case with consumer disks, which is why you get retries: their ECC isn't as good, and it's much, much slower. So my point is, if anyone buys a hardware RAID controller with a 10-second timeout and pairs it with disks that have longer than 10-second recoveries, they're a bad sysadmin.
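For the Linux software RAID case, that "match the layers" point can at least be sanity-checked. Here is a hedged sketch that compares a drive's SCT ERC recovery time against the kernel's SCSI command timeout in /sys/block/<dev>/device/timeout (30 seconds by default). It assumes smartmontools is installed, root access, and smartctl's usual "Read: 70 (7.0 seconds)" output format; a hardware controller's timeout lives in its own firmware and isn't visible this way, so the 10-second figure above is only representative. The device name "sda" is a placeholder.

    import re
    import subprocess
    from pathlib import Path

    def kernel_timeout_seconds(dev: str) -> int:
        """Read the SCSI command timer for e.g. dev='sda' from sysfs."""
        return int(Path(f"/sys/block/{dev}/device/timeout").read_text())

    def scterc_read_seconds(dev: str) -> float | None:
        """Parse the read ERC time from smartctl, or None if disabled
        or unsupported on this drive."""
        out = subprocess.run(
            ["smartctl", "-l", "scterc", f"/dev/{dev}"],
            capture_output=True, text=True,
        ).stdout
        m = re.search(r"Read:\s+\d+\s+\(([\d.]+) seconds\)", out)
        return float(m.group(1)) if m else None

    if __name__ == "__main__":
        dev = "sda"
        erc = scterc_read_seconds(dev)
        timeout = kernel_timeout_seconds(dev)
        if erc is None or erc >= timeout:
            print(f"/dev/{dev}: drive recovery is not capped below the "
                  f"{timeout}s command timeout -- the layer above may "
                  "give up on the drive mid-recovery")
        else:
            print(f"/dev/{dev}: ERC {erc}s fits inside the {timeout}s timeout")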
In enterprise installations, this whole question never comes up. Everything works exactly as designed. The problem arose when people started buying consumer drives with slow error recovery, and put them into a situation where they'd get booted sooner than they should.