I can see that being the case with the NAND manufacturer, but what about the SSD manufacturer? RyderOCZ mentioned a "bad block scan". What type of test is performed on the SSD once it's completely assembled?
I also have another question that's been on my mind. From what I know, when a sector or any part of the NAND fails, the location of the bad sector is noted by the drive and blocked off. Part of the over-provisioning is then allocated to take its place. What happens to the data that was in this location? If the sector is bad, how does the drive know what data was there to be moved over to the over-provisioned location?
A bad block scan is only relatively minimal testing. Sure, you check each block once. But the problem with flash is that dead cells are just one particular type of fault; you also get weak cells and leaky cells. They may corrupt data only with a certain probability (e.g. 10% of writes to that cell get corrupted), they may only get corrupted by certain data patterns, or the cell might be weak and fade prematurely if there are a lot of "partial-page writes" (which weaken data already in the page).
Testing flash in a really robust way would need many read/write passes. A single write/read pass will pick out a lot of failures, so that they can be remapped, but it may leave the weaker cells undetected. The problem is that multi-hour burn-in testing is simply too expensive for most consumer-level drives, although I can imagine it being done for enterprise drives.
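Just to make it concrete, here's a rough sketch of what a multi-pass pattern test could look like - the write_block/read_block helpers, the pattern set and the pass count are all hypothetical, not any manufacturer's actual procedure:

```python
# Hypothetical multi-pass pattern test. write_block()/read_block() stand in
# for whatever raw-flash access the test rig has; the pattern set and pass
# count are purely illustrative.
PATTERNS = [0x00, 0xFF, 0x55, 0xAA]   # all-zeros, all-ones, alternating bits
PASSES = 4                            # repeated passes catch intermittently weak cells

def test_block(block_id, block_size, write_block, read_block):
    """Return True only if the block survives every pattern on every pass."""
    for _ in range(PASSES):
        for pattern in PATTERNS:
            expected = bytes([pattern]) * block_size
            write_block(block_id, expected)
            if read_block(block_id) != expected:
                return False          # weak/bad block: mark it and remap
    return True
```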
In terms of the reallocation issue, flash is, by design, an unreliable storage medium. There is an expected bit-error rate for reading flash - which is in the region of 0.001-0.1% (normally towards the bottom end, but a well-worn sector on a bad day might be towards the top end).
In other words, that means in a typical read of an 8K sector (65,536 bits), you might expect anywhere from under 1 to around 65 corrupted bits. To get around this, each sector is significantly oversized - so, although the flash is specified as having "8k" sectors, meaning 8192 bytes in a sector, there are typically 8640 bytes available in each sector. In the extra 448 bytes, the SSD controller will store ECC/parity data (and, usually, various internal control data such as write counts, so that it can keep track of write amplification, wear levelling, etc.).
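Putting rough numbers on that (just back-of-envelope arithmetic with the figures above):

```python
# Expected corrupted bits per "8k" sector at the bit-error rates quoted above.
sector_bits = 8192 * 8                       # 65,536 data bits per sector
for ber in (0.00001, 0.001):                 # 0.001% and 0.1%
    print(f"BER {ber:.3%}: ~{sector_bits * ber:.1f} bad bits per read")

# Spare-area overhead: 8640 raw bytes per sector, 8192 of them user data.
spare = 8640 - 8192
print(f"ECC/metadata spare area: {spare} bytes ({spare / 8192:.1%} overhead)")
```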
The ECC is calculated from the original data by the controller when it writes the data to the sector. In the event of data corruption, the ECC can be used to detect and repair the corruption before the controller sends the data to the host PC.
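As a toy illustration of that detect-and-repair principle (real controllers use much stronger BCH or LDPC codes over whole sectors, not a little Hamming(7,4) code like this, but the idea is the same):

```python
# Toy Hamming(7,4) encoder/decoder: corrects any single flipped bit in a
# 7-bit codeword. Illustrative only - SSDs use far stronger codes.

def hamming74_encode(d):
    """d is a list of 4 data bits; returns a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """c is a 7-bit codeword, possibly with one flipped bit;
    returns the corrected 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]    # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]    # checks positions 4,5,6,7
    error_pos = s1 + 2 * s2 + 4 * s3  # 0 means no error detected
    if error_pos:
        c[error_pos - 1] ^= 1         # repair the corrupted bit
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
codeword[4] ^= 1                      # simulate one corrupted cell
assert hamming74_decode(codeword) == data
```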
The SSD controller will monitor the error rate in individual sectors every time it reads them. If the error rate in a particular sector is high enough that the data, if allowed to deteriorate further, would be at risk of exceeding the ECC's repair capability, then the SSD controller may, after recovering the data with ECC, copy the recovered data to a spare area and retire the failing sector.
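In pseudo-Python, that policy looks something like this - the threshold value, remap table and flash_read/flash_write callbacks are all made up for illustration; real firmware is far more involved:

```python
# Hypothetical "retire before it becomes unrecoverable" policy. The threshold,
# remap table and flash callbacks are illustrative only.
RETIRE_THRESHOLD = 40    # corrected bits per read that triggers retirement,
                         # set well below what the ECC can actually repair

class SectorManager:
    def __init__(self, spare_sectors):
        self.spare_pool = list(spare_sectors)   # over-provisioned sectors
        self.remap = {}                         # logical -> physical sector

    def read(self, logical, flash_read, flash_write):
        physical = self.remap.get(logical, logical)
        data, corrected_bits = flash_read(physical)   # ECC already applied
        if corrected_bits >= RETIRE_THRESHOLD and self.spare_pool:
            # Error rate is creeping towards the ECC limit: copy the
            # recovered data to a spare sector and retire the weak one.
            new_physical = self.spare_pool.pop()
            flash_write(new_physical, data)
            self.remap[logical] = new_physical
        return data
```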
In the unlikely event that a sector has suffered catastrophic corruption, and the ECC cannot recover it, then the only option left to the drive is to send the "bad sector" message to the OS. In the event of major corruption, the drive has to 'fess up to the OS that it has lost the data (which is what "bad sector" means). If a drive testing tool zeros out the "bad" sector, or saves new data to it, this will usually trigger a reallocation event. (There's no point reallocating a sector if the data has been lost - might as well wait for fresh data to come along.)
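Sketched out, that write-triggered reallocation might look something like this (again, the names and data structures are made up purely for illustration):

```python
# Hypothetical handling of a pending "bad sector": the drive only
# reallocates once fresh data arrives for that logical sector.
class Drive:
    def __init__(self, spare_sectors):
        self.spare_pool = list(spare_sectors)
        self.pending_bad = set()    # sectors that returned uncorrectable reads
        self.remap = {}             # logical -> physical sector

    def report_uncorrectable(self, logical):
        # ECC failed: the data is gone, so just flag the sector and return
        # a "bad sector" error to the host. Nothing useful to copy yet.
        self.pending_bad.add(logical)

    def write(self, logical, data, flash_write):
        if logical in self.pending_bad and self.spare_pool:
            # Fresh data has arrived, so reallocation is now worthwhile.
            self.remap[logical] = self.spare_pool.pop()
            self.pending_bad.discard(logical)
        flash_write(self.remap.get(logical, logical), data)
```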