Please read about how SSDs store data. Having your data occupy contiguous blocks does NOTHING for access or write times. You might as well organize your sock drawer for all the time it'll save you.
I don't agree completely.
The difference defragmentation makes is that the system believes a certain file is contiguous, and will thus generate a series of sequential access commands. If the OS just sends what appear to be random blocks, the SSD will usually read the data a bit more slowly. The SSD only knows blocks, not files, so it cannot differentiate between the two access patterns on its own.
Now, I assume the difference comes mostly from drives internally reading ahead predictively, so they can quickly deliver content from cache instead of hitting flash for every ATA command.
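To make the read-ahead intuition concrete, here's a toy model (all numbers and the prefetch policy are made up, not taken from any real drive's firmware): a sequential stream lets the drive prefetch upcoming blocks into cache, while a scattered stream forces a flash access for nearly every request.

```python
import random

# Toy model of drive-side read-ahead (hypothetical, for illustration only).
# When the drive sees a sequential access, it prefetches the next few blocks;
# a random access pattern never benefits from the prefetch.

READ_AHEAD = 8  # blocks prefetched after a sequential access (made-up value)

def count_flash_reads(block_requests):
    """Return how many requests actually had to touch flash."""
    prefetched = set()
    flash_reads = 0
    last = None
    for b in block_requests:
        if b not in prefetched:
            flash_reads += 1
            if last is not None and b == last + 1:
                # Sequential pattern detected: prefetch the next blocks.
                prefetched.update(range(b + 1, b + 1 + READ_AHEAD))
        last = b
    return flash_reads

sequential = list(range(100))
random.seed(0)
scattered = random.sample(range(10_000), 100)

print(count_flash_reads(sequential))  # far fewer flash reads than requests
print(count_flash_reads(scattered))   # roughly one flash read per request
```

The same 100 blocks are requested either way; only the order differs, which is exactly why the OS presenting a file as one sequential run can matter even though the SSD itself has no seek penalty.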
But reaching the point where an SSD is so fragmented that you would actually, humanly, notice the difference is going to take extremely long. So unless you have a known stressing/fragmenting workload that features regular sequential reads (and that you cannot back up and subsequently restore while you reset the drive), you do not need to defragment.
As for a tool that defragments: an SSD defragmentation would simply need to update the LUT, not the actual content of the flash. Alternatively, a solution would be to do away with hardware controllers and break the layers: have the FS deal with flash natively. Then you'd have a direct link from file to flash cell, and you would not need to defragment anymore, since the LUT knows which cells to read ahead from when a large file is queried. This would replace the physical linearity that limits hard drives with logical linearity, where sequentiality is replaced by a notion of contiguousness: one file, one set of flash addresses.
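The "update the LUT, not the flash" idea can be sketched with a toy flash translation layer (purely illustrative; `ToyFTL` and its methods are invented names, and real FTL firmware is far more involved). A file scattered across logical blocks gets remapped to a contiguous logical range by editing the mapping table alone; no flash page is rewritten.

```python
# Toy FTL: maps logical block addresses (LBAs) to physical flash pages.
# "Defragmenting" a file here is a pure table update -- zero flash writes.
# (Illustrative model, not real firmware.)

class ToyFTL:
    def __init__(self):
        self.lut = {}        # logical block -> physical page
        self.flash = {}      # physical page -> data
        self.next_page = 0

    def write(self, lba, data):
        # Writes always land on a fresh page (flash can't overwrite in place).
        self.flash[self.next_page] = data
        self.lut[lba] = self.next_page
        self.next_page += 1

    def read(self, lba):
        return self.flash[self.lut[lba]]

    def defrag_logical(self, file_lbas, new_start):
        # Remap a file's scattered blocks to a contiguous logical range.
        # Only the LUT changes; flash content stays where it is.
        for i, lba in enumerate(file_lbas):
            self.lut[new_start + i] = self.lut.pop(lba)

# A file written interleaved with other data -> fragmented at LBAs 0, 5, 9.
ftl = ToyFTL()
for lba, chunk in [(0, b"aa"), (5, b"bb"), (9, b"cc")]:
    ftl.write(lba, chunk)

pages_before = ftl.next_page
ftl.defrag_logical([0, 5, 9], new_start=100)

# The file is now logically contiguous at LBAs 100..102...
assert ftl.read(100) + ftl.read(101) + ftl.read(102) == b"aabbcc"
# ...and not a single flash page was written to achieve that.
assert ftl.next_page == pages_before
```

Real FTLs already remap constantly for wear leveling, which is why this kind of metadata-only defragmentation is conceptually cheap; the catch is that the filesystem's own extent records would also have to be told about the new logical layout.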
Won't happen though, because you don't usually want to break the layers.