The common understanding is that fragmentation for SSDs doesn't play any role and has no negative impact on performance.
Except that file system fragmentation adds work to the file system layer. It makes a small list (the list of a file's fragments) bigger, meaning more work for the CPU, at the least. So, obviously, from a purely theoretical PoV, it's going to make a difference. Also, reading very small chunks of data is slower on an SSD than reading larger ones.
The short version goes like so: if you access a bunch of scattered little pieces of data anyway, what does it matter if the files they belong to are in many fragments or single chunks? It doesn't.
Like Phynaz said, random basically equates to "small." It's rare that some read or write is going to be unpredictable in nature yet large, so it works out to mean both. If it's large and contiguous, then it's a big sequential series of bytes, and is no longer really random once you get started transferring it.
However, for SSDs, sequential read speed is normally significantly faster than random read speed.
Exactly. Sequential reads can take maximum advantage of the RAID-0-like setup inside the SSD, and will have less CPU overhead and less SATA overhead. Wins all around, right? Well, only if it made sense to access all data that way, which it doesn't.
It's not like a developer goes, "oh, let me make this all random," but rather that the varied parts of related data need to be accessed in some pattern the user has decided on, one that could not be predicted beforehand, and that thus creates a bunch of little spread-out requests. Fragmentation hardly ever matters for HDDs anymore, but where an HDD can do that work at, say, 5MBps and 0.2s, an SSD can probably do it at 100MBps and 0.01s.
Since sequential speeds for an SSD are significantly faster than random ones, does this mean that fragmentation on an SSD also has a significant impact on performance?
Let fragmentation be when a file is, or files are, split up into pieces much smaller than is ideal for reading them back quickly (I think MS uses 64MB as the cutoff for big files). The smaller the average fragment size, and the more fragments there are, the worse that gets. That is to say, we really don't care about the SSD's internal mappings, and generally assume that, so long as the IOs are large enough per file chunk, they will be more or less ideally laid out. If the IOs aren't large enough for an ideal layout, then an ideal layout is probably not reasonably possible, so that case won't matter anyway.
We only care about what is directly visible to the OS: the way the files are scattered about LBAs.
As files get split into more and more fragments, more and more read requests are needed to read them, resulting in more CPU work, more SATA traffic, and more reads that are suboptimal from the standpoint of the SSD's performance. So, yes, that effect is real, and it's going to slow down an SSD, just like an HDD. Well, if every file were written and read sequentially on every access, and you never left more than a few KB of free space on your drive, it would be a big deal. But that's not the case. In fact, much of the time, the files aren't big enough for anything like the sequential speeds in benchmarks to come into play at all, and just leaving a little free space keeps fragmentation at bay.
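To put rough numbers on that, here's a toy sketch (made-up extent lists, and an assumed 1MB cap per read request) of how the same 32MB of data turns into far more read commands once it's chopped into small fragments:

```python
# Toy illustration, not real disk I/O. An extent list as the filesystem sees
# it: (starting LBA, length in 512-byte sectors). All numbers are made up.
contiguous_file = [(1_000_000, 65_536)]              # one 32 MB extent
fragmented_file = [(1_000_000 + i * 10_000, 128)     # 512 fragments of 64 KB each
                   for i in range(512)]

def read_requests(extents, max_io_sectors=2048):
    """Split each extent into the read commands the OS would have to issue,
    assuming a (hypothetical) 1 MB cap per request."""
    requests = []
    for lba, length in extents:
        done = 0
        while done < length:
            chunk = min(max_io_sectors, length - done)
            requests.append((lba + done, chunk))
            done += chunk
    return requests

print(len(read_requests(contiguous_file)))   # 32 requests for the whole 32 MB
print(len(read_requests(fragmented_file)))   # 512 requests for the same data
```

Every one of those extra requests is a little more CPU and SATA work, and a smaller, less SSD-friendly read.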
It's less that fragmentation doesn't matter for SSDs, so much as it is that fragmentation usually just doesn't matter at all. Even with HDDs, much of the concern over fragmentation is historical inertia.
Any decent non-embedded OS with virtual memory, like Windows, will load files using a method referred to as demand paging. Parts of data on the disk are loaded into RAM on demand, rather than entire files (unless the files are very small, of course), and those parts are mapped to offsets within the file. The program doing the reading or writing doesn't have to know or care that only, say, 320KB happens to be loaded right now. If it tries to read part of the file that's not loaded, the OS swoops in, loads it, and maps it.
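For a concrete feel of that, here's a minimal sketch of the same idea from user space, using Python's mmap (the file name is just a placeholder): the whole file is mapped, but only the pages you actually touch get read from disk.

```python
# Minimal demand-paging sketch: map the file, touch two small pieces of it,
# and the OS only faults in the pages backing those pieces.
import mmap

with open("bigfile.bin", "rb") as f:                   # hypothetical file
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        header = m[:4096]      # the first page(s) get loaded on demand
        tail   = m[-4096:]     # ...and now the last page, on demand
        # Nothing in between was read from disk unless the OS chose to
        # read ahead; the program never had to know or care.
```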
To try to make this faster, if the OS detects a sequential read of a file, it will go and read ahead more. But it will do more than that. It will also try to find other patterns, such as strides (like reading 32KB every 3MB, for instance). So, if you have a program reading through a big file, and you have to wait on your storage device, usually it's not just reading it end to end.
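You can also tell the OS your access pattern up front instead of waiting for it to detect one. A minimal POSIX-only sketch (os.posix_fadvise isn't available on Windows; the rough Windows equivalent is opening the file with FILE_FLAG_SEQUENTIAL_SCAN), with a placeholder file name:

```python
# Hint the kernel that this file will be read sequentially, so it can read
# ahead aggressively. Unix-only (os.posix_fadvise).
import os

fd = os.open("bigfile.bin", os.O_RDONLY)                      # hypothetical file
try:
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)      # "end to end, please"
    while chunk := os.read(fd, 1 << 20):                      # 1 MB reads
        pass                                                  # ...process chunk...
finally:
    os.close(fd)
```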
Now, let's say the access to said file is not so readily predictable, like your inbox. Each message varies in size, its location is likely historical, rather than making any sense based on how it will be accessed, and so on. Now, let's say you've got another program that is reading a bunch of small files, instead. What's the difference?
Effectively, none.
Now, let's say that big inbox of yours is badly fragmented. Since reading inside of it is not sequential anyway, and the OS is going to have the full list of fragments in RAM anyway, how much difference will that really make? Not much, except in terms of backup speeds from an HDD source.
Well, most programs, from Windows Explorer to web browsers to just about everything else nowadays, either fit into the above or are in a category where the work is so easy for the OS that everything needed will already be loaded into RAM, and it won't be an issue either way.
So, fragmentation or not, when you wait on storage you're going to be dealing with small IO sizes that aren't predictable enough to hide the latencies of. Even when reading sequentially, Windows has long been able to stitch files together out of order, loading fragments by LBA order, so as long as fragments aren't too small, having many is no big deal even for that case.
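A toy sketch of that "stitching" idea (hypothetical extent list, dummy reads): issue the reads in LBA order, which is what the drive likes, then reassemble the pieces in file order.

```python
# Extent list for one file: (file_offset, lba, length), all in made-up units
# of bytes/sectors just for illustration.
extents = [
    (0,        900_000, 65_536),   # file bytes 0..64K live late on the disk
    (65_536,   100_000, 65_536),   # the next 64K lives early on the disk
    (131_072,  500_000, 65_536),
]

def read_at_lba(lba, length):
    # Stand-in for an actual device read; returns dummy bytes.
    return bytes(length)

pieces = {}
for file_offset, lba, length in sorted(extents, key=lambda e: e[1]):  # LBA order
    pieces[file_offset] = read_at_lba(lba, length)

file_data = b"".join(pieces[off] for off in sorted(pieces))  # back in file order
```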
Now, the historical bits.
Windows, up through XP, fragmented everything badly, and needed constant defragging. *n*xes didn't need this. With Vista, MS got it together and implemented better write allocation schemes to combat the problem. Now, if you leave some free space, you won't get any worse fragmentation in Windows than on a typical *n*x. Some programs can still be bad about it, which is part of why MS hasn't gotten rid of the defragmenter, but 99% of the time, it's a non-issue.
For example, from mid 2011 until just a couple months ago, my Win 7 C: drive was a 1TB HDD with less than 50GB free (it was 1TB with a lot more free until mid 2011), and I had defragging disabled, so it wouldn't become an annoyance (idle-time HDD chatter drives me up the wall). Every 3 to 6 months, I'd analyze it for defragging. Most of the time, it would not recommend it, and the fragmented file list was quite small.
Up through Vista, it was common not to have support for NCQ, which allows many read and write commands to be in flight at once. With NCQ, any set of reads or writes that doesn't depend on some code running first to figure out the next one can all be issued at once, and the drive can figure out the best order in which to perform them. With 7 and newer, support for it is common.
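A rough user-space analogue of what that enables (a sketch, not how NCQ itself is programmed): hand the kernel a batch of independent reads at once, via a thread pool and positioned reads, and let it and the drive sort out the order. os.pread is Unix-only; the file name and offsets are placeholders.

```python
# Issue several independent reads concurrently so the kernel and the drive
# can service them in whatever order is fastest.
import os
from concurrent.futures import ThreadPoolExecutor

def read_block(fd, offset, length=4096):
    return os.pread(fd, length, offset)      # positioned read, no shared seek

offsets = [0, 10 << 20, 3 << 20, 200 << 20, 50 << 20]   # scattered requests

fd = os.open("bigfile.bin", os.O_RDONLY)                # hypothetical file
try:
    with ThreadPoolExecutor(max_workers=len(offsets)) as pool:
        results = list(pool.map(lambda off: read_block(fd, off), offsets))
finally:
    os.close(fd)
```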
IO that is predictable but sparse, and IO that is unpredictable, are both best requested in their respective small pieces. Modern HDDs and SSDs can start sending results back for some requests before they've finished the others still in the queue (basically, what NCQ does). The kind of IO pattern that fragmentation would cause for sequential reads is common anyway, and HDDs are much better at handling it than they were in yesteryear, to the point that it's hardly ever a practical issue, and is instead mostly a contentious point amongst overzealous tweakers.
The big difference is that a typical HDD can only do around 150 IOPS, and with mixed reads and writes may climb to whole seconds per request just to manage even that; while an SSD can do 10,000, 50,000, or even 100,000 IOPS, and most won't breach 5ms with full queues of mixed reads and writes--with just reads or just writes, not even 1ms. In lightly loaded cases, the SSD will be more like 150-300 microseconds per request, while the HDD will be more on the order of 12,000 to 20,000 microseconds. So, in those few corner cases where fragmentation might cause you problems with an HDD, an SSD is going to be so fast as to totally hide it from you, unless you go out of your way to benchmark some really badly fragmented files.
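To put those numbers side by side, a quick back-of-the-envelope using the lightly loaded latencies above, one request at a time:

```python
# Time to service 1,000 small scattered reads, serially, at typical
# per-request latencies (numbers taken from the ranges quoted above).
requests = 1_000
hdd_latency_us = 15_000    # ~12,000-20,000 us per request for an HDD
ssd_latency_us = 200       # ~150-300 us per request for an SSD

print(f"HDD: {requests * hdd_latency_us / 1e6:.1f} s")   # ~15 s
print(f"SSD: {requests * ssd_latency_us / 1e6:.2f} s")   # ~0.2 s
```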