(a) Partly true, except for SF drives (which benefit more from unallocated space): having some additional space dedicated to over-provisioning means the controller won't have to wait for the OS's TRIM command to execute, which can make a difference on mostly full drives. Which brings me to the next thing: what partitioning to a lower size does for everyone, instead, is force you to make better use of your drive; laziness and/or superficiality are often the cause of less-than-ideal usage. This way you can make sure your drive stays healthy.
Alternatively, you could get a drive and use it. If it can't stay healthy by itself, it was either not a very good drive (Indilinx or JMicron controllers, for instance) or a poor choice for the workload. It is the manufacturer's responsibility not to make a drive that becomes unhealthy from being used, but many memory companies throwing together drives haven't thought much about that sort of thing (for that matter, even other companies have been bitten by their oversights, like Intel and Crucial, but they improve the next generation if they can't fix the current one, instead of doing nothing or getting out of the market). The manufacturer should make sure that there is enough OP, without any user intervention, to prevent sky-high average WA with any workload that isn't specifically a server workload.
Quite a few Linux RAID users would have been much happier with their SF drives if they could have just erased them and given up a few GBs when repartitioning, or if they had known the drives weren't going to live up to their specs. For whatever reason (maybe most or all just don't treat unallocated space as OP?), they seem to specifically benefit from TRIM, and get slow without it.
And, anyway, the drive doesn't have to wait for TRIM, unless you are constantly filling it all the way up, then freeing a little bit, at which point you should consider using the SSD as an RST cache device, instead. You should leave a good bit of space constantly free, which will benefit both your filesystem and the SSD. If you can't, then either go with RST, or get a big enough SSD that you can leave free space during normal use.
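As a rough sketch of what "leave a good bit of space constantly free" works out to in practice, assuming (as on most modern drives) that trimmed free space acts like extra over-provisioning. The percentages and the 240 GB capacity are assumptions for illustration, not figures from any datasheet:

```python
# How much usable space to keep free so the controller always has
# spare area, given some factory over-provisioning. All numbers are
# illustrative assumptions, not specs of any particular drive.
def space_to_leave_free(user_capacity_gb: float, target_spare: float,
                        factory_op: float) -> float:
    """GBs to keep free so factory OP + free space reaches target_spare
    (both OP figures expressed as fractions of user capacity)."""
    return max(0.0, target_spare - factory_op) * user_capacity_gb

# e.g. a 240 GB drive with ~7% factory OP, aiming for ~20% total spare:
print(space_to_leave_free(240, 0.20, 0.07))  # ~31 GB to keep free
```

The exact target is a judgment call; the point is only that it is a modest, fixed chunk of the drive, not something that requires repartitioning.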
SF drives have additional advantages, due to how they work with data compression and how TRIM behaves differently on them. Most notably, filling up a SF drive completely with incompressible data results in a permanent reduction in performance: not particularly likely, but better safe than sorry; better to choose another drive if you're worried about it. Those advantages come with major caveats when the drive isn't used in an ideal environment (such as a single-drive volume with Windows 7 or newer).
(b) not true, small writes are actually most affected by provisioned space;
Please do explain. How is going from, say, 8 GB spare to anything higher really going to make much difference? It's not. The drive needs to write that 16 KB, and write metadata updates for that 16 KB. It almost surely already has blocks ready for such data to be written to. Going from spare space that is 400,000x the write to 1,000,000x the write can't do that much.
With many small writes coming in quickly, as a large aggregate write set, such as may be common for some servers, it can (specifically, the drive should be able to detect the write pattern and pre-allocate extents, cleaning up partly-used extents later on). But when the total is still only several GBs/day, and never tens or hundreds of MBs in succession as many small writes, it's just not going to make much difference. The overhead of remapping is going to be a large portion of the write, and general runtime GC efficacy will make up the rest of the differences. How much may vary by drive, and the total writes will still be lower on SF drives, but it's having to keep up with the metadata and new locations that does it for small writes, much more than OP, simply because the OP is already huge relative to the write load.
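The back-of-the-envelope numbers behind that: the spare area is already enormous relative to a single small write, while the per-write metadata overhead is not. The 8 GB spare, 16 KB write, and 4 KB mapping-update figures are assumptions for the sketch, not measurements from any controller:

```python
# Spare area vs. one small write: hundreds of thousands of times larger.
spare_bytes = 8 * 1024**3   # assumed 8 GB of over-provisioning
write_bytes = 16 * 1024     # one 16 KB random write
print(spare_bytes // write_bytes)  # 524288 -- the scale the argument relies on

# Write-amplification floor from mapping metadata alone, which doesn't
# shrink no matter how much OP you add (4 KB map update assumed).
def wa_floor(user_bytes: int, metadata_bytes: int) -> float:
    """NAND bytes written per user byte, counting only the mapping
    update that must accompany the write."""
    return (user_bytes + metadata_bytes) / user_bytes

print(wa_floor(16 * 1024, 4 * 1024))    # small write: 1.25
print(wa_floor(1024 * 1024, 4 * 1024))  # large write: ~1.004
```

Under these assumed figures, the metadata tax dominates small writes regardless of whether the spare area is 400,000x or 1,000,000x the write size.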
Their numbers aren't all wrong, but they are cherry-picked advertising material, largely based on server benchmarks with mostly-text data.
(c) true, but not having what makes the most of something doesn't mean we can't get a bit of the benefit for ourselves anyway.
You need to reboot to change partition sizes on your boot drive, in the best case. With TRIM, you just leave free space, just like with a HDD. Only TRIM lets you have your cake and eat it, too (on desktops, anyway): using all the space if you need to right now, but benefiting from leaving it free.
Haha, no, but I see your concern. Without being over-dramatic, drive endurance over multiple years is an issue, especially because SSDs are often re-used in laptops after being retired from desktops. I'm speaking from experience with an X25-M G1 failure due to bad blocks after three years or so; yes, controllers have improved a lot since, but newer processes are bringing us NAND capable of fewer and fewer program/erase cycles, not to mention TLC drives.
And that is why I'm not going to use TLC, at least not until this "DSP" stuff has proven it can really increase write life to ~3K cycles or beyond. The reduction in cost is simply too small for the risk, IMO. If it's working, and not PATA, I can find a use for it.
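For a sense of why the P/E-cycle rating matters more than controller polish over a multi-year life, here is a crude endurance estimate. Every figure (capacities, cycle ratings, the 1.5 WA, the 20 GB/day load) is an illustrative assumption, not the spec of any real drive:

```python
# Rough years-to-wear-out for NAND with a given P/E-cycle rating under
# a given write load. All inputs are assumptions for illustration.
def endurance_years(capacity_gb: float, pe_cycles: int,
                    wa: float, gb_written_per_day: float) -> float:
    """Host writes the NAND can absorb, spread over a daily write load."""
    total_host_writes_gb = capacity_gb * pe_cycles / wa
    return total_host_writes_gb / gb_written_per_day / 365

# Assumed older MLC (5K cycles) vs. newer-process TLC (1K cycles):
print(endurance_years(80, 5000, 1.5, 20))   # decades of margin
print(endurance_years(120, 1000, 1.5, 20))  # roughly a decade
```

Even with generous assumptions, the lower-cycle part eats most of the margin that made early MLC drives forgiving, which is the whole worry about TLC.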