But I still wonder what they mean by the term data-at-rest protection. It should mean that every write command that has returned a completion to the O/S driver is considered data at rest.
Data that isn't being written, but sits physically or electrically nearby on the chip, can get corrupted if power is lost while a write is in progress. Developments in the NAND itself have been ongoing to deal with this problem, along with work in the SSD controllers (IMFT has been vocal about such work, but surely everyone else is doing the same).
HDDs are not immune to similar (but not identical) issues, with unrelated data getting scrambled. But HDDs don't risk bricking or total drive corruption from such an event (though hosing a partition can happen, however rarely).
When power is lost, voltages don't fall at the same rate across every wire. Operations currently in flight can continue, but incorrectly, if one or more high voltages drop low during that window. So you want either a way to tag an in-progress write as bad, or a way to keep whatever it might corrupt isolated. If there isn't enough power to complete the write, then as long as the drive can reliably roll back to the state before that write began, it's up to the file system to handle the rest.
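A toy simulation of what that rollback discipline could look like: program a fresh copy first, verify it, and only then flip the logical-to-physical mapping, so an interrupted write leaves the old mapping (and old data) intact. Every name here is invented for illustration; real FTL internals differ.

```c
/* Toy copy-on-write page update with rollback semantics.
 * All structures and names are hypothetical, for illustration only. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NPAGES    8
#define PAGE_SIZE 16

static uint8_t flash[NPAGES][PAGE_SIZE]; /* pretend NAND array  */
static bool    in_use[NPAGES];           /* page allocation map */
static int     l2p[NPAGES];              /* logical -> physical */

static int alloc_free_page(void)
{
    for (int p = 0; p < NPAGES; p++)
        if (!in_use[p]) { in_use[p] = true; return p; }
    return -1;
}

/* Program + verify the new copy, then commit the mapping in one step.
 * If "power" dies before the commit, l2p still points at the old
 * page, so an interrupted write rolls back for free. */
static bool write_page_transactional(int lba, const uint8_t *buf)
{
    int old_ppa = l2p[lba];
    int new_ppa = alloc_free_page();
    if (new_ppa < 0)
        return false;

    memcpy(flash[new_ppa], buf, PAGE_SIZE);            /* program */
    if (memcmp(flash[new_ppa], buf, PAGE_SIZE) != 0) { /* verify  */
        in_use[new_ppa] = false;  /* abandon bad page; old intact */
        return false;
    }

    l2p[lba] = new_ppa;           /* single atomic commit point   */
    if (old_ppa >= 0)
        in_use[old_ppa] = false;  /* retire old copy as garbage   */
    return true;
}

int main(void)
{
    uint8_t data[PAGE_SIZE] = "hello, page 0";
    memset(l2p, -1, sizeof l2p); /* no logical pages mapped yet   */

    if (write_page_transactional(0, data))
        printf("LBA 0 committed at physical page %d\n", l2p[0]);
    return 0;
}
```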
So, yes, it should mean that if the SATA host controller got a success status back, then it ought to be protected. Generically, power-loss protection means that if the drive acknowledged receiving the write command, then pulling power right then should still result in a successful write. So the M-series drives specifying that this does not cover all in-flight data is an important piece of information.
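From the host side, the only portable way to pin down where that line sits is to force the write through the volatile layers yourself, e.g. with fsync(). A minimal POSIX sketch (error handling trimmed to the essentials):

```c
/* A write is only safely "at rest" once fsync() returns, which asks
 * the drive to flush its volatile cache. On a drive with full
 * power-loss protection the flush is cheap; without it, this is
 * where durability actually happens. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char msg[] = "committed record\n";
    int fd = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (write(fd, msg, strlen(msg)) != (ssize_t)strlen(msg)) {
        perror("write");   /* completion here != data at rest     */
        close(fd);
        return 1;
    }
    if (fsync(fd) != 0) {  /* only after this is the data durable */
        perror("fsync");
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}
```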
In higher-end servers, it has not been uncommon for RAID cards, and sometimes high-end HDDs themselves, to carry a battery backup, so that any writes already sent have time to complete. That way, the drives are all in an idle state at power-off, even if the power-off was due to the machine itself crashing rather than a power outage. It's also done that way because many server systems serve remote clients, and those remote nodes need assurance that any data they got a successful ack for was actually committed (which is why a UPS for the whole computer is not sufficient). Consumer systems, and most SMB systems, have none of this, and tend not to need it, either.
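That ack discipline is the same write-then-fsync pattern as above, with the acknowledgment deliberately ordered after durability. A hypothetical handler sketch; recv_record() and send_ack() are invented placeholders, not any real API:

```c
/* The ordering is the whole point: commit locally before acking, so
 * a crash after the ack can never lose data the sender now believes
 * is safe. recv_record()/send_ack() are hypothetical placeholders. */
#include <unistd.h>

extern ssize_t recv_record(int sock, char *buf, size_t len);
extern void    send_ack(int sock);

void handle_record(int sock, int log_fd)
{
    char buf[4096];
    ssize_t n = recv_record(sock, buf, sizeof buf);
    if (n <= 0)
        return;

    if (write(log_fd, buf, (size_t)n) != n)
        return;          /* don't ack what didn't land      */
    if (fsync(log_fd) != 0)
        return;          /* don't ack what isn't durable    */
    send_ack(sock);      /* ONLY now tell the sender        */
}
```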
But that means they also have to protect data that moves from the SLC cache to the MLC storage, or other fancy stuff like that. I hope this is what data-at-rest protection means.
Making everything transactional would handle that fine, so long as there's no risk of unknowingly screwing up a whole block, leaving a half-written page, screwing up another page in another block, and so on. I.e., the data isn't moved but copied, verified, and then the pSLC portion marked as re-usable. That's no different from shuffling around plain MLC, which the drives all have to do regularly anyway.
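Under that model, draining the pSLC cache is just the copy-verify-commit pattern from earlier in a loop, with the cache block only reclaimed once every page in it has landed. Again a sketch with hypothetical names, not any drive's actual firmware:

```c
/* Drain an SLC cache block into MLC with transactional semantics:
 * each page is copied and verified, and the pSLC block is only
 * marked erasable after the whole block has landed. Until then, a
 * power cut just leaves a stale-but-valid cache copy. All names
 * are invented for illustration. */
#include <stdbool.h>

#define PAGES_PER_BLOCK 64

extern bool copy_and_verify_page(int slc_block, int page);
extern void mark_block_erasable(int slc_block);

bool drain_slc_block(int slc_block)
{
    for (int page = 0; page < PAGES_PER_BLOCK; page++) {
        if (!copy_and_verify_page(slc_block, page))
            return false;   /* cache copy still valid; retry later */
    }
    /* Single commit point for the whole block: only now does the
     * pSLC copy stop being the copy of record. */
    mark_block_erasable(slc_block);
    return true;
}
```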
I have yet to see a consumer SSD that is capable of this, though some look like they have the bulk capacitance on-board to support a hot unplug of the drive.
A small handful of Intel drives are pretty much it, just because they share hardware with their enterprise-market brethren. For instance, the Intel 730 is technically a consumer drive.