You clearly overestimate what can be done with a Lempel-Ziv compressor on an FPGA with a power budget of about 1W. First of all, Lempel-Ziv is old compression technology, and even when run under ideal conditions (able to scan an entire file before building its dictionary), it will not achieve a compression factor anywhere near 50% on the data most users have on their HDDs.
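As a sanity check, it's easy to see what an LZ-family codec achieves on your own files even with no power or realtime constraints at all. Here's a minimal sketch using Python's zlib (DEFLATE, i.e. LZ77 plus Huffman, at maximum effort) as a stand-in for whatever LZ variant the controller uses; point it at whatever file you want to test:

```python
# Rough check of how compressible a file really is under an LZ-family codec.
# Uses zlib (DEFLATE = LZ77 + Huffman) at max effort; the path is up to you.
import sys
import zlib

def compressed_fraction(path, chunk_size=1 << 20):
    """Return compressed size / original size for the given file."""
    comp = zlib.compressobj(level=9)
    original = 0
    compressed = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            original += len(chunk)
            compressed += len(comp.compress(chunk))
    compressed += len(comp.flush())
    return compressed / original if original else 1.0

if __name__ == "__main__":
    frac = compressed_fraction(sys.argv[1])
    print(f"compressed to {frac:.0%} of original size")
```

Run it over a few of your media files, installed games, or VM images and see how often you get anywhere near 50%.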
Second, the top throughput on the chart on the page you linked to is only 175 MB/s (1.4 Gbps). So we have the LZ algorithm delivering much worse than 50% compression, a throughput of only 175 MB/s, and some unspecified power consumption to achieve this thoroughly underwhelming performance.
I repeat, I find it amazing that anyone actually believes Sandforce's claim that an average compression factor of 50% will be achieved for the typical data on most users' drives.
The funny thing is, anyone foolish enough to purchase a Sandforce drive could easily test the average compression factor. Say you have a 60GB Sandforce SSD that has 40GB in use. Take another SSD (I suggest a 120GB Intel X25-M or 128GB Crucial C300), and create an uncompressed tar archive (or equivalent) of the 40GB onto the 2nd SSD. Also create a 40GB file of random (incompressible) data on the 2nd SSD. Secure erase the Sandforce, and use dd or equivalent to write the random file to it. Say the speed is 100 MB/s. Then secure erase the Sandforce again, and dd the tar archive over. Say the speed is 111 MB/s. Since the NAND write rate is the bottleneck in both cases, the ratio of the two speeds tells you how much the controller actually compressed your data: the average compression factor is about 90% (100/111).
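If you want to script that comparison, here is a rough sketch in Python rather than dd. The device name and file names are placeholders, it has to run as root, and it assumes you secure erase the drive before each run; it overwrites the whole device, so only point it at the drive you intend to wipe:

```python
# Rough sketch: time a full sequential write of a file onto a raw block device
# and report MB/s, so the two write speeds in the test above can be compared.
# /dev/sdX and the file names are placeholders; this overwrites the device, so
# run as root and only against the (secure-erased) Sandforce drive.
import os
import time

DEVICE = "/dev/sdX"   # the Sandforce SSD under test (placeholder)
CHUNK = 1 << 20       # 1 MiB per write

def write_throughput(src_path, device=DEVICE):
    """Copy src_path onto the raw device and return the write speed in MB/s."""
    written = 0
    start = time.time()
    with open(src_path, "rb") as src, open(device, "wb") as dst:
        while True:
            chunk = src.read(CHUNK)
            if not chunk:
                break
            dst.write(chunk)
            written += len(chunk)
        dst.flush()
        os.fsync(dst.fileno())   # make sure the data actually reached the drive
    elapsed = time.time() - start
    return written / 1e6 / elapsed

# random.bin: 40GB from /dev/urandom; data.tar: uncompressed tar of your 40GB of files
random_speed = write_throughput("random.bin")   # incompressible baseline
# (secure erase the drive again before this second run)
tar_speed = write_throughput("data.tar")        # your real data

# If NAND writes are the bottleneck, the speed ratio is the average compression factor.
print(f"approximate compression factor: {random_speed / tar_speed:.0%}")
```

With the hypothetical 100 MB/s and 111 MB/s above, this prints roughly 90%.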