Memory Compression

sm625

Diamond Member
May 6, 2011
8,172
137
106
Why isn't it done? I've looked at enough debugger windows to notice that most memory is filled with highly compressible patterns. So why doesn't the IMC compress it before sending it to the DIMMs, and then decompress it when it's read back? The increase in effective bandwidth should more than compensate for the latency increase.
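
To put a number on "highly compressible": a quick sketch of what a general-purpose compressor does to the kind of mostly-zero page you see in a debugger. zlib is just a stand-in here; a real IMC would need something far simpler and faster in hardware.

```python
import zlib

# A hypothetical 4 KiB "page" that is mostly zeros with a little real data,
# similar to the patterns visible in a debugger's memory window.
page = bytearray(4096)
page[0:16] = b"some actual data"

compressed = zlib.compress(bytes(page))
print(len(page), "->", len(compressed), "bytes")  # 4096 -> a few dozen bytes
```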
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
When 16 GB can be purchased for $80, why bother adding complexity?

And no, increased bandwidth does not offset an increase in latency.
 

dbcooper1

Senior member
May 22, 2008
594
0
76
Years ago, there was software available for both the PC and Mac that did do this, but the parameters have changed. Just not worth it anymore.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
VMware ESXi does memory compression to prevent paging. Also, when you run many copies of the same OS, it dedupes identical pages until they change, so you can overcommit heavily: 50 XP VMs with 2 GB each on a 72 GB host, no problem all day long. The problem is you have 12 cores (24 threads) of goodness but you're choked on disk IOPS, so if you can dedupe and compress RAM you reduce the chance of paging, which has a pretty severe impact on VM hosts.
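
For anyone curious how the dedupe half works, a minimal sketch of the content-hashing idea behind ESXi's page sharing (hypothetical structures, not VMware's actual implementation, which also byte-compares pages on a hash match and copies-on-write when a VM modifies a shared page):

```python
import hashlib

# content hash -> the single shared physical copy of that page
shared_pages: dict[bytes, bytes] = {}

def store_page(page: bytes) -> bytes:
    """Return a key for the page, keeping one copy of identical content."""
    digest = hashlib.sha256(page).digest()
    shared_pages.setdefault(digest, page)  # first writer pays, the rest share
    return digest

# 50 VMs booting the same OS produce mountains of identical pages
zero_page = bytes(4096)
keys = [store_page(zero_page) for _ in range(50)]
print(len(shared_pages))  # 1 -- fifty logical pages, one physical copy
```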

SQL Server 2008 R2 can compress the database, and that means the buffers are compressed too! Many databases are highly compressible (50%+), so your 8 GB of RAM buffers can do nearly 16 GB of work. Just put those idle cores to work.
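
Back-of-the-envelope for that claim, assuming a flat 50% compression ratio across cached pages (real ratios vary table by table):

```python
buffer_pool_gb = 8
compression_ratio = 0.5  # assumed: each page shrinks to half its logical size

effective_cache_gb = buffer_pool_gb / compression_ratio
print(effective_cache_gb)  # 16.0 -- the same 8 GB of RAM caches ~16 GB of data
```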

So yeah, compression is still very real and alive. Deduplication is just as important: dedupe first, then compress. I'm surprised Windows 8 doesn't use this more, now that they're pushing toward lower-RAM systems (an ARM desktop with 1 GB of RAM versus a 4-8 GB Core-series machine). If you think about it, any time the system has spare cycles the CPU could churn through some compression and deduplication.

Maybe when Windows becomes a hypervisor desktop they'll bring ESXi-like dedupe/compression technology in. Compression of storage, RAM, and network benefits everyone, IMO.

SandForce uses deduplication, I'm sure of it. Not sure whether they use RLE or LZW compression, but it seems like it: when you send it tons of highly compressible data, write speeds go way up, since the controller isn't writing nearly as much.
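
The size difference is easy to demonstrate. zlib stands in here for whatever SandForce actually uses (which isn't public), but the effect is the same: a repetitive stream shrinks to almost nothing, so far less flash gets written.

```python
import os
import zlib

compressible = b"AAAA0000" * 128 * 1024  # 1 MiB of repeating pattern
random_data = os.urandom(1024 * 1024)    # 1 MiB of incompressible noise

print(len(zlib.compress(compressible)))  # a few KiB -- little flash written
print(len(zlib.compress(random_data)))   # ~1 MiB -- full write cost, no speedup
```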
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
I think dedupe is more valuable for storage than for memory. I tested an appliance from one of the largest dedupe device vendors: our backup set was 6 TB, and 30 days of daily full backups consumed 4.6 TB on disk. Logically that was about 170 TB of data.
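
For scale, the implied reduction ratio from those numbers:

```python
logical_tb = 30 * 6   # 30 daily fulls of a ~6 TB backup set, ~180 TB logical
stored_tb = 4.6

print(round(logical_tb / stored_tb, 1))  # ~39:1 (the quoted ~170 TB gives ~37:1)
```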