
Nitin Gupta

  • Student: NitinGupta (nitingupta910 at gmail dot com)
  • Assigned Mentor: RikvanRiel
  • Project Home Page
  • Project Mailing List: linux-mm-cc at laptop dot org

This page summarizes the project I started as part of Google Summer of Code 2006.

Synopsis

Compressed caching is the division of main memory into two pools: an 'uncompressed cache' of pages in their natural, uncompressed representation, and a 'compressed cache' of pages in compressed form. It inserts a new level into the virtual memory hierarchy: a portion of main memory is allocated to the compressed cache and is used to store pages compressed by data compression algorithms.
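
The idea can be illustrated with a minimal user-space sketch: a page is placed in the compressed pool only when compression actually makes it smaller; otherwise it stays in the uncompressed pool. Everything in the sketch (the use of zlib, the structure and function names, the 4 KB page size) is illustrative and is not the project's actual kernel code.

    /* Minimal user-space illustration of the two-pool idea; not the
     * actual kernel implementation. zlib is used purely for illustration.
     * Build with: gcc two_pool_sketch.c -lz */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zlib.h>

    #define PAGE_SIZE 4096

    struct cached_page {
        int compressed;          /* 0: uncompressed pool, 1: compressed pool */
        unsigned long len;       /* number of bytes actually stored */
        unsigned char *data;     /* PAGE_SIZE bytes, or 'len' compressed bytes */
    };

    /* Store a page, compressing it only if that actually saves memory. */
    static void store_page(struct cached_page *cp, const unsigned char *page)
    {
        unsigned long clen = compressBound(PAGE_SIZE);
        unsigned char *cbuf = malloc(clen);

        if (cbuf && compress(cbuf, &clen, page, PAGE_SIZE) == Z_OK &&
            clen < PAGE_SIZE) {
            cp->compressed = 1;
            cp->len = clen;
            cp->data = cbuf;     /* goes to the compressed pool */
            return;
        }
        /* Incompressible page: keep it in the uncompressed pool. */
        free(cbuf);
        cp->compressed = 0;
        cp->len = PAGE_SIZE;
        cp->data = malloc(PAGE_SIZE);
        memcpy(cp->data, page, PAGE_SIZE);
    }

    int main(void)
    {
        unsigned char page[PAGE_SIZE];
        struct cached_page cp;

        memset(page, 'A', PAGE_SIZE);    /* a highly compressible page */
        store_page(&cp, page);
        printf("stored %lu of %d bytes (%s)\n", cp.len, PAGE_SIZE,
               cp.compressed ? "compressed" : "uncompressed");
        free(cp.data);
        return 0;
    }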

Storing a number of pages in compressed form increases the effective memory size, and this enlargement reduces the number of accesses to backing-store devices, typically slow hard disks.

This method takes advantage of the ever-increasing gap between CPU processing power and disk latency: a disk is currently several orders of magnitude slower to access than main memory. This gap leaves the CPU underutilized whenever the system's memory needs exceed the available RAM. By increasing the effective memory size, the number of disk accesses can be reduced, resulting in better CPU utilization.
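
As a rough back-of-envelope illustration of this gap (the latency numbers below are assumptions chosen for illustration, not measurements from this project), servicing a fault from the compressed cache instead of the disk cuts the average fault-service time dramatically even at a modest hit rate:

    /* Back-of-envelope illustration of why avoiding disk accesses helps.
     * All latency numbers are assumed values for illustration only. */
    #include <stdio.h>

    int main(void)
    {
        double disk_us   = 5000.0;  /* assumed fault service time from disk (us) */
        double ccache_us = 50.0;    /* assumed service time from compressed cache (us) */
        double hit_rate  = 0.5;     /* fraction of faults absorbed by compressed cache */

        double avg_without = disk_us;
        double avg_with    = hit_rate * ccache_us + (1.0 - hit_rate) * disk_us;

        printf("average fault cost: %.0f us without, %.0f us with compressed cache\n",
               avg_without, avg_with);
        return 0;
    }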

Benefits to the Linux Community

When an application's working set does not fit within the available RAM, its performance degrades painfully. The compressed caching technique effectively increases the size of available RAM and helps keep the application's working set from being swapped out to slow disks. Thus, all application domains where physical RAM is insufficient (i.e., under high memory pressure) will benefit from the performance enhancement provided by this feature. It also incurs minimal or no overhead in cases where plenty of RAM is available.

Apart from this general applicability, the following are some particularly interesting scenarios:

  • LiveCD environments: Here, swap space may not be available at all, and CD-ROM access is extremely slow. A large number of programs can be pre-fetched and kept compressed in memory, resulting in much better performance.
  • Virtualized platforms: Running several virtual machines simultaneously requires large amounts of memory. With compressed caching, guest OSes (Linux) can run with less assigned memory, allowing a greater number of virtual machines to run smoothly at the same time.
  • Low-memory systems: For example, Linux with a graphical desktop environment such as KDE or GNOME, plus OpenOffice, on a system with only 128 MB of RAM.
  • OLPC project: Systems with very constrained memory.

Deliverables

A set of modules for the Linux 2.6.x kernel.

Available for download here.

Project Details

The module creates a virtual block device that acts as a swap disk. Pages swapped out to this device are compressed and stored in memory itself.
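
The core mechanism can be sketched in user space as follows. This is only a sketch under assumptions (zlib in place of the module's compressor, a plain array in place of the TLSF-managed store, made-up names); it is not the module's actual code, which runs in kernel space.

    /* User-space sketch of the compressed RAM "swap disk" idea: a write
     * compresses a page and keeps the result in memory; a read decompresses
     * it back into a full page. Illustration only, not the kernel module.
     * Build with: gcc swap_sketch.c -lz */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zlib.h>

    #define PAGE_SIZE 4096
    #define NR_SLOTS  1024          /* size of the emulated swap device in pages */

    struct slot {
        unsigned char *data;        /* compressed page, or NULL if the slot is free */
        unsigned long len;          /* compressed length in bytes */
    };

    static struct slot slots[NR_SLOTS];

    /* "Swap out": compress the page and store the result in the slot. */
    static int swap_write(unsigned int idx, const unsigned char *page)
    {
        unsigned long clen = compressBound(PAGE_SIZE);
        unsigned char *cbuf = malloc(clen);

        if (!cbuf || compress(cbuf, &clen, page, PAGE_SIZE) != Z_OK) {
            free(cbuf);
            return -1;
        }
        free(slots[idx].data);      /* drop any previous content of this slot */
        slots[idx].data = cbuf;
        slots[idx].len = clen;
        return 0;
    }

    /* "Swap in": decompress the slot back into a full page. */
    static int swap_read(unsigned int idx, unsigned char *page)
    {
        unsigned long dlen = PAGE_SIZE;

        if (!slots[idx].data ||
            uncompress(page, &dlen, slots[idx].data, slots[idx].len) != Z_OK)
            return -1;
        return 0;
    }

    int main(void)
    {
        unsigned char out[PAGE_SIZE], in[PAGE_SIZE];

        memset(out, 7, PAGE_SIZE);
        swap_write(0, out);
        swap_read(0, in);
        printf("round trip %s, page stored as %lu bytes\n",
               memcmp(out, in, PAGE_SIZE) ? "failed" : "ok", slots[0].len);
        return 0;
    }

In the real module, the compressed chunks are of variable size, which is where the TLSF allocator listed in the references comes in; the sketch above only shows the data path.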

It is very simple to use and requires no kernel patching at all. The project help page gives more information.

The ultimate goal is to get this into the mainline kernel :)

Project Progress

It is currently very stable, at least on x86, and shows good performance improvements under memory pressure. I will summarize test results on the project home page soon.

References

  • TLSF Allocator: Two-Level Segregate Fit, the allocator currently used by compcache (http://rtportal.upv.es/rtmalloc/)
  • Rodrigo Castro: Adaptive Compressed Caching: Design and Implementation (http://linuxcompressed.sourceforge.net/files/docs/paper.pdf)
  • Scott F. Kaplan: Compressed Caching and Modern Virtual Memory Simulation (http://www.cs.amherst.edu/~sfkaplan/papers/sfkaplan-dissertation.ps.gz)
  • Irina Chihaia and Thomas Gross: Adaptive Main Memory Compression (http://www.lst.inf.ethz.ch/research/publications/publications/USENIX_2005/USENIX_2005.pdf)