LessFS is a filesystem deduplication project. The aim is to reduce disk usage where filesystem blocks are identical by storing only one copy of the block and using pointers to it for the duplicates. This method of storage is becoming popular in enterprise solutions, particularly for reducing the size of disk-based backups and for minimising virtual machine storage.
- Name: Duncan Innes
- Email: duncan AT innes DOT net
- Targeted release: ?
- Last updated: 2010-11-12
- Percentage of completion: 0%
Data deduplication is often used for backup purposes and for virtual machine image storage. lessfs determines whether data is redundant by calculating a unique 192-bit Tiger hash of each block of data that is written. When lessfs has determined that a block of data needs to be stored, it first compresses the block with LZO or QUICKLZ compression. The combination of these two techniques results in a very high overall compression rate for many types of data. Already-compressed multimedia files such as mp3, avi, or jpg cannot be compressed further by lessfs, but deduplication still saves space when they are stored more than once on the filesystem.
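The write path described above (hash each block, store only unseen blocks, compress what is stored) can be sketched as follows. This is a toy illustration, not lessfs code: Python's hashlib has no Tiger hash, so SHA-256 stands in for the 192-bit Tiger hash, and zlib stands in for LZO/QUICKLZ; all names here are illustrative.

```python
import hashlib
import zlib  # stand-in for LZO/QUICKLZ, which need third-party bindings

BLOCK_SIZE = 4096  # illustrative block size


class DedupStore:
    """Toy block store: each unique block is compressed and kept once."""

    def __init__(self):
        self.blocks = {}  # digest -> compressed block data

    def write(self, data: bytes) -> list:
        """Split data into blocks, store unseen ones, return the pointer list."""
        pointers = []
        for off in range(0, len(data), BLOCK_SIZE):
            block = data[off:off + BLOCK_SIZE]
            # lessfs uses a 192-bit Tiger hash; SHA-256 stands in here.
            digest = hashlib.sha256(block).digest()
            if digest not in self.blocks:
                # Only previously unseen blocks are compressed and stored.
                self.blocks[digest] = zlib.compress(block)
            pointers.append(digest)
        return pointers


store = DedupStore()
# Three identical blocks of "A" plus one block of "B":
ptrs = store.write(b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE)
print(len(ptrs), len(store.blocks))  # 4 pointers, but only 2 stored blocks
```

The file is represented by the pointer list, while the physical store holds one compressed copy per unique block, which is where the space savings come from.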
Benefit to Fedora
This will bring an enterprise tool not yet available in Fedora. Storage is becoming one of the biggest consumers of energy in the datacentre, and de-duplication will help bring that power and cost requirement down. Inclusion of LessFS (even as a technology preview) will broaden Fedora's storage coverage and help push forward an open-source method of de-duplication.
LessFS adds functionality that allows de-duplicated filesystems to be created.
How To Test
No special hardware requirements.
A Package Review Request is currently sitting in Bugzilla (https://bugzilla.redhat.com/show_bug.cgi?id=530473) but appears to have stalled.
Once the package is installed, a filesystem can then be created.
Create a filesystem at /data/orig on a normal partition. Create a filesystem at /data/less as a de-duplicated FUSE filesystem using LessFS.
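One possible way to set up the two filesystems is sketched below. The device name and config path are illustrative, and the lessfs command names (mklessfs to initialise the databases, lessfs to mount) follow the project's documented tools; check the lessfs man pages for the exact options on your version before running anything.

```shell
# Ordinary partition for the baseline copy (device name is illustrative).
mkfs.ext4 /dev/sdb1
mkdir -p /data/orig
mount /dev/sdb1 /data/orig

# Initialise the lessfs databases, then mount the FUSE filesystem.
# /etc/lessfs.cfg names the tokyocabinet database locations.
mkdir -p /data/less
mklessfs -c /etc/lessfs.cfg
lessfs /etc/lessfs.cfg /data/less
```

Both mounts then behave as ordinary directories; only /data/less de-duplicates and compresses what is written to it.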
Create a directory and file structure in /data/orig that uses multiple copies of a few large files: renamed copies in the same directory and same-name copies in different directories. Files should span multiple blocks for optimum testing. Note that data from /dev/random will still de-duplicate across identical copies but will not compress; include text or other structured data to also exercise the LZ compression. Once /data/orig holds a good amount of test data (multiple GB is better, but not strictly necessary), copy all the data to /data/less.
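The test-data layout above can be generated with a short script. The path and sizes are arbitrary choices for illustration; repeated text is used so the data both de-duplicates and compresses well.

```shell
#!/bin/sh
# Hypothetical staging path; in the actual test this would be /data/orig.
ORIG=/tmp/data-orig
mkdir -p "$ORIG/a" "$ORIG/b"

# A compressible, multi-block file: 8 MiB of repeated text.
yes "lessfs deduplication test line" | head -c $((8 * 1024 * 1024)) > "$ORIG/a/base.dat"

# A renamed copy in the same directory.
cp "$ORIG/a/base.dat" "$ORIG/a/base-renamed.dat"

# A same-name copy in a different directory.
cp "$ORIG/a/base.dat" "$ORIG/b/base.dat"

du -sh "$ORIG"
```

Scaling the file size up or adding more copies gives the multi-GB data set the test calls for.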
An rsync (or recursive diff) should show that /data/orig and /data/less contain identical data, while checking the disk usage of /data/less will show that it occupies less space.
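The two checks can be run as below. Plain temporary directories stand in here for the real /data/orig and /data/less mounts, so the commands are self-contained; on an actual lessfs mount, the du output is where the de-duplication savings would show up.

```shell
#!/bin/sh
# Stand-in directories for /data/orig and /data/less (hypothetical paths).
ORIG=/tmp/check-orig
LESS=/tmp/check-less
mkdir -p "$ORIG" "$LESS"
printf 'sample payload\n' > "$ORIG/file.dat"
cp "$ORIG/file.dat" "$LESS/file.dat"

# diff -r exits 0 only when the two trees have identical contents.
if diff -r "$ORIG" "$LESS" > /dev/null; then
    echo "trees identical"
fi

# du reports the space actually allocated to each tree.
du -s "$ORIG" "$LESS"
```

`rsync -aun --itemize-changes "$ORIG/" "$LESS/"` is an equivalent dry-run check: empty output means nothing would be transferred.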
In my view, this package is not aimed at filesystems requiring maximum read/write speed; it is better suited to filesystems with a low rate of change. Filesystems with high capacity requirements benefit the most.
De-duplication will be noticeable to target users by greatly reducing the disk space requirements for backups to disk and for virtual machine storage. Greater reductions are seen where many images/backups share a common data set.
None necessary - this is a new feature and does not change any current part of Fedora.
- A FUSE filesystem that allows for high-performance inline data de-duplication, using tokyocabinet for the database.