Burrows-Wheeler Transform Discussion and Implementation
by Michael Dipperstein

After playing with Huffman coding, LZSS, arithmetic coding, and run length encoding (RLE), you'd figure that I'd have had enough. I thought so too. I briefly diverted my attention from compression with a venture into C++ implementations of my bit libraries, but compression curiosities came back to haunt me. I had long known that bzip2 achieves better compression ratios than gzip. I even knew that the "b" is for "block", but I never really understood how it worked. Where do blocks come into play, and how do they improve compression ratios? It turns out that the blocks are constructs of the Burrows-Wheeler block-sorting transform.

This page discusses the Burrows-Wheeler transform (BWT) and the related Move-To-Front coding (MTF). I have also provided links to my implementation of the transform and Move-To-Front coding. As with my compression implementations, my intent is to publish an easy to follow ANSI C implementation of the Burrows-Wheeler transform. Anyone familiar with ANSI C and BWT should be able to follow and learn from my implementation. There's a lot of room for improvement in speed, memory usage, and block size, but this project is about learning and sharing, not perfection.

If you just want my source code, click here for information on downloading it. The rest of this page discusses BWT, Move-To-Front coding, and the results of my efforts so far.

Algorithm Overview

The Burrows-Wheeler transform (BWT) is not actually a compression algorithm. In fact, BWT requires that a small amount of additional information be stored with the transformed data so that the transformation may be reversed. This makes the transformed data larger than its original form. BWT is useful because it converts the data into a format that is generally more compressible by run length encoders and statistical encoders with order greater than 0. By additionally applying Move-To-Front coding, the data will be in a format which is generally more compressible by even zero order statistical encoders such as traditional implementations of Huffman coding or arithmetic coding.

Transform

The Burrows-Wheeler transform is applied to blocks of input data (symbols). It is usually the case that larger blocks result in greater compressibility of the transformed data, at the expense of time and system resources. One of the effects of BWT is to produce blocks of data with more and longer runs (strings of identical symbols) than those found in the original data. Increasing the number of runs and their lengths tends to improve the compressibility of data.

The first step of BWT is to read in a block of N symbols C0 ... CN-1.

example: this is a test.

If the block is thought of as a cyclic buffer, N strings (rotations) S0 ... SN-1 may be constructed such that each rotation starts at a different offset in the block and wraps around to the symbol just before its starting point.

example: "this is a test." yields the following rotations:

this is a test.
.this is a test
t.this is a tes
st.this is a te
est.this is a t
test.this is a 
 test.this is a
a test.this is 
 a test.this is
s a test.this i
is a test.this 
 is a test.this
s is a test.thi
is is a test.th
his is a test.t

The next step of BWT is to lexicographically sort S0 ... SN-1.

example: "this is a test." yields the following sorted rotations:

 a test.this is
 is a test.this
 test.this is a
.this is a test
a test.this is 
est.this is a t
his is a test.t
is a test.this 
is is a test.th
s a test.this i
s is a test.thi
st.this is a te
t.this is a tes
test.this is a 
this is a test.

Once the rotations have been sorted, the transform should output a string, L, consisting of the last character of each rotation in sorted order, along with I, the index of the sorted row containing S0.

example: "this is a test." produces:
L = "ssat tt hiies ."
I = 14

The input went from a block with 0 runs to a block with 3 runs of length 2 (not bad for 15 symbols). This transformation works on data blocks where there are symbols that are frequently preceded by the same symbol. In my example, ' ' is twice preceded by 's'. Believe it or not, the string of last characters (L), along with the row in which S0 appears after sorting (I), is enough information to reconstruct the original block (S0).
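To make the transform concrete, here is a minimal ANSI C sketch of the forward transform. It follows the approach described in the Implementation section below (rotations represented by their starting indices and sorted with qsort()), but the function and variable names are my own illustration, not the API of the downloadable library.

#include <stdlib.h>

/* A minimal forward-BWT sketch (not the downloadable library's API).
 * It sorts rotation start indices with qsort(), then emits L and I. */

static const unsigned char *g_block;   /* block being transformed */
static size_t g_len;                   /* number of symbols in the block */

/* Compare two rotations by their starting indices, wrapping around the block. */
static int CompareRotations(const void *a, const void *b)
{
    size_t i = *(const size_t *)a;
    size_t j = *(const size_t *)b;
    size_t k;

    for (k = 0; k < g_len; k++)
    {
        unsigned char ci = g_block[(i + k) % g_len];
        unsigned char cj = g_block[(j + k) % g_len];

        if (ci != cj)
        {
            return (ci < cj) ? -1 : 1;
        }
    }

    return 0;   /* identical rotations (block is periodic) */
}

/* Transform block[0..len-1] into last[0..len-1] (L) and return I, the
 * position of the unrotated block among the sorted rotations. */
size_t BwtEncode(const unsigned char *block, size_t len, unsigned char *last)
{
    size_t *rotation = malloc(len * sizeof(size_t));
    size_t i, index = 0;

    if (rotation == NULL)
    {
        return len;     /* allocation failed; len is never a valid index */
    }

    for (i = 0; i < len; i++)
    {
        rotation[i] = i;        /* rotation i starts at offset i */
    }

    g_block = block;
    g_len = len;
    qsort(rotation, len, sizeof(size_t), CompareRotations);

    for (i = 0; i < len; i++)
    {
        /* last character of the rotation starting at rotation[i] */
        last[i] = block[(rotation[i] + len - 1) % len];

        if (rotation[i] == 0)
        {
            index = i;          /* sorted row holding S0 */
        }
    }

    free(rotation);
    return index;
}

Given the 15 byte block "this is a test.", BwtEncode() should fill last with "ssat tt hiies ." and return 14, matching the example above. Further error handling is omitted for brevity.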
Reverse Transform

Reversing BWT is a little more complicated than the initial transform. The reversal process starts with a string L, composed of the last characters of the sorted rotations (S0 ... SN-1), and I, the position of S0's contribution to L. The reversal process must yield the original block (S0). The approach to reversing the transform wasn't the least bit obvious to me. If it's obvious to you, feel free to skip this section.

It turns out there are a few ways to reverse the transform. The method discussed here is the one that I ended up implementing. If L is composed of the symbols V0 ... VN-1, the transformed string may be parsed to determine the following pieces of additional information:

1. For each position i in L, the number of symbols in V0 ... Vi-1 that are identical to Vi.
2. For each symbol of the alphabet, the number of symbols in L that are lexicographically less than that symbol.
example: L = "ssat tt hiies ." produces the following:

Table 1: Number of matching symbols preceding each position in L

Position  Symbol  Matching symbols preceding it
 0        's'     0
 1        's'     1
 2        'a'     0
 3        't'     0
 4        ' '     0
 5        't'     1
 6        't'     2
 7        ' '     1
 8        'h'     0
 9        'i'     0
10        'i'     1
11        'e'     0
12        's'     2
13        ' '     2
14        '.'     0

Table 2: Number of symbols in L lexicographically less than each symbol

Symbol  Symbols less than it
' '      0
'.'      3
'a'      4
'e'      5
'h'      6
'i'      7
's'      9
't'     12
Now we have:

1. L, the last characters of each of the sorted rotations.
2. I, the position of the contribution that S0 made to L.
3. The number of matching symbols that precede each position in L (Table 1).
4. The number of symbols in L that are lexicographically less than each symbol of the alphabet (Table 2).
Amazingly, this is all we need. Remember that our original string is S0 = C0 ... CN-1. From 1 and 2, we know where CN-1 is within L, therefore we know the value of CN-1.

Next, try to find the symbol CN-2 in L. CN-2 is the last symbol in SN-1, so where is the contribution from SN-1 in L? It just so happens that SN-1 is the string that starts with CN-1, and we just found CN-1. The contributions from strings starting with symbols less than CN-1 occur earlier in L than the contribution from SN-1, so the contribution from SN-1 must occur after the contributions of those strings. L may also contain contributions from strings starting with symbols equal to CN-1; some of these strings occur before SN-1 and others after. From 3, we know the number of symbols identical to CN-1 that occur in L before CN-1. Since L contains rotations of the same block, that same number of contributions will occur before the contribution from SN-1. From 4, we know how many symbols in L are less than CN-1, so we also know the number of strings starting with lesser symbols that contribute before SN-1. No other strings may contribute to L before SN-1, therefore the contribution from SN-1 occurs at the position obtained by summing the values from 3 and 4. As stated earlier, the contribution from SN-1 is CN-2. Use the same technique to find CN-3, CN-4, ..., C0.
example: Using Tables 1 and 2, reverse BWT where L = "ssat tt hiies ." and I = 14.

Since I = 14, the last character of the original block is CN-1 = V14 = '.'. To find CN-2, locate the contribution of the rotation that starts with that '.': Table 1 says no other '.' precedes position 14 in L, and Table 2 says 3 symbols are less than '.', so its contribution is at position 0 + 3 = 3, and CN-2 = V3 = 't'. Repeating the process recovers the block from back to front: "this is a test."
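Here is a matching ANSI C sketch of the reverse transform using the counting method just described. Again, the names are my own and the error handling is minimal; this illustrates the technique rather than reproducing the library's code.

#include <limits.h>
#include <stdlib.h>

/* A compact reverse-BWT sketch (not the downloadable library's API).
 * last[0..len-1] holds L, index holds I; the original block is
 * written to block[0..len-1]. */
int BwtDecode(const unsigned char *last, size_t len, size_t index,
              unsigned char *block)
{
    size_t count[UCHAR_MAX + 1] = {0};  /* occurrences of each symbol in L */
    size_t lessThan[UCHAR_MAX + 1];     /* Table 2: symbols less than c    */
    size_t *matchesBefore;              /* Table 1: identical symbols that
                                           precede each position of L      */
    size_t i, pos, sum;

    matchesBefore = malloc(len * sizeof(size_t));
    if (matchesBefore == NULL)
    {
        return -1;
    }

    for (i = 0; i < len; i++)
    {
        matchesBefore[i] = count[last[i]];  /* matches seen so far */
        count[last[i]]++;
    }

    sum = 0;
    for (i = 0; i <= UCHAR_MAX; i++)
    {
        lessThan[i] = sum;                  /* running total = Table 2 */
        sum += count[i];
    }

    /* Walk backwards: position I holds C[len - 1]; each step jumps to the
     * position holding the previous character of the original block. */
    pos = index;
    for (i = len; i > 0; i--)
    {
        block[i - 1] = last[pos];
        pos = matchesBefore[pos] + lessThan[last[pos]];
    }

    free(matchesBefore);
    return 0;
}

Calling BwtDecode() with last = "ssat tt hiies .", len = 15, and index = 14 should write "this is a test." into block.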
"ssat tt hiies ." and I = 14. So What Do I Do with BWT?Now that you understand BWT, you can publish web pages about it. Okay, that's not what you're looking for. As I stated earlier BWT tends to transform blocks of symbols into blocks with more runs. This makes BWT useful for improving compression of run length encoders. The method for compression would be:
The method for decompression would be:

1. Read in a run length encoded block of data.
2. Remove the run length encoding from the block.
3. Apply the reverse BWT to the block.
4. Repeat until there is no more data to be read.
Typically, the compression performed by run length encoders is far from optimal. Even after BWT, run length encoders don't perform as well as statistical encoders. BWT doesn't do anything to change the frequency distribution of symbols, so BWT alone won't do anything to make data more compressible by order zero statistical algorithms. That's where Move-To-Front coding comes into play.

Move-To-Front Coding

Move-To-Front (MTF) coding is a technique that encodes streams of symbols using an adapting code. Imagine that symbols are encoded by their position in a list of every symbol in the alphabet. Initially the list is in a lexicographical (or some other predetermined) order. Once a symbol in a stream is encoded, that symbol is moved from its current position to the front of the list and its former predecessors are moved one position back.

example: Given an alphabet of ' ','.','a','e','h','i','s','t', encode "ssat tt hiies .":

Symbol  Code  List after encoding
's'     6     "s .aehit"
's'     0     "s .aehit"
'a'     3     "as .ehit"
't'     7     "tas .ehi"
' '     3     " tas.ehi"
't'     1     "t as.ehi"
't'     0     "t as.ehi"
' '     1     " tas.ehi"
'h'     6     "h tas.ei"
'i'     7     "ih tas.e"
'i'     0     "ih tas.e"
'e'     7     "eih tas."
's'     6     "seih ta."
' '     4     " seihta."
'.'     7     ". seihta"

The encoded block is 6,0,3,7,3,1,0,1,6,7,0,7,6,4,7.
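Before moving on, here is a brute-force sketch of the encoding process just illustrated, of the kind described in the MTF Encode/Decode section below. It operates over the full 256 symbol byte alphabet rather than the 8 symbol alphabet of the example, and the names are mine, not the library's.

#include <limits.h>
#include <stddef.h>

/* A brute-force MTF encoder sketch.  The symbol list is kept in an array;
 * each symbol is encoded as its current position, then moved to the front. */
void MtfEncode(const unsigned char *in, size_t len, unsigned char *codes)
{
    unsigned char list[UCHAR_MAX + 1];
    size_t i;
    int j;

    /* start with the list in lexicographical (numerical) order */
    for (j = 0; j <= UCHAR_MAX; j++)
    {
        list[j] = (unsigned char)j;
    }

    for (i = 0; i < len; i++)
    {
        /* sequential search for the symbol's current position */
        for (j = 0; list[j] != in[i]; j++)
        {
            ;
        }

        codes[i] = (unsigned char)j;

        /* move the symbol to the front of the list */
        for (; j > 0; j--)
        {
            list[j] = list[j - 1];
        }
        list[0] = in[i];
    }
}

Because the search starts at the front of the list, the second and later symbols of a run are always found immediately and encoded as 0, just as in the example.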
Notice that with MTF coding, the second and successive characters in any run are converted to 0 ("ss" became 6,0, "tt" became 1,0, and "ii" became 7,0). This works well with BWT, since it produces blocks with a lot of runs. You should also notice that there is nothing about MTF coding that requires prior use of BWT.

Move-To-Front Decoding

Unlike the reverse BWT, Move-To-Front (MTF) decoding was fairly obvious to me. It's very much like the encoding process, but a little less time intensive. This time the position of a symbol in the list of every symbol in the alphabet is used to decode that symbol. Like the encode process, the list starts out in a lexicographical (or some other predetermined) order. The encoded data indicates the position of the decoded symbol. After decoding a symbol, move it to the front of the list.

example: Given an alphabet of ' ','.','a','e','h','i','s','t', decode 6,0,3,7,3,1,0,1,6,7,0,7,6,4,7:

Code  Symbol  List after decoding
6     's'     "s .aehit"
0     's'     "s .aehit"
3     'a'     "as .ehit"
7     't'     "tas .ehi"
3     ' '     " tas.ehi"
1     't'     "t as.ehi"
0     't'     "t as.ehi"
1     ' '     " tas.ehi"
6     'h'     "h tas.ei"
7     'i'     "ih tas.e"
0     'i'     "ih tas.e"
7     'e'     "eih tas."
6     's'     "seih ta."
4     ' '     " seihta."
7     '.'     ". seihta"

The decoded block is "ssat tt hiies .", the output of the BWT example above.
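A matching brute-force decoder sketch (again over the full byte alphabet, with my own names rather than the library's):

#include <limits.h>
#include <stddef.h>

/* A brute-force MTF decoder sketch.  Each code is an index into the current
 * symbol list; the decoded symbol is then moved to the front, mirroring the
 * encoder. */
void MtfDecode(const unsigned char *codes, size_t len, unsigned char *out)
{
    unsigned char list[UCHAR_MAX + 1];
    unsigned char symbol;
    size_t i;
    int j;

    /* start with the same lexicographical (numerical) order as the encoder */
    for (j = 0; j <= UCHAR_MAX; j++)
    {
        list[j] = (unsigned char)j;
    }

    for (i = 0; i < len; i++)
    {
        symbol = list[codes[i]];    /* decoding is a direct lookup */
        out[i] = symbol;

        /* move the decoded symbol to the front of the list */
        for (j = codes[i]; j > 0; j--)
        {
            list[j] = list[j - 1];
        }
        list[0] = symbol;
    }
}

Decoding needs no search at all, since the code is a direct index into the list, which is why it is a little less time intensive than encoding.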
So What Do I Do with MTF?

Now that you understand MTF, you can publish web pages about it too. Okay, it was a bad joke the first time. As I stated earlier, MTF tends to increase the frequency of low value symbols within a block. This makes MTF useful for improving the compression of statistical encoders such as Huffman coding or arithmetic coding. The method for compression would be:

1. Read in a block of data.
2. Apply BWT to the block.
3. Apply MTF coding to the transformed block.
4. Apply Huffman (or arithmetic) coding to the MTF coded block.
5. Repeat until there is no more data to be read.
The method for decompression would be:

1. Read in a block of encoded data.
2. Remove the Huffman (or arithmetic) coding from the block.
3. Remove the MTF coding from the block.
4. Apply the reverse BWT to the block.
5. Repeat until there is no more data to be read.
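To show how the pieces fit together, here is a rough sketch of the compression loop built from the BwtEncode() and MtfEncode() sketches earlier on this page. EntropyEncodeBlock() is a hypothetical placeholder for whatever statistical encoder you choose (a Huffman or arithmetic coder, for example); it is not part of this package, and a real compressor would also need matching stream framing on the decompression side.

#include <stdio.h>
#include <stddef.h>

#define BLOCK_SIZE 4096     /* same default block size discussed below */

/* sketches defined earlier on this page */
size_t BwtEncode(const unsigned char *block, size_t len, unsigned char *last);
void MtfEncode(const unsigned char *in, size_t len, unsigned char *codes);

/* hypothetical back end; stands in for any order zero statistical encoder
 * (Huffman, arithmetic, ...) and is NOT part of this library */
void EntropyEncodeBlock(const unsigned char *block, size_t len, FILE *outFile);

/* Sketch of a block-by-block compressor: read a block, apply BWT, apply
 * MTF coding, then hand the result to a statistical encoder.  A matching
 * decompressor runs the stages in reverse order.  Error handling is
 * omitted and the framing is deliberately simplistic. */
void CompressFile(FILE *inFile, FILE *outFile)
{
    unsigned char block[BLOCK_SIZE];    /* raw input block         */
    unsigned char last[BLOCK_SIZE];     /* L for the current block */
    unsigned char codes[BLOCK_SIZE];    /* MTF codes for L         */
    size_t len;
    size_t index;

    while ((len = fread(block, 1, BLOCK_SIZE, inFile)) > 0)
    {
        index = BwtEncode(block, len, last);    /* step 2: BWT        */
        MtfEncode(last, len, codes);            /* step 3: MTF coding */

        /* the index I is written before the block data, echoing the
         * I-then-L ordering discussed in the End Of Block section below */
        fwrite(&index, sizeof(index), 1, outFile);
        EntropyEncodeBlock(codes, len, outFile);    /* step 4 */
    }
}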
Implementation

I was extremely uncreative here. My implementation comes directly from "A Block-sorting Lossless Data Compression Algorithm" by Michael Burrows and David Wheeler. The source code includes comments which reference sections of their paper. My hope is that anybody reading their paper will be able to follow along with my source. This section addresses issues that were either discussed by Burrows and Wheeler or required to implement their transform on real data.

Block Size

BWT is a block sorting algorithm, but the transform isn't dependent upon the size of the block. BWT can be used on blocks of any size. When I was debugging my code I used 16 byte blocks; they are currently 4096 bytes. I chose 4096 bytes because it can be handled by most modern computers. The larger the block, the more opportunity you have for runs and repeated patterns. Burrows and Wheeler demonstrate improved compression ratios all the way up to 103 megabyte blocks for one benchmark file. The obvious downside to large blocks is that blocks must be stored somewhere, so you need enough memory to hold a block and its rotations. An N byte block will have N rotations. The rotations must also be sorted, and sorting is not a linear operation, so larger blocks will also take more time.

End Of Block

In an ideal world, any encoded data would be evenly divisible by whatever block size the transform happens to be using. Unfortunately, I don't live in that world. More often than not, the last block is a partial block. There are a couple of ways to deal with this that I can think of: prefix each block with its size, or use the end of the file itself to indicate a partial final block.
When I started playing with BWT, I thought that I might want to dynamically adjust the block size, so I prefixed each block with its size. It also allowed for some error checking; if the file ended before I read all the data in a block, something bad happened. The dynamic block size adjustment never came into being, so the code I'm releasing actually takes the more compact approach of using the EOF as an indicator of a partial block.

In order to handle a partial block at the end of a file, I made one minor change to the BWT algorithm. Burrows and Wheeler output the last characters of the sorted rotations (L), followed by the index of the contribution of the unrotated block (I). If L is written before I, an attempt to read all the characters of a full size block will result in reading some or all of I before the EOF is reached. L can always be adjusted by removing the data that belongs to I, but there's an easier approach: my implementation writes I, then L. If we hit an EOF and L is a partial block, we don't need to do any magic to get I; we already have it.

Block Rotations

Burrows and Wheeler discuss a conceptual N × N matrix containing the rotations of N symbol long blocks, but they leave implementation of the concept to the implementor. I'm pretty sure they don't expect anybody to use an actual N × N matrix. Since rotations are just strings starting at index i in the block and wrapping around until index (i - 1) mod N, I just refer to my rotations by their starting indices. My matrix of rotations is just an array of indices into the unrotated block.

Sorting of Rotations

In order to keep things simple, I use the qsort() function to sort rotations. The efficiency of qsort() really depends upon your compiler. You'd think it was just a textbook quick sort, but it isn't always; I think gcc uses a merge sort. If you want to play with different sort functions, my sort library includes implementations of common sort algorithms which are plug-in replacements for qsort().

It turns out that the sorting of rotations is one of the most time consuming steps in the encode process. Optimizing the sorting process seems to be a pretty hot topic, with several published papers. Burrows and Wheeler suggest optimizing the process by using a radix sort to sort all rotations by their first two characters. Then quick sort is used to sort within the groups of rotations that match at their first two characters. Versions 0.4 and later of my software introduce this optimization. It should be pointed out that using a radix sort on the first two characters prior to using quick sort isn't always faster than using quick sort alone. Suppose an entire block is just one repeated character: all rotations in that block start with the same two characters, so a radix sort of such a block will not reduce the number of comparisons performed by quick sort. A sketch of this two-stage sort appears below.
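The following is a rough sketch of that two-stage sort, assuming it lives in the same file as the forward transform sketch above so that it can reuse g_block, g_len, and CompareRotations(). It illustrates the idea; it is not the code from version 0.4 of the library.

#include <stdlib.h>
#include <limits.h>

/* Two-stage sort sketch: a counting (radix) sort groups rotation start
 * indices by their first two characters, then qsort() finishes each group
 * using the full CompareRotations() comparison defined earlier. */

/* two-character key of the rotation starting at index i */
static size_t LeadingPair(size_t i)
{
    return ((size_t)g_block[i] << CHAR_BIT) | g_block[(i + 1) % g_len];
}

int SortRotations(size_t *rotation, const unsigned char *block, size_t len)
{
    const size_t buckets = (size_t)1 << (2 * CHAR_BIT);
    size_t *count = calloc(buckets + 1, sizeof(size_t));
    size_t *tmp = malloc(len * sizeof(size_t));
    size_t i, start;

    if ((count == NULL) || (tmp == NULL))
    {
        free(count);
        free(tmp);
        return -1;      /* allocation failure */
    }

    g_block = block;
    g_len = len;

    /* counting sort by the first two characters of each rotation */
    for (i = 0; i < len; i++)
    {
        count[LeadingPair(rotation[i]) + 1]++;
    }

    for (i = 1; i <= buckets; i++)
    {
        count[i] += count[i - 1];               /* bucket start offsets */
    }

    for (i = 0; i < len; i++)
    {
        tmp[count[LeadingPair(rotation[i])]++] = rotation[i];
    }

    for (i = 0; i < len; i++)
    {
        rotation[i] = tmp[i];                   /* grouped by first 2 chars */
    }

    /* quick sort within each group that matches on its first two characters */
    for (start = 0; start < len; )
    {
        size_t end = start + 1;

        while ((end < len) &&
               (LeadingPair(rotation[end]) == LeadingPair(rotation[start])))
        {
            end++;
        }

        qsort(&rotation[start], end - start, sizeof(size_t), CompareRotations);
        start = end;
    }

    free(count);
    free(tmp);
    return 0;
}

On a block consisting of a single repeated character, every rotation lands in the same bucket, so the qsort() call still sees the entire block; that is the degenerate case mentioned above.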
MTF Encode/Decode

My implementation of MTF encode/decode uses brute force. Apparently there are efficient implementations of MTF that are well known. Not by me, but I didn't go looking for them either. One possible area to look for optimization is in the search for a symbol's code during the encode process. I do a sequential search: it will find the code for a symbol in a run on the first try, but the average search time for other symbols will be on the order of half the alphabet length. Other searches might have better average cases. Another potential area for optimization is the Move-To-Front operation itself. My code is stored in an array, and I move each character over one position at a time. A linked list would make the move faster, but then you can't use the array index as the code.

Portability

All the source code that I have provided is written in strict ANSI C. I would expect it to build correctly on any machine with an ANSI C compiler. I have tested the code compiled with gcc on Linux and mingw on Windows 98.

My implementation defines a value BLOCK_SIZE as 4096. This value may easily be changed. Increasing BLOCK_SIZE generally improves the compressibility of transformed data at the expense of memory and computation time. If BLOCK_SIZE is made larger than INT_MAX, the preprocessor will issue an error which may be corrected by changing several ints to longs. If BLOCK_SIZE is made larger than the maximum size_t, I would expect calls to fread() and fwrite() to produce errors. Whether the errors are compile-time or run-time is probably dependent on the compiler.

Further Information

Michael Burrows and David Wheeler document their algorithm in "A Block-sorting Lossless Data Compression Algorithm". I highly recommend that anybody interested in studying their algorithm make a point of reading this document. Further information about BWT and other block-sorting algorithms may be found at http://www.datacompression.info/BWT.shtml. The site includes links to discussions of the algorithm, studies, libraries, and tools using BWT.

Actual Software

I am releasing my implementation of the Burrows-Wheeler Transform and Move-To-Front coding under the LGPL. At this time I have four revisions of the code to offer. As I add enhancements or fix bugs, I will post them here as newer versions. The larger the version number, the newer the version. I'm retaining the older versions for historical reasons. I recommend that most people download the newest version unless there is a compelling reason to do otherwise. Each version is contained in its own zipped archive which includes the source files and brief instructions for building an executable. None of the archives contain executable programs. A copy of the archives may be obtained by clicking on the links below.
If you have any further questions or comments, you may contact me by e-mail. My e-mail address is: mdipper@alumni.engr.ucsb.edu

For more information on the Burrows-Wheeler transform or other compression related algorithms, visit DataCompression.info.
Last updated on January 4, 2006