Efficient and Compact Representations of Prefix Codes


Most of the attention in statistical compression is given to the space used by the compressed sequence, a problem completely solved with optimal prefix codes. In many applications, however, the storage space used to represent the prefix code itself can be an issue. In this paper we introduce and compare several techniques to store prefix codes. Let $N$ be the sequence length and $n$ the alphabet size. A naive storage of an optimal prefix code then uses $\mathcal{O}(n\log n)$ bits. Our first technique shows how to use $\mathcal{O}(n\log\log(N/n))$ bits to store the optimal prefix code. We then introduce an approximate technique that, for any $0<\epsilon<1/2$, takes $\mathcal{O}(n \log \log (1 / \epsilon))$ bits to store a prefix code with average codeword length within an additive $\epsilon$ of the minimum. Finally, a second approximation takes, for any constant $c > 1$, $\mathcal{O}(n^{1 / c} \log n)$ bits to store a prefix code with average codeword length at most $c$ times the minimum. In all cases, our data structures allow encoding and decoding of any symbol in $\mathcal{O}(1)$ time. We experimentally compare our new techniques with the state of the art, showing that we achieve 6--8-fold space reductions at the price of encoding that is 2.5--8 times slower and decoding that is 12--24 times slower. The approximations further reduce this space and improve the time significantly, up to recovering the speed of classical implementations, for a moderate penalty in the average code length. As a byproduct, we compare various heuristic, approximate, and optimal algorithms for generating length-restricted codes, showing that the optimal ones are clearly superior and practical enough to be implemented.
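To make the contrast concrete, the following minimal Python sketch illustrates the classical canonical-code idea that compact representations build on: codewords are assigned in a fixed numeric order determined by their lengths, so only the $n$ codeword lengths, rather than the codewords themselves, need to be stored. This is a textbook construction, not the paper's data structure, and the function names are ours.

```python
import heapq
from collections import Counter

def huffman_code_lengths(freqs):
    # Standard Huffman construction: repeatedly merge the two lightest
    # subtrees; each merge deepens every leaf in them by one level.
    heap = [(f, [s]) for s, f in freqs.items()]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freqs}
    while len(heap) > 1:
        f1, syms1 = heapq.heappop(heap)
        f2, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:
            lengths[s] += 1
        heapq.heappush(heap, (f1 + f2, syms1 + syms2))
    return lengths

def canonical_codes(lengths):
    # Canonical assignment: symbols sorted by (length, symbol) receive
    # consecutive codeword values, shifted left whenever the length grows.
    # Hence the code is fully determined by the lengths alone.
    codes, code, prev_len = {}, 0, 0
    for sym, l in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= (l - prev_len)
        codes[sym] = format(code, '0{}b'.format(l))
        code += 1
        prev_len = l
    return codes

def encode(text, codes):
    return ''.join(codes[c] for c in text)

def decode(bits, codes):
    # Simple prefix-free decoding by accumulating bits; the paper's
    # structures achieve O(1) per symbol with table lookups instead.
    inv = {v: k for k, v in codes.items()}
    out, buf = [], ''
    for b in bits:
        buf += b
        if buf in inv:
            out.append(inv[buf])
            buf = ''
    return ''.join(out)

text = "abracadabra"
lengths = huffman_code_lengths(Counter(text))
codes = canonical_codes(lengths)
roundtrip = decode(encode(text, codes), codes)
```

Storing the code as explicit codewords costs $\mathcal{O}(n\log n)$ bits; with the canonical convention, the length table suffices, which is the starting point the paper's representations compress further.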


All the datasets listed below are fully described in the paper.