Specifically, what programs are out there, and which has the highest compression ratio? I tried Googling it, but it seems experience would trump search results, so I'm asking here.
If file sizes could be specified accurate to the bit, then for any file size N there would be precisely 2^(N+1)-1 possible files of N bits or smaller. For a lossless compressor to map some file to a smaller size, it must map some other file to a larger size; otherwise two different inputs would end up with the same output and could not both be recovered. The only way lossless compression can work is if some possible files can be identified as being more probable than others; in that scenario, the likely files will be shrunk and the unlikely ones will grow.
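To make the counting concrete, here is a small sketch (my own, not part of the original answer) that checks the 2^(N+1)-1 count for small N; since there are more strings of length at most N than of length at most N-1, an injective (lossless) compressor cannot shrink them all:

```python
# Count all bit strings of length 0..n and compare with the closed form
# 2^(n+1) - 1; the strict increase from n-1 to n is the pigeonhole step.
def count_files_up_to(n_bits: int) -> int:
    return sum(2 ** k for k in range(n_bits + 1))  # lengths 0, 1, ..., n_bits

for n in range(1, 8):
    assert count_files_up_to(n) == 2 ** (n + 1) - 1
    # More inputs of length <= n than possible outputs of length <= n - 1,
    # so at least one input cannot be mapped to a strictly shorter output.
    print(n, count_files_up_to(n), count_files_up_to(n - 1))
```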
As a simple example, suppose one wishes to losslessly store a file in which the bits are random and independent, but instead of 50% of the bits being set, only 33% are. One could compress such a file by taking each pair of bits and writing "0" if both bits were clear, "10" if the first bit was set and the second was not, "110" if the second was set and the first was not, or "111" if both bits were set. Each pair of bits would then become one bit 44% of the time, two bits 22% of the time, and three bits 33% of the time. While some strings of data would grow, others would shrink; if the probability distribution were as expected, the pairs that shrank would outnumber those that grew (4/9 of pairs would shrink by one bit, 2/9 would stay the same size, and 3/9 would grow by one bit, so pairs would on average shrink by 1/9 of a bit, and files would on average shrink by 1/18 of a bit per input bit, since the 1/9 figure is per two-bit pair).
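Here is a runnable sketch of that pair-encoding scheme (the codewords are the ones from the paragraph above; the helper names and the 100,000-bit test input are my own):

```python
import random

# Codewords from the scheme above: "00" -> "0", "10" -> "10",
# "01" -> "110", "11" -> "111" (a prefix code, so decoding is unambiguous).
ENCODE = {"00": "0", "10": "10", "01": "110", "11": "111"}

def encode(bits: str) -> str:
    # Assumes an even number of bits.
    return "".join(ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(code: str) -> str:
    out, i = [], 0
    while i < len(code):
        if code[i] == "0":            # "0"   -> both bits clear
            out.append("00"); i += 1
        elif code[i + 1] == "0":      # "10"  -> first set, second clear
            out.append("10"); i += 2
        elif code[i + 2] == "0":      # "110" -> second set, first clear
            out.append("01"); i += 3
        else:                         # "111" -> both set
            out.append("11"); i += 3
    return "".join(out)

random.seed(0)
data = "".join("1" if random.random() < 1 / 3 else "0" for _ in range(100_000))
packed = encode(data)
assert decode(packed) == data
print(len(packed) / len(data))  # roughly 17/18 ≈ 0.944 of the original size
```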
Note that if the bits actually had a 50% distribution, then only 25% of pairs would become one bit, 25% would stay two bits, and 50% would become three bits. Consequently, 25% of pairs would shrink and 50% would grow, so pairs would on average grow by 1/4 of a bit, and files would on average grow by 12.5%. The break-even point is about 38.2% of bits being set (two minus the golden ratio), which yields 38.2% of bit pairs shrinking and the same percentage growing.
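The break-even arithmetic can be checked directly (again my own sketch, using the expected codeword length per pair as a function of the probability p that a bit is set):

```python
# Expected codeword length per two-bit pair under the scheme above:
#   1*(1-p)^2 + 2*p*(1-p) + 3*(p*(1-p) + p^2)
# Break-even is where this equals 2 bits.
from math import sqrt

def expected_bits_per_pair(p: float) -> float:
    q = 1 - p
    return 1 * q * q + 2 * p * q + 3 * (p * q + p * p)

print(expected_bits_per_pair(1 / 3))   # 17/9 ≈ 1.889  (shrinks on average)
print(expected_bits_per_pair(0.5))     # 2.25          (grows on average)
break_even = (3 - sqrt(5)) / 2          # = 2 - golden ratio ≈ 0.382
print(break_even, expected_bits_per_pair(break_even))  # ≈ 0.382, 2.0
```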
The file archiver 7z uses LZMA (the Lempel-Ziv-Markov chain algorithm), a relatively young compression algorithm that currently has one of the best compression ratios (see the page Linux Compression Comparison).
It has other advantages besides the high compression ratio.
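If you want to try LZMA from code rather than through the 7z archiver, Python's standard-library lzma module exposes the same algorithm; here is a minimal sketch (the sample data is made up):

```python
# Compress and restore some sample data with LZMA at maximum preset.
import lzma

data = b"The quick brown fox jumps over the lazy dog. " * 2000

compressed = lzma.compress(data, preset=9)   # preset 9 = maximum compression
restored = lzma.decompress(compressed)

assert restored == data
print(f"{len(data)} -> {len(compressed)} bytes "
      f"({len(compressed) / len(data):.1%} of original)")
```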
There is no one universally best compression algorithm. Different algorithms have been invented to handle different data.
For example, JPEG compression lets you compress images quite a lot because it is lossy: it doesn't matter too much (usually) if the red in your image is 0xFF or 0xFE. However, if you tried to compress a text document that way, changes like this would be disastrous.
Also, even between two compression algorithms designed to work with the same kind of data, your results will vary depending on your data.
Example: sometimes a gzip tarball is smaller, and sometimes a bzip2 tarball is smaller.
Lastly, for truly random data of sufficient length, the compressed output will likely be about the same size as (or even larger than) the original data, no matter which algorithm you use.
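To see both points concretely, here is a rough comparison sketch (my own) using Python's standard-library compressors on repetitive text versus random bytes; the ranking may well differ on your own data:

```python
# Compare gzip, bzip2, and LZMA on compressible vs. incompressible input.
import bz2, gzip, lzma, os

samples = {
    "repetitive text": b"the quick brown fox jumps over the lazy dog\n" * 10_000,
    "random bytes":    os.urandom(440_000),
}

for name, data in samples.items():
    results = {
        "gzip":  len(gzip.compress(data, compresslevel=9)),
        "bzip2": len(bz2.compress(data, compresslevel=9)),
        "lzma":  len(lzma.compress(data, preset=9)),
    }
    print(name, len(data), results)
```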