How many times can a file be compressed?

Posted 2019-01-08 09:41

Question:

I was thinking about compression, and it seems like there would have to be some sort of limit to the compression that could be applied to a file, otherwise every file would eventually shrink to a single byte.

So my question is, how many times can I compress a file before:

  • It does not get any smaller?
  • The file becomes corrupt?

Are these two points the same or different?

Where does the point of diminishing returns appear?

How can these points be found?

I'm not talking about any specific algorithm or particular file, just in general.

Answer 1:

For lossless compression, the only way to know how many times you can gain by recompressing a file is to try it. It's going to depend on the compression algorithm and on the file you're compressing.

Two different files can never compress to the same output, so you can't take every file down to one byte. How could one byte represent all the files you could decompress to?

The reason that the second compression sometimes works is that a compression algorithm can't do omniscient perfect compression. There's a trade-off between the work it has to do and the time it takes to do it. Your file is being changed from all data to a combination of data about your data and the data itself.

Example

Take run-length encoding (probably the simplest useful compression) as an example.

04 04 04 04 43 43 43 43 51 52 10 bytes

That series of bytes could be compressed as:

[4] 04 [4] 43 [-2] 51 52 7 bytes (I'm putting the metadata in brackets)

Where the positive number in brackets is a repeat count and the negative number in brackets is a command to emit the next -n characters as they are found.

In this case we could try one more compression:

[3] 04 [-4] 43 fe 51 52 7 bytes (fe is your -2 seen as two's complement data)

We gained nothing, and we'll start growing on the next iteration:

[-7] 03 04 fc 43 fe 51 52 8 bytes

We'll grow by one byte per iteration for a while, but it will actually get worse. One byte can only hold negative numbers down to -128, so we'll start growing by two bytes per iteration once the file surpasses 128 bytes in length, and the growth will get still worse as the file gets bigger.

There's a headwind blowing against the compression program: the metadata. And, for real compressors, there's also the header tacked on to the beginning of the file. That means that eventually the file will start growing with each additional compression.
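Here is a minimal Python sketch of the bracketed RLE scheme above (the function name rle_compress and the signed-integer list standing in for raw bytes are my own choices for the illustration, not a real format). Running it reproduces the 10 → 7 → 7 → 8 byte counts from the walkthrough:

def rle_compress(data):
    # A positive count means "repeat the next value count times".
    # A negative count means "copy the next -count values literally".
    # Counts are kept within +/-127 so each would fit in one signed byte.
    out, i, n = [], 0, len(data)
    while i < n:
        run = 1
        while i + run < n and data[i + run] == data[i] and run < 127:
            run += 1
        if run >= 2:                      # encode a run: [count] value
            out += [run, data[i]]
            i += run
        else:                             # gather a literal block: [-count] values...
            j = i
            while j < n and (j + 1 >= n or data[j + 1] != data[j]) and j - i < 127:
                j += 1
            out += [-(j - i)] + data[i:j]
            i = j
    return out

data = [0x04, 0x04, 0x04, 0x04, 0x43, 0x43, 0x43, 0x43, 0x51, 0x52]
for _ in range(4):
    print(len(data), data)                # lengths: 10, 7, 7, 8 -- shrink, stall, then grow
    data = rle_compress(data)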


RLE is a starting point. If you want to learn more, look at LZ77 (which looks back into the file to find patterns) and LZ78 (which builds a dictionary). Compressors like zip often try multiple algorithms and use the best one.

Here are some cases I can think of where multiple compression has worked.

  1. I worked at an Amiga magazine that shipped with a disk. Naturally, we packed the disk to the gills. One of the tools we used let you pack an executable so that when it was run, it decompressed and ran itself. Because the decompression algorithm had to be in every executable, it had to be small and simple. We often got extra gains by compressing twice. The decompression was done in RAM. Since reading a floppy was slow, we often got a speed increase as well!
  2. Microsoft supported RLE compression on bmp files. Also, many word processors did RLE encoding. RLE files are almost always significantly compressible by a better compressor (see the sketch after this list).
  3. A lot of the games I worked on used a small, fast LZ77 decompressor. If you compress a large rectangle of pixels (especially if it has a lot of background color, or if it's an animation), you can very often compress twice with good results. (The reason? You only have so many bits to specify the lookback distance and the length, so a single large repeated pattern is encoded in several pieces, and those pieces are themselves highly compressible.)
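Point 2 can be sketched in Python (the made-up "image" and the naive (count, value) RLE pairs below are illustrative stand-ins, not the actual BMP RLE format): the RLE output is itself very repetitive, so a stronger, dictionary-based compressor squeezes it much further.

import zlib

# A fake "image": 200 rows of 300 pixels, mostly background (0x00) with a streak of 0xFF.
row = [0x00] * 120 + [0xFF] * 60 + [0x00] * 120
raw = bytes(row * 200)

# Naive RLE: (count, value) pairs, count capped at 255.
rle = bytearray()
i = 0
while i < len(raw):
    run = 1
    while i + run < len(raw) and raw[i + run] == raw[i] and run < 255:
        run += 1
    rle += bytes([run, raw[i]])
    i += run

print(len(raw), len(rle), len(zlib.compress(bytes(rle))))
# e.g. 60000 raw bytes -> roughly 800 RLE bytes -> a few dozen bytes after deflate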


Answer 2:

Generally the limit is one compression. Some algorithms result in a higher compression ratio than others, and using a poor algorithm followed by a good one will often result in an improvement. But using the good algorithm in the first place is the proper thing to do.

There is a theoretical limit to how much a given set of data can be compressed. To learn more about this you will have to study information theory.



Answer 3:

In general for most algorithms, compressing more than once isn't useful. There's a special case though.

If you have a large number of duplicate files, the zip format will compress each one independently, and you can then zip the first zip file to remove the duplicated information. Specifically, for 7 identical Excel files of 108 KB each, zipping them with 7-zip results in a 120 KB archive. Zipping that archive again results in an 18 KB archive. Going past that, you get diminishing returns.
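A rough sketch of the same effect with Python's standard zipfile module (the member names, payload size, and exact output sizes are illustrative assumptions; the point is only that the second pass can see the identical compressed members and deduplicate them, provided they fall within deflate's 32 KB window):

import io
import os
import zipfile

payload = os.urandom(20000)                     # one incompressible "file" body, ~20 KB

inner = io.BytesIO()
with zipfile.ZipFile(inner, "w", zipfile.ZIP_DEFLATED) as zf:
    for i in range(7):
        zf.writestr(f"copy_{i}.bin", payload)   # 7 identical members, each compressed separately

outer = io.BytesIO()
with zipfile.ZipFile(outer, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("inner.zip", inner.getvalue())  # zip the zip

print("raw payload total:", 7 * len(payload))       # 140000
print("first zip        :", len(inner.getvalue()))  # roughly 140 KB: no member shrinks
print("zip of the zip   :", len(outer.getvalue()))  # much smaller: the duplicates collapse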



Answer 4:

Suppose we have a file N bits long, and we want to compress it losslessly so that we can recover the original file. There are 2^N possible files N bits long, and our compression algorithm has to change each of them into a distinct output. However, we can't express 2^N different files in fewer than N bits, because there are only 2^N - 1 strings shorter than N bits.

Therefore, if we can take some files and compress them, we have to have some files that lengthen under compression, to balance out the ones that shorten.

This means that a compression algorithm can only compress certain files, and it actually has to lengthen some. It follows that, on average, compressing a random file can't shorten it, but might lengthen it.

Practical compression algorithms work because we don't usually use random files. Most of the files we use have some sort of structure or other properties, whether they're text or program executables or meaningful images. By using a good compression algorithm, we can dramatically shorten files of the types we normally use.

However, the compressed file is not one of those types. If the compression algorithm is good, most of the structure and redundancy have been squeezed out, and what's left looks pretty much like randomness.

No compression algorithm, as we've seen, can effectively compress a random file, and that applies to a random-looking file also. Therefore, trying to re-compress a compressed file won't shorten it significantly, and might well lengthen it some.

So, the normal number of times a compression algorithm can be profitably run is one.
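A quick illustration of this with Python's zlib (the exact byte counts will vary with the input and the zlib version; the shrink-once-then-grow pattern is the point):

import os
import zlib

structured = b"the quick brown fox jumps over the lazy dog " * 500
random_ish = os.urandom(len(structured))

once = zlib.compress(structured)
twice = zlib.compress(once)                             # recompress the compressed output

print(len(structured), len(once), len(twice))           # big -> small -> slightly bigger than once
print(len(random_ish), len(zlib.compress(random_ish)))  # random bytes don't shrink at all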

Corruption only happens when we're talking about lossy compression. For example, you can't necessarily recover an image precisely from a JPEG file. This means that a JPEG compressor can reliably shorten an image file, but only at the cost of not being able to recover it exactly. We're often willing to do this for images, but not for text, and particularly not for executable files.

In this case, there is no stage at which corruption begins; the quality loss starts with the first compression and gets worse the more you compress. That's why good image-processing programs let you specify how much compression you want when you make a JPEG: so you can balance image quality against file size. You find the stopping point by weighing the cost of file size (which matters more for network connections than for storage, in general) against the cost of reduced quality. There's no obvious right answer.



Answer 5:

Usually compressing once is good enough if the algorithm is good. In fact, compressing multiple times can lead to an increase in size.

Your two points are different.

  • Compression done repeatedly and achieving no improvement in size reduction
    is an expected theoretical condition
  • Repeated compression causing corruption
    is likely to be an error in the implementation (or maybe the algorithm itself)

Now let's look at some exceptions or variations:

  • Encryption may be applied repeatedly without any reduction in size
    (in fact, at times an increase in size) for the purpose of increased security
  • Image, video, or audio files that are compressed more and more aggressively
    will lose data (effectively be 'corrupted', in a sense)


Answer 6:

You can compress a file as many times as you like. But for most compression algorithms, any gain from the second pass onward will be negligible.



Answer 7:

Compression (I'm thinking of lossless compression here) basically means expressing something more concisely. For example

111111111111111

could be more concisely expressed as

15 X '1'

This is called run-length encoding. Another method that a computer can use is to find a pattern that is regularly repeated in a file.

There is clearly a limit to how much these techniques can be used; for example, run-length encoding is not going to be effective on

15 X '1'

since there are no repeating patterns. Similarly, if a pattern-replacement method converts long patterns into 3-character ones, reapplying it will have little effect, because the only remaining repeating patterns will be 3 characters or shorter. Generally, applying compression to an already compressed file makes it slightly bigger, because of various overheads. Applying good compression to a poorly compressed file is usually less effective than applying just the good compression in the first place.



Answer 8:

How many times can I compress a file before it does not get any smaller?

In general, not even once. Whatever compression algorithm you use, there must always exist a file that does not get compressed at all; otherwise, by your own argument, you could compress repeatedly until you reach 1 byte.

How many times can I compress a file before it becomes corrupt?

If the program you use to compress the file does its job, the file will never become corrupt (of course, I am thinking of lossless compression).



Answer 9:

Here is the ultimate compression algorithm (in Python) which, by repeated use, will compress any string of digits down to size 0 (how to apply this to a string of bytes is left as an exercise for the reader).


def compress(digitString):
    if digitString == "":
        raise ValueError("already as small as possible")
    currentLen = len(digitString)
    if digitString == "0" * currentLen:
        return "9" * (currentLen - 1)
    n = str(int(digitString) - 1)           # convert to a number and decrement
    newLen = len(n)
    return "0" * (currentLen - newLen) + n  # pad with zeros to keep the same length

# test it
x = "12"
while x != "":
    print(x)
    x = compress(x)

The program outputs 12 11 10 09 08 07 06 05 04 03 02 01 00 9 8 7 6 5 4 3 2 1 0 and then the empty string. It doesn't shrink the string on every pass, but with enough passes it will compress any digit string down to a zero-length string. Make sure you write down how many times you send it through the compressor, otherwise you won't be able to get it back.
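For completeness, here is one possible inverse (my own addition, not part of the original answer); it undoes a single pass, so you still have to run it exactly as many times as you ran compress:

def decompress(digitString):
    if digitString == "":
        return "0"                          # "" was produced by compressing "0"
    currentLen = len(digitString)
    if digitString == "9" * currentLen:
        return "0" * (currentLen + 1)       # all nines came from an all-zeros string one digit longer
    n = str(int(digitString) + 1)           # increment to undo the decrement
    return "0" * (currentLen - len(n)) + n  # pad with zeros to keep the same length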



Answer 10:

You can compress a file an unlimited number of times. However, the second and further compressions will usually produce a file that is larger than the previous one, so there is no point in compressing more than once.



Answer 11:

It is a very good question. You can view the file from different points of view. Maybe you know a priori that this file contains an arithmetic series. Let's view it as a data stream of "bytes", "symbols", or "samples".

Some answers can be found in information theory and mathematical statistics. Please check the monographs of these researchers for a deeper understanding:

A. Kolmogorov

S. Kullback

C. Shannon

N. Wiener

One of the main concepts in information theory is entropy. If you have a stream of "bytes", the entropy of those bytes doesn't depend on the values of your "bytes" or "samples"; it is defined only by the frequencies with which the bytes take different values. Maximum entropy occurs for a completely random data stream. Minimum entropy, which equals zero, occurs when your "bytes" all have the same value.

It does not get any smaller?

So the entropy is the minimum number of bits per "byte" that you need to use when writing the information to disk. Of course, that is only true if you use a perfect ("god's") algorithm; real-life heuristic lossless compression algorithms don't reach it.
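As a sketch, here is an order-0 entropy estimate in Python (my own illustration, not from the answer): it counts byte frequencies and reports the Shannon entropy in bits per byte, the bound described above for a coder that ignores context between bytes.

import math
from collections import Counter

def bits_per_byte(data):
    # Empirical Shannon entropy H = sum(p * log2(1/p)) over the byte frequencies:
    # a per-byte lower bound for any order-0 (context-free) lossless coder.
    counts = Counter(data)
    total = len(data)
    return sum((c / total) * math.log2(total / c) for c in counts.values())

print(bits_per_byte(b"aaaaaaaa"))            # 0.0 -- identical bytes, minimum entropy
print(bits_per_byte(bytes(range(256)) * 4))  # 8.0 -- uniform byte values, maximum entropy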

The file becomes corrupt?

I don't understand the sense of the question. You could write no bits to the disk, and you would have written a corrupted file of size 0 bits. Of course it is corrupted, but its size is zero bits.



Answer 12:

Example of a more advanced compression technique using "a double table, or cross matrix". It also eliminates extraneous, unnecessary symbols in the algorithm.

[PREVIOUS EXAMPLE] Take run-length encoding (probably the simplest useful compression) as an example.

04 04 04 04 43 43 43 43 51 52 10 bytes

That series of bytes could be compressed as:

[4] 04 [4] 43 [-2] 51 52 7 bytes (I'm putting the metadata in brackets)

[TURNS INTO] 04.43.51.52 VALUES 4.4.**-2 COMPRESSION

Further Compression Using Additional Symbols as substitute values

04.A.B.C VALUES 4.4.**-2 COMPRESSION



Answer 13:

In theory, we will never know; it is a never-ending thing:

In computer science and mathematics, the term full employment theorem has been used to refer to a theorem showing that no algorithm can optimally perform a particular task done by some class of professionals. The name arises because such a theorem ensures that there is endless scope to keep discovering new techniques to improve the way at least some specific task is done. For example, the full employment theorem for compiler writers states that there is no such thing as a provably perfect size-optimizing compiler, as such a proof for the compiler would have to detect non-terminating computations and reduce them to a one-instruction infinite loop. Thus, the existence of a provably perfect size-optimizing compiler would imply a solution to the halting problem, which cannot exist, making the proof itself an undecidable problem.

(source)



Answer 14:

It all depends on the algorithm. In other words, the question becomes how many times a file can be compressed using this algorithm first, then that one next...