How to determine png dimensions based on file size

Published 2019-02-18 17:14

Question:

If an app has business logic that says a 24-bit PNG can never exceed 250KB, is it possible to predict what the largest width & height that the image could be and still fit under the 250KB requirement?

Since there are a lot of variables with color-depth, alpha channels, etc... is it possible to know this? Or to get even close?

Answer 1:

It is possible, but it probably isn't useful. PNG's zlib compression has a maximum compression ratio of 1032:1 (for a long sequence of the same byte value). So 250 KB compressed would be (ignoring wrappers and whatnot) about 250 MB uncompressed. For a square image, this would be almost 10,000 x 10,000 pixels at three bytes per pixel.
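That upper bound can be checked with a quick back-of-envelope calculation (a sketch, taking 250 KB to mean 250 × 1024 bytes and three bytes per pixel):

```python
import math

# Maximum zlib compression ratio in a PNG (long runs of the same byte value).
MAX_RATIO = 1032
FILE_SIZE = 250 * 1024          # 250 KB in bytes
BYTES_PER_PIXEL = 3             # 24-bit color

max_pixels = FILE_SIZE * MAX_RATIO // BYTES_PER_PIXEL
side = math.isqrt(max_pixels)   # side length of the largest square image

print(max_pixels)  # 88,064,000 pixels
print(side)        # 9384, i.e. "almost 10,000 x 10,000"
```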

Note that another answer here inexplicably assumes minimum compression which gives a minimum number of pixels, e.g. 500 x 333. Since the question asked for "the largest width & height that the image could be", that answer is not useful. Obviously 10,000 x 10,000 is larger than 500 x 333.

Update:

A precise calculation based on a minimal PNG file results in this maximum number of 24-bit pixels (three bytes per pixel as stored in the compressed data) as a function of the file size of n bytes:

floor(((n - 77) * 8 - 1) / 2) * 86 + 1

So for 250*1024 = 256,000 bytes, we get 88,037,427 pixels. For a square image, that would be about 9383 x 9383 pixels.
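The formula translates directly into a small helper (a sketch; the constants 77, 2, and 86 come from the minimal-PNG analysis described above):

```python
import math

def max_24bit_pixels(n: int) -> int:
    """Maximum number of 24-bit pixels storable in a PNG of n bytes,
    per the formula above: floor(((n - 77) * 8 - 1) / 2) * 86 + 1."""
    return (((n - 77) * 8 - 1) // 2) * 86 + 1

n = 250 * 1024                    # 256,000 bytes
pixels = max_24bit_pixels(n)
print(pixels)                     # 88,037,427
print(round(math.sqrt(pixels)))   # about 9383 for a square image
```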



Answer 2:

After dealing with the same problem I have created a solution that works.

Assuming that compression is completely ineffective in the worst-case scenario, each pixel will store 8 bytes of data: 2 bytes for each of the R, G, B, and A channels (16-bit RGBA). So a 100 × 100 px image will be at most 80,000 bytes in size, plus some negligible metadata.
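That worst case can be computed directly (a sketch of this answer's assumption; the function name is illustrative):

```python
def worst_case_bytes(width: int, height: int, bytes_per_pixel: int = 8) -> int:
    """Upper bound on pixel data size assuming compression saves nothing.

    8 bytes per pixel = 2 bytes each for R, G, B, and A (16-bit RGBA).
    Chunk/metadata overhead is treated as negligible, as in the answer.
    """
    return width * height * bytes_per_pixel

print(worst_case_bytes(100, 100))  # 80,000 bytes for a 100 x 100 image
```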

After making these simple calculations, I ran a number of experiments with mottled, multicolored photographs, and I never got more than about a third of that size: roughly 30 KB per 10,000 pixels.

Armed with this knowledge, I composed a recursive function that downsizes the input PNG by 10% until its size is below the limit, then restores it onto the destination objects while retaining the correct dimensions. This gave the best (though variable) quality, the correct size, and negligible additional CPU load (because in practice the downsizing never actually happened).
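The shrink-until-it-fits loop might look like this sketch. Here `encode_size` is a hypothetical hook, not part of any library: it stands in for whatever re-encodes the PNG at the given dimensions (e.g. via Pillow) and returns the resulting byte count.

```python
def fit_under_limit(width, height, encode_size, limit):
    """Repeatedly shrink dimensions by 10% until the encoded PNG
    fits under `limit` bytes; return the final (width, height).

    encode_size(w, h) -> int is assumed to re-encode the image at
    w x h and report the file size in bytes (hypothetical helper).
    """
    while encode_size(width, height) > limit and width > 1 and height > 1:
        width = max(1, int(width * 0.9))
        height = max(1, int(height * 0.9))
    return width, height

# Example with a fake encoder that charges 3 bytes per pixel:
w, h = fit_under_limit(1000, 1000, lambda w, h: w * h * 3, 250 * 1024)
print(w, h)  # first size whose fake encoding fits under 256,000 bytes
```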

This PNG spec is what I relied on to make my assumptions: http://www.libpng.org/pub/png/spec/1.2/PNG-Chunks.html

You may also want to take a look at the Wikipedia article: https://en.wikipedia.org/wiki/Portable_Network_Graphics



Answer 3:

It's not possible: if you save a huge blank image as a PNG, it will have a very small file size because of PNG compression.

If you want to provide dimensions to your users, you should change your business logic to accept an image based on its dimensions instead of its file size.



Answer 4:

You can predict the largest that a PNG file will be by assuming it is uncompressed. Multiply width*height*3 and add a bit for header overhead.

To get better, measure a large number of typical PNG files for your application and find the one with the largest ratio of actual file size to the prediction above. Use this ratio or a number slightly larger to estimate the size of any other image.
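Under these assumptions the forward estimate is straightforward (a sketch; the ratio `r` and the header allowance are placeholder values you would measure for your own image corpus):

```python
def estimated_png_size(width, height, r=0.5, header_overhead=1024):
    """Estimate PNG file size as r * (uncompressed 24-bit size) + header.

    r is the largest observed ratio of actual file size to
    width * height * 3 in your own corpus (0.5 is a placeholder),
    and header_overhead is a small allowance for chunks/metadata.
    """
    return int(r * width * height * 3) + header_overhead

print(estimated_png_size(500, 333))  # estimate for a 500 x 333 image
```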

This still won't guarantee that the result will be small enough; you can only determine that by actually attempting to write out the encoded image. However, it should be good enough for all but the most degenerate cases.

Edit: If it isn't clear, you can work backwards and get the image dimensions from the maximum file size. Assuming w and h are the maximum acceptable width and height, a is the aspect ratio of w/h, and r is the ratio of file size / image size discovered above:

w = sqrt((250K * a) / (r * 3))
h = w / a

So for example, if a was 1.5 and r was 0.5, your dimensions would be 500 x 333.
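Working backwards, the two formulas translate directly into code (a sketch using the example numbers from this answer, taking "250K" to mean 250,000 bytes):

```python
import math

def max_dimensions(max_bytes, aspect, r):
    """Invert the size estimate: given a file-size cap, aspect ratio
    a = w / h, and the measured ratio r, return the largest (w, h)."""
    w = math.sqrt((max_bytes * aspect) / (r * 3))
    h = w / aspect
    return int(w), int(h)

print(max_dimensions(250_000, 1.5, 0.5))  # (500, 333), as in the answer
```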