Which CGImageAlphaInfo should we use?

Posted 2019-01-18 05:41

The Quartz 2D programming guide defines the availability of the various alpha storage modes:

[Image: table from the Quartz 2D Programming Guide listing the pixel formats and alpha storage modes supported for each color space]

Which ones should we use for RGB contexts, and why?

For non-opaque contexts, kCGImageAlphaPremultipliedFirst or kCGImageAlphaPremultipliedLast?

For opaque contexts, kCGImageAlphaNoneSkipFirst or kCGImageAlphaNoneSkipLast?

Does the choice of value affect performance?

Typically, I see kCGImageAlphaPremultipliedFirst for non-opaque contexts and kCGImageAlphaNoneSkipFirst for opaque ones. Some claim that these perform better, but I haven't seen any hard evidence or documentation to support that.

A quick GitHub search shows that developers favor kCGImageAlphaPremultipliedFirst over kCGImageAlphaPremultipliedLast and kCGImageAlphaNoneSkipLast over kCGImageAlphaNoneSkipFirst. Sadly, this is little more than anecdotal evidence.

4 Answers
[account banned]
Answer #2 · 2019-01-18 05:46

An Apple engineer confirmed at WWDC 2014 that we should use kCGImageAlphaPremultipliedFirst (non-opaque) or kCGImageAlphaNoneSkipFirst (opaque), and that the choice does affect performance.
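As a minimal sketch of that recommendation (assuming an iOS/macOS target; the helper name and the decision to let Quartz allocate the buffer are mine), a non-opaque BGRA context could be created like this:

```c
#include <CoreGraphics/CoreGraphics.h>

/* Sketch: create a premultiplied-first, host-little-endian ("BGRA") bitmap
   context, the layout recommended for non-opaque RGB contexts. For an
   opaque context, swap in kCGImageAlphaNoneSkipFirst instead. */
CGContextRef MakeBGRAContext(size_t width, size_t height) {
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(
        NULL,          /* data = NULL: let Quartz allocate the backing store */
        width, height,
        8,             /* bits per component */
        0,             /* bytesPerRow = 0: Quartz computes an aligned value */
        space,
        kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
    CGColorSpaceRelease(space);
    return ctx;        /* caller releases with CGContextRelease */
}
```

Passing 0 for bytesPerRow (with a NULL data pointer) lets Quartz pick its own row stride, which sidesteps the alignment concerns discussed in the other answer.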

贼婆χ
Answer #3 · 2019-01-18 05:55

The most universally used is the RGBA (true color) format, where the alpha byte is the last byte describing the pixel: kCGImageAlphaPremultipliedLast (32 bits). Not all formats are supported equally across devices. Just an observation, but all the PNG and JPEG images I've processed after downloading from the web are RGBA (or become RGBA when I convert a PNG to a UIImage). I've never come across an ARGB-formatted file in the wild, although I know it's possible.

The different formats affect the file size, the color quality, and the image quality (in case you didn't know); the 8-bit formats are black and white (grayscale). A discussion of all of this can be found here: http://en.wikipedia.org/wiki/Color_depth

Melony?
Answer #4 · 2019-01-18 05:56

I am using kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big and it works great.

Luminary・发光体
Answer #5 · 2019-01-18 05:58

For best performance, your bytes per row and data should be aligned to multiples of 16 bytes.

bits per component: the number of bits used for each color component. For 16-bit RGBA pixels, 16 bits / 4 components = 4 bits per component.

bits per pixel: at least bits per component * number of components

bytes per row: ((bits per component * number of components + 7) / 8) * width, i.e. bytes per pixel * width

From CGBitmapContext.h:

The number of bits for each component of a pixel is specified by `bitsPerComponent`. The number of bytes per pixel is equal to `(bitsPerComponent * number of components + 7)/8`. Each row of the bitmap consists of `bytesPerRow` bytes, which must be at least `width * bytes per pixel` bytes; in addition, `bytesPerRow` must be an integer multiple of the number of bytes per pixel.

Once you have the bytes per row for your desired pixel format and color space, check whether it's divisible by 16; if so, you should be in good shape. If you are NOT correctly aligned, Quartz will perform some of these adjustments for you, which incurs overhead. For extra points, you can also try to size the bytes per row to fit in a line of the L2 cache on the architecture you're targeting.

This is covered well in the Quartz 2D Programming Guide as well as the documentation for CGImage. There is also a question that may be relevant.

All that said, it's best to try different things and profile in Instruments.
