I understand the subject of bitmap layouts and pixel formats fairly well, but I'm running into an issue when working with PNG / JPEG images loaded through NSImage – I can't figure out whether what I'm seeing is the intended behaviour or a bug.
    import AppKit

    let nsImage = NSImage(byReferencing: …)
    let cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)!
    let bitmapInfo = cgImage.bitmapInfo

    // true – the byte order bits are all zero, i.e. kCGBitmapByteOrderDefault
    print(bitmapInfo.rawValue & CGBitmapInfo.byteOrderMask.rawValue == 0)
So the byte order is reported as default, which I take to mean host byte order. My kCGBitmapByteOrder32Host is little endian, which implies that the pixel format is also little endian – BGRA in this case. But… the PNG format is big endian by specification, and that's how the bytes are actually arranged in the data – the opposite of what the bitmap info tells me.
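You can double-check what the host byte order actually is:

    import CoreFoundation

    // kCGBitmapByteOrder32Host resolves to the machine's native byte order;
    // this prints true (little endian) on all modern Macs.
    print(CFByteOrderGetCurrent() == CFByteOrder(CFByteOrderLittleEndian.rawValue))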
Does anybody know what's going on? Surely the system somehow knows how to deal with this, since PNGs are displayed correctly. Is there a bullet-proof way of detecting the pixel format of a CGImage? A complete demo project is available on GitHub.
P. S. I'm copying the raw pixel data via CFDataGetBytePtr into another library's buffer, which then gets processed and saved. In order to do so, I need to explicitly specify the pixel format (see the sketch below). The actual images I'm dealing with (any PNG / JPEG files that I've checked) display correctly, for example:
But the bitmap info for the same images gives me incorrect endianness information, which results in the bitmap being handled as BGRA instead of the actual RGBA. When I process it, the result looks like this:
The resulting image demonstrates the colour swap between the red and blue channels. If the RGBA pixel format is specified explicitly, everything works out perfectly, but I need this detection to be automated.
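For reference, the copy boils down to something like this – the receiving call is hypothetical, but it shows where the pixel format has to be stated explicitly:

    import CoreGraphics

    // Raw, decoded pixel data exactly as the CGImage stores it.
    let data: CFData = cgImage.dataProvider!.data!
    let bytes: UnsafePointer<UInt8> = CFDataGetBytePtr(data)
    let length: CFIndex = CFDataGetLength(data)

    // The receiving buffer must be told whether `bytes` is RGBA or BGRA –
    // exactly the detail that bitmapInfo misreports here.
    // otherLibrary.copy(bytes, count: length, pixelFormat: .rgba) // hypothetical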
P. P. S. The documentation briefly mentions that CGColorSpace is another important variable that defines the pixel format / byte order, but I found no mention of how to get that information out of it.
Could you use NSBitmapFormat?
I wrote a class to source color schemes from images, and that's what I used to determine the bitmap format.
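A minimal sketch of that approach – build an NSBitmapImageRep from the image's TIFF representation and check its bitmapFormat flags (the helper and the flags printed here are just for illustration):

    import AppKit

    func bitmapFormat(of image: NSImage) -> NSBitmapImageRep.Format? {
        guard
            let tiff = image.tiffRepresentation,
            let rep = NSBitmapImageRep(data: tiff)
        else { return nil }
        return rep.bitmapFormat
    }

    // The alpha position distinguishes ARGB-style from RGBA-style layouts.
    if let format = bitmapFormat(of: someImage) { // `someImage` is any NSImage
        print("alpha first:", format.contains(.alphaFirst))
        print("alpha non-premultiplied:", format.contains(.alphaNonpremultiplied))
        print("float samples:", format.contains(.floatingPointSamples))
    }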
Some years later, and after testing my findings in production, I can share them with good confidence, though I'm hoping someone with a stronger theoretical background will explain things better here. The usual references on RGBA layouts and endianness are good places to refresh your memory.
Based on that, you can use the following extensions:
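A condensed sketch of those extensions – the PixelFormat enum and the property names are illustrative, but the mapping follows directly from how the alpha position and byte order combine:

    import CoreGraphics

    public enum PixelFormat {
        case abgr, argb, bgra, rgba
    }

    extension CGBitmapInfo {
        // Alpha-first formats are ARGB and BGRA; alpha-last are RGBA and ABGR.
        // Big endian order reads red before blue (ARGB, RGBA); little endian
        // reads blue before red (BGRA, ABGR). Default byte order is treated
        // as big endian here – that matches how ImageIO actually lays out
        // PNG / JPEG data, despite the host being little endian.
        public var pixelFormat: PixelFormat? {
            let alphaInfo = CGImageAlphaInfo(rawValue: rawValue & CGBitmapInfo.alphaInfoMask.rawValue)
            let alphaFirst = alphaInfo == .premultipliedFirst || alphaInfo == .first || alphaInfo == .noneSkipFirst
            let alphaLast = alphaInfo == .premultipliedLast || alphaInfo == .last || alphaInfo == .noneSkipLast
            let littleEndian = contains(.byteOrder32Little)

            if alphaFirst { return littleEndian ? .bgra : .argb }
            if alphaLast { return littleEndian ? .abgr : .rgba }
            return nil
        }
    }

    extension CGImage {
        public var pixelFormat: PixelFormat? { bitmapInfo.pixelFormat }
    }

With this in place, cgImage.pixelFormat comes back as .rgba for the PNG / JPEG images above, even though a naive reading of the byte order bits suggested BGRA.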
Note that you should always pay attention to the colour space – it directly affects how the raw pixel data is stored. CGColorSpace(name: CGColorSpace.sRGB) is probably the safest one – it stores colours in plain form: if you deal with red in RGB, it will be stored just like that, (255, 0, 0), while a device colour space will give you something like (235, 73, 53). To see this in practice, drop the above and the following into a playground. You'll need two one-pixel red images, with alpha and without; this and this should work.
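Something along these lines – the file names and the describe helper are assumptions, and the screen capture is only there as a third image that went through the display's colour space:

    import AppKit

    // Prints the details that matter: pixel format, colour space and the
    // raw bytes of the image's single pixel.
    func describe(_ name: String, _ image: CGImage) {
        let data = image.dataProvider!.data! as Data
        print(name)
        print("  pixel format:", image.bitmapInfo.pixelFormat as Any)
        print("  colour space:", image.colorSpace as Any)
        print("  first pixel:", Array(data.prefix(4)))
    }

    // The two one-pixel red images from the playground's Resources folder.
    let urlAlpha = Bundle.main.url(forResource: "red-alpha", withExtension: "png")!
    let urlPlain = Bundle.main.url(forResource: "red", withExtension: "png")!

    let imageAlpha = NSImage(byReferencing: urlAlpha).cgImage(forProposedRect: nil, context: nil, hints: nil)!
    let imagePlain = NSImage(byReferencing: urlPlain).cgImage(forProposedRect: nil, context: nil, hints: nil)!

    // A 1×1 screen capture for comparison – it arrives in the display's
    // device colour space rather than sRGB.
    let imageScreen = CGWindowListCreateImage(CGRect(x: 0, y: 0, width: 1, height: 1), .optionOnScreenOnly, kCGNullWindowID, [])!

    describe("disk, with alpha", imageAlpha)
    describe("disk, no alpha", imagePlain)
    describe("screen capture", imageScreen)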
This will output the following. Pay attention to how the screen-captured image differs from the ones loaded from disk.