I want to load 4-channel texture data from a file in iOS; that is, I consider the texture as a (continuous) map
[0,1]x[0,1] -> [0,1]x[0,1]x[0,1]x[0,1]
If I use the file format .png, Xcode/iOS considers the file as an image and therefore multiplies each component rgb with a (premultiplied alpha), corrupting my data. How should I solve this? Possible solutions include:
- use two textures with components rgb (3-channel)
- postdivide alpha
- use another file format
Of these, I consider the best solution to be to use another file format. The GL-compressed file format (PVRTC?) is not platform independent (it seems to be Apple-specific) and appears to be of low precision (4 bits per pixel) (reference).
EDIT:
If my own answer below is true, it is not possible to get the 4-channel data of PNGs in iOS. Since OpenGL is about creating images rather than presenting them, it should be possible to load 4-channel data in some way. PNG is a file format for images (its compression depends on all 4 channels, but the compression of one channel is independent of the other channels), so one may argue that I should use another file format. So which other compressed file formats could I use that are easy to read/integrate in iOS?
UPDATE: "combinatorial" mentioned a way to load 4-channel non-premultiplied textures, so I had to give him the correct answer. However, that solution had some restrictions I didn't like. My next question is then "Access raw 4-channel data from png files in iOS" :)
I think it is bad library design not to make it possible to read 4-channel PNG data. I don't like systems that try to be smarter than me.
As you considered PVRTC, using GLKit could be an option. It includes GLKTextureLoader, which allows you to load textures without premultiplying alpha: call textureWithContentsOfFile:options:error: and pass an options dictionary that turns premultiplication off, as sketched below.
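A minimal sketch, assuming the texture ships in the app bundle as "texture.png" and using the GLKTextureLoaderApplyPremultiplication option key (NO is also its default, but setting it makes the intent explicit):

```objc
#import <GLKit/GLKit.h>
#import <OpenGLES/ES2/gl.h>

NSError *error = nil;
NSString *path = [[NSBundle mainBundle] pathForResource:@"texture" ofType:@"png"];

// Ask the loader not to multiply the color channels with alpha.
NSDictionary *options = @{ GLKTextureLoaderApplyPremultiplication : @NO };

GLKTextureInfo *texture = [GLKTextureLoader textureWithContentsOfFile:path
                                                              options:options
                                                                error:&error];
if (texture == nil) {
    NSLog(@"Texture load failed: %@", error);
} else {
    glBindTexture(texture.target, texture.name);
}
```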
You can simply request that Xcode not 'compress' your PNG files. Click your project in the top left, select the 'Build Settings' tab, find 'Compress PNG Files' and set the option to 'No'.
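If you manage settings in configuration files instead of the UI, the same thing can be expressed in an .xcconfig; this assumes the underlying build-setting name COMPRESS_PNG_FILES (the one discussed in the answer further down):

```
COMPRESS_PNG_FILES = NO
```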
As to your other options: postdividing isn't a bad solution, but obviously you'll lose overall precision, and I believe both TIFF and BMP are also supported. PVRTC is PowerVR specific, so it's not Apple-specific, but it's also not entirely platform independent, and it is specifically designed to be a lossy compression that's trivial to decompress with little work on the GPU. You'd generally increase your texture resolution to compensate for the low bit-per-pixel count.
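For what it's worth, a sketch of the postdivide idea on a raw RGBA8 buffer (the helper name is illustrative); whatever was rounded away when the channels were premultiplied cannot be recovered, which is the precision loss mentioned above:

```objc
#include <stddef.h>
#include <stdint.h>

// Undo premultiplied alpha in place on an RGBA8 buffer (a sketch).
// Texels with alpha == 0 are left alone: their rgb is unrecoverable.
static void UnpremultiplyRGBA8(uint8_t *pixels, size_t width, size_t height)
{
    for (size_t i = 0; i < width * height; i++) {
        uint8_t a = pixels[i * 4 + 3];
        if (a == 0) continue;
        for (int c = 0; c < 3; c++) {
            unsigned v = (pixels[i * 4 + c] * 255u + a / 2) / a; // round to nearest
            pixels[i * 4 + c] = (uint8_t)(v > 255 ? 255 : v);
        }
    }
}
```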
You should use libpng to load PNG without premultiplied colors.
It is written in C and should compile for iOS.
I've had similar problems on Android and also had to use a third-party library to load PNG files with non-premultiplied colors.
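A minimal sketch of reading a PNG into an 8-bit RGBA buffer with libpng (the helper name and the simplified error handling are illustrative; libpng itself leaves the color channels untouched):

```objc
#include <png.h>
#include <stdio.h>
#include <stdlib.h>

// Reads a PNG file into a malloc'd RGBA8 buffer (no premultiplication).
// Returns NULL on failure; the error path is simplified for brevity.
static unsigned char *ReadPNGasRGBA8(const char *path, int *outWidth, int *outHeight)
{
    FILE *fp = fopen(path, "rb");
    if (!fp) return NULL;

    png_structp png = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
    png_infop info = png ? png_create_info_struct(png) : NULL;
    if (!info || setjmp(png_jmpbuf(png))) {
        png_destroy_read_struct(&png, &info, NULL);
        fclose(fp);
        return NULL;
    }

    png_init_io(png, fp);
    png_read_info(png, info);

    // Normalize palette/gray/16-bit inputs to 8-bit RGBA.
    png_set_expand(png);
    png_set_strip_16(png);
    png_set_gray_to_rgb(png);
    png_set_add_alpha(png, 0xFF, PNG_FILLER_AFTER);
    png_read_update_info(png, info);

    png_uint_32 width  = png_get_image_width(png, info);
    png_uint_32 height = png_get_image_height(png, info);
    size_t rowbytes    = png_get_rowbytes(png, info);

    unsigned char *pixels = malloc(rowbytes * height);
    png_bytep *rows = malloc(sizeof(png_bytep) * height);
    for (png_uint_32 y = 0; y < height; y++)
        rows[y] = pixels + y * rowbytes;

    png_read_image(png, rows);
    png_read_end(png, NULL);

    free(rows);
    png_destroy_read_struct(&png, &info, NULL);
    fclose(fp);

    *outWidth  = (int)width;
    *outHeight = (int)height;
    return pixels;   // 4 bytes per pixel, exactly as stored in the file
}
```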
Not 100% what you want, but I got around the problem using this approach: put the alpha channel into a separate black & white PNG and save the original PNG without alpha, so the space taken is about the same. Then, in my texture loader, I load both images and combine them into one texture.
I know this is only a workaround, but at least it gives the correct result. And yes, it is very annoying that iOS does not allow you to load textures from PNG without premultiplied alpha.
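A minimal sketch of the combine step, assuming a "color.png" without alpha and a grayscale "alpha.png" of the same size (the file names and helper are illustrative); because the color image has no alpha channel, iOS's premultiplication cannot change its values:

```objc
#import <UIKit/UIKit.h>
#import <OpenGLES/ES2/gl.h>
#include <stdlib.h>

// Combine an RGB image and a grayscale alpha mask into one RGBA texture.
static GLuint LoadSplitTexture(NSString *colorName, NSString *alphaName)
{
    UIImage *colorImage = [UIImage imageNamed:colorName];
    UIImage *alphaImage = [UIImage imageNamed:alphaName];
    size_t w = CGImageGetWidth(colorImage.CGImage);
    size_t h = CGImageGetHeight(colorImage.CGImage);

    // Draw the opaque RGB image into an RGBA buffer (alpha comes out as 255).
    uint8_t *rgba = calloc(w * h * 4, 1);
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGContextRef rgbaCtx = CGBitmapContextCreate(rgba, w, h, 8, w * 4, rgb,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(rgbaCtx, CGRectMake(0, 0, w, h), colorImage.CGImage);

    // Draw the grayscale mask into a one-byte-per-pixel buffer.
    uint8_t *alpha = calloc(w * h, 1);
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    CGContextRef alphaCtx = CGBitmapContextCreate(alpha, w, h, 8, w, gray,
                                                  (CGBitmapInfo)kCGImageAlphaNone);
    CGContextDrawImage(alphaCtx, CGRectMake(0, 0, w, h), alphaImage.CGImage);

    // Write the mask into the alpha byte of each RGBA texel.
    for (size_t i = 0; i < w * h; i++)
        rgba[i * 4 + 3] = alpha[i];

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)w, (GLsizei)h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);

    CGContextRelease(rgbaCtx);
    CGContextRelease(alphaCtx);
    CGColorSpaceRelease(rgb);
    CGColorSpaceRelease(gray);
    free(rgba);
    free(alpha);
    return tex;
}
```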
This is an attempt to answer my own question.
It is not possible to load non-premultiplied .png files.
The option kCGImageAlphaLast is a valid option, but it is not a valid combination for CGBitmapContextCreate (reference). It is, however, a valid option for CGImageRefs. What the build setting COMPRESS_PNG_FILES in Xcode, mentioned above, does is convert .png files into some other file format and also multiply the channels rgb with a (reference). I was hoping that disabling this option would make it possible to reach the channel data in my actual .png files, but I am not sure if this is possible. The following example is an attempt to access the .png data at a low level, as a CGImageRef:
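A minimal sketch of that check, assuming the texture is bundled as "texture.png" and the alpha info is read back with CGImageGetAlphaInfo:

```objc
#import <UIKit/UIKit.h>

UIImage *image = [UIImage imageNamed:@"texture.png"];
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(image.CGImage);

switch (alphaInfo) {
    case kCGImageAlphaLast:
        NSLog(@"kCGImageAlphaLast");              // non-premultiplied RGBA
        break;
    case kCGImageAlphaPremultipliedLast:
        NSLog(@"kCGImageAlphaPremultipliedLast"); // premultiplied RGBA
        break;
    default:
        NSLog(@"other CGImageAlphaInfo: %d", (int)alphaInfo);
        break;
}
```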
which gives "kCGImageAlphaPremultipliedLast" with COMPRESS_PNG_FILES disabled. So I think iOS always converts .png files, even at run-time.