How can I extract the alpha channel of a UIImage or CGImageRef and convert it into a mask that I can use with CGImageMaskCreate?
Essentially, given any image, I don't care about the colors inside the image. All I want is to create a grayscale image that represents the alpha channel. This image can then be used to mask other images.
An example of this behavior is UIBarButtonItem when you supply it an icon image. The Apple docs state:
The images displayed on the bar are derived from this image. If this image is too large to fit on the bar, it is scaled to fit. Typically, the size of a toolbar and navigation bar image is 20 x 20 points. The alpha values in the source image are used to create the images—opaque values are ignored.
The UIBarButtonItem takes any image and looks only at the alpha, not the colors of the image.
To color icons the way the bar button items do, you don't want the traditional mask; you want the inverse of one, where the opaque pixels in the original image take on your final coloring, rather than the other way around.
Here's one way to accomplish this. Take your original RGBA image and process it by drawing it into an alpha-only bitmap context, inverting each alpha value, and then creating a mask image from the result.
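Those steps might be sketched as follows in Swift (the function name is illustrative, not from the original answer; the `finalMaskImage` usage at the end matches the name referenced in this answer):

```swift
import UIKit

// Sketch: draw the RGBA source into an alpha-only bitmap, invert
// every alpha byte, and wrap the buffer as a CGImage mask.
func invertedAlphaMask(from source: CGImage) -> CGImage? {
    let width = source.width
    let height = source.height
    let bytesPerRow = width  // one byte per pixel in an alpha-only bitmap

    var alphaData = [UInt8](repeating: 0, count: bytesPerRow * height)

    // Note: the Swift initializer wants a colorspace even for
    // alpha-only bitmaps; device gray is a commonly used stand-in.
    guard let context = CGContext(data: &alphaData,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: bytesPerRow,
                                  space: CGColorSpaceCreateDeviceGray(),
                                  bitmapInfo: CGImageAlphaInfo.alphaOnly.rawValue)
    else { return nil }

    // Step 1: only the alpha channel of the source survives this draw.
    context.draw(source, in: CGRect(x: 0, y: 0, width: width, height: height))

    // Step 2: invert. In a mask, a sample of 0 allows painting, so
    // opaque source pixels (alpha 255 -> 0) become the painted region.
    for i in 0..<alphaData.count {
        alphaData[i] = 255 - alphaData[i]
    }

    // Step 3: wrap the buffer as an 8-bit mask image.
    guard let provider = CGDataProvider(data: Data(alphaData) as CFData)
    else { return nil }
    return CGImage(maskWidth: width,
                   height: height,
                   bitsPerComponent: 8,
                   bitsPerPixel: 8,
                   bytesPerRow: bytesPerRow,
                   provider: provider,
                   decode: nil,
                   shouldInterpolate: false)
}

// Usage:
// let finalMaskImage = invertedAlphaMask(from: originalImage.cgImage!)
```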
Now you can use finalMaskImage as the mask in CGContextClipToMask, etc.

I tried the code provided by quixoto but it didn't work for me, so I changed it a little bit.
The problem was that drawing only the alpha channel wasn't working for me, so I did it manually: I first obtained the raw pixel data of the original image and then worked on the alpha channel directly.
You can then call that function on any image you need a mask for.
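The manual variant described above might look like this in Swift (a sketch; the function name and the RGBA8888 layout are assumptions, not the answer's exact code):

```swift
import UIKit

// Sketch: read the raw RGBA pixels of the original image and copy
// out just the alpha component, instead of drawing into an
// alpha-only context.
func alphaChannelData(from source: CGImage) -> [UInt8]? {
    let width = source.width
    let height = source.height
    let bytesPerPixel = 4
    let bytesPerRow = bytesPerPixel * width
    var rgba = [UInt8](repeating: 0, count: bytesPerRow * height)

    // Draw the source into a known RGBA8888 layout so we can index it.
    guard let context = CGContext(data: &rgba,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: bytesPerRow,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    context.draw(source, in: CGRect(x: 0, y: 0, width: width, height: height))

    // With alpha-last, alpha is the 4th byte of each pixel.
    var alpha = [UInt8](repeating: 0, count: width * height)
    for i in 0..<(width * height) {
        alpha[i] = rgba[i * bytesPerPixel + 3]
    }
    return alpha
}
```

The resulting buffer can then be inverted and wrapped as a mask with CGImage's maskWidth initializer, as in the earlier answer.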
The solution by Ben Zotto is correct, but there is a way to do this with no math or local complexity by relying on CGImage to do the work for us.

The following solution uses Swift (v3) to create a mask from an image by inverting the alpha channel of an existing image. Transparent pixels in the source image will become opaque, and partially transparent pixels will be inverted to be proportionally more or less transparent.
The only requirement for this solution is a CGImage base image. One can be obtained from UIImage.cgImage for most UIImages. If you're rendering the base image yourself in a CGContext, use CGContext.makeImage() to generate a new CGImage.

The code
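The code block itself was not preserved here, but the technique the answer describes can be sketched as follows (a reconstruction under assumptions, not the answer's exact code): CGImage accepts a "decode array" that remaps each channel's value range at decode time, and a pair of [1, 0] inverts a channel with no per-pixel math on our side. Note that the pair order follows the image's component order, so where the alpha pair belongs depends on the source's alphaInfo; the leading [1, 0] below assumes alpha-first data.

```swift
import UIKit

// Sketch: build a new CGImage that shares the source image's pixel
// data but applies a decode array that inverts the alpha channel.
func invertedAlphaImage(from image: CGImage) -> CGImage? {
    let decode: [CGFloat] = [1, 0,   // alpha: inverted (assumes alpha-first)
                             0, 1,   // color channel: unchanged
                             0, 1,   // color channel: unchanged
                             0, 1]   // color channel: unchanged
    return CGImage(width: image.width,
                   height: image.height,
                   bitsPerComponent: image.bitsPerComponent,
                   bitsPerPixel: image.bitsPerPixel,
                   bytesPerRow: image.bytesPerRow,
                   space: image.colorSpace!,
                   bitmapInfo: image.bitmapInfo,
                   provider: image.dataProvider!,
                   decode: decode,
                   shouldInterpolate: image.shouldInterpolate,
                   intent: image.renderingIntent)
}
```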
That's it! The mask CGImage is now ready to be used with context.clip(to: rect, mask: mask!).

Demo
Here is my base image with "Mask Image" in opaque red on a transparent background:
To demonstrate what happens when running it through the above algorithm, here is an example which simply renders the resulting image over a green background.
Now we can use that image to mask any rendered content. Here's an example where we render a masked gradient on top of the green from the previous example.
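That demo might be sketched like this (the colors, gradient, and function name are illustrative, not from the original answer):

```swift
import UIKit

// Sketch: fill a green background, clip to the mask, then draw a
// gradient; the gradient shows only where the mask permits painting.
func renderMaskedGradient(mask: CGImage, size: CGSize) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    defer { UIGraphicsEndImageContext() }
    guard let ctx = UIGraphicsGetCurrentContext() else { return nil }
    let rect = CGRect(origin: .zero, size: size)

    // Green background, as in the example above.
    ctx.setFillColor(UIColor.green.cgColor)
    ctx.fill(rect)

    // Clip all further drawing to the mask.
    ctx.clip(to: rect, mask: mask)

    // A simple two-color gradient drawn through the mask.
    let colors = [UIColor.blue.cgColor, UIColor.purple.cgColor] as CFArray
    guard let gradient = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(),
                                    colors: colors,
                                    locations: nil)
    else { return nil }
    ctx.drawLinearGradient(gradient,
                           start: .zero,
                           end: CGPoint(x: 0, y: size.height),
                           options: [])

    return UIGraphicsGetImageFromCurrentImageContext()
}
```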
(Note: You could also swap the CGImage code to use the Accelerate framework's vImage, possibly benefiting from the vector processing optimizations in that library. I haven't tried it.)