OK, using Core Graphics, I'm building up an image that will later be used in a CGContextClipToMask operation. It looks something like the following:
UIImage *eyes = [UIImage imageNamed:@"eyes"];
UIImage *mouth = [UIImage imageNamed:@"mouth"];
CGRect bounds = CGRectMake(0, 0, 150, 150);
UIGraphicsBeginImageContext(bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(context, 0, 0, 0, 1);
CGContextFillRect(context, bounds);
[eyes drawInRect:bounds blendMode:kCGBlendModeMultiply alpha:1];
[mouth drawInRect:bounds blendMode:kCGBlendModeMultiply alpha:1];
// how can I now get a CGImageRef here to use in a masking operation?
UIGraphicsEndImageContext();
Now, as you can see from the comment, I'm wondering how I'm actually going to USE the image I've built up. The reason I'm using Core Graphics here, rather than just building up a UIImage, is that the transparency I'm creating is very important. My worry is that if I just grab a UIImage from the context, the transparency will be lost and the mask will just apply to everything. Further to the point: will I have any problems using a partially transparent mask with this method?
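To make that concrete, the eventual clipping step I have in mind looks roughly like this, where targetContext stands in for whatever context I'll be drawing into and maskRef is the CGImageRef I'm asking about above (both are placeholders, not real variables in my code):
CGContextSaveGState(targetContext);
CGContextClipToMask(targetContext, CGRectMake(0, 0, 150, 150), maskRef);
// ... draw here; ideally only the mask's opaque areas show through ...
CGContextRestoreGState(targetContext);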
You can call the
UIGraphicsGetImageFromCurrentImageContext
function, which returns a UIImage object. Note that you must call it before UIGraphicsEndImageContext(), while the image context is still current. You can hold onto and use the UIImage, or ask it for its CGImage when you need a CGImageRef for the masking call.
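A minimal sketch of the whole round trip, based on the code in your question (here targetContext is a placeholder for whatever context you're clipping into, and content is a placeholder UIImage for whatever you draw through the mask; neither comes from your original code):
CGRect bounds = CGRectMake(0, 0, 150, 150);

UIGraphicsBeginImageContext(bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(context, 0, 0, 0, 1);
CGContextFillRect(context, bounds);
[[UIImage imageNamed:@"eyes"] drawInRect:bounds blendMode:kCGBlendModeMultiply alpha:1];
[[UIImage imageNamed:@"mouth"] drawInRect:bounds blendMode:kCGBlendModeMultiply alpha:1];

// Capture the result while the image context is still current,
// i.e. before UIGraphicsEndImageContext() runs.
UIImage *maskImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Later, wherever you do the masked drawing:
CGContextSaveGState(targetContext);
CGContextClipToMask(targetContext, bounds, maskImage.CGImage);
[content drawInRect:bounds]; // only shows through the mask's opaque areas
CGContextRestoreGState(targetContext);
Because the image comes from a UIKit image context it carries an alpha channel, and in my experience CGContextClipToMask uses that alpha, so partially transparent pixels should give you partial clipping rather than an all-or-nothing mask.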