I am working on an iPhone app in which I need to make a portion of an image transparent, setting its alpha to 0, as the user moves a finger over it. If you happen to know the App Store application iSteam: the user should be able to rub a finger over a top image, which reveals the background image underneath.
Currently I am using two UIImageViews: one holds the background image, and the other sits on top of it and holds a darker image. The user should be able to draw arbitrary curves on this darker image, making the corresponding parts of the background image show through. I cannot figure out how to make the top image, the one held by the topmost of the two UIImageViews, transparent in those places.
Any ideas? Also, what should I use for this, Quartz or OpenGL? I am a newbie to iPhone app development and have absolutely no idea about these APIs, so some guidance from the experts would surely help me get ahead with iPhone SDK development.
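For reference, my current setup amounts to two stacked image views, roughly like this (the variable and image names here are just mine, for illustration):

    // Two stacked UIImageViews: the background below, the darker image on top.
    UIImageView* backgroundView = [[UIImageView alloc] initWithImage:
        [UIImage imageNamed:@"background.png"]];
    UIImageView* topView = [[UIImageView alloc] initWithImage:
        [UIImage imageNamed:@"dark.png"]];
    topView.frame = backgroundView.frame;
    topView.userInteractionEnabled = YES; // so the top view can receive touches
    [self.view addSubview:backgroundView];
    [self.view addSubview:topView];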
The UIImageView has a layer, which you can refer to as its layer property and talk to once you've linked your project against QuartzCore. As the user moves a finger, clip a clear shape in an opaque-color-filled graphics context the same size as the UIImageView, turn that into a CGImageRef, set that as a CALayer's contents (again, this CALayer needs to be the same size as the UIImageView), and set that layer as the UIImageView's layer.mask. Wherever the mask is clear, it punches a transparent hole in the layer, which means the view, which means the image the UIImageView is showing. (If that doesn't work, because the UIImageView doesn't like your interfering with its layer, you can use a superview of the UIImageView instead.)
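For example, the mask could be regenerated as the finger moves, along these lines. This is only a minimal sketch: topImageView and scratchedPath are hypothetical names of mine, with scratchedPath a CGMutablePathRef property created once via CGPathCreateMutable().

    #import <QuartzCore/QuartzCore.h>

    - (void)touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event {
        // Record a small circle around the finger's current position.
        CGPoint p = [[touches anyObject] locationInView:self.topImageView];
        CGPathAddEllipseInRect(self.scratchedPath, NULL,
                               CGRectMake(p.x - 10, p.y - 10, 20, 20));
        [self updateMask];
    }

    - (void)updateMask {
        CGRect r = self.topImageView.bounds;
        UIGraphicsBeginImageContextWithOptions(r.size, NO, 0);
        CGContextRef c = UIGraphicsGetCurrentContext();
        // Fill everything opaque, then clear out the scratched path:
        // clear areas of the mask become transparent holes in the image view.
        CGContextSetFillColorWithColor(c, [UIColor blackColor].CGColor);
        CGContextFillRect(c, r);
        CGContextAddPath(c, self.scratchedPath);
        CGContextSetBlendMode(c, kCGBlendModeClear);
        CGContextFillPath(c);
        UIImage* maskim = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        CALayer* mask = [CALayer layer];
        mask.frame = r;
        mask.contents = (id)maskim.CGImage;
        self.topImageView.layer.mask = mask;
    }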
EDIT (next day) - Here's sample code for a layer's delegate that punches a circular hole in the center:
-(void)drawLayer:(CALayer *)layer inContext:(CGContextRef)c {
    // The layer's full rect, and a small rect at its center for the hole.
    CGRect r = CGContextGetClipBoundingBox(c);
    CGRect r2 = CGRectInset(r, r.size.width/2.0 - 10, r.size.height/2.0 - 10);
    UIImage* maskim;
    {
        // Build the mask image: fill the whole rect with opaque black,
        // but even-odd clip out the ellipse so it stays clear.
        UIGraphicsBeginImageContextWithOptions(r.size, NO, 0);
        CGContextRef c2 = UIGraphicsGetCurrentContext(); // don't shadow c
        CGContextAddEllipseInRect(c2, r2);
        CGContextAddRect(c2, r);
        CGContextEOClip(c2);
        CGContextSetFillColorWithColor(c2, [UIColor blackColor].CGColor);
        CGContextFillRect(c2, r);
        maskim = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    // Use the image as the layer's mask: where the mask is clear (the
    // ellipse), the layer becomes transparent.
    CALayer* mask = [CALayer layer];
    mask.frame = r;
    mask.contents = (id)maskim.CGImage;
    layer.mask = mask;
}
So, if that layer is a view's layer, and if the UIImageView is that view's subview, a hole is punched in the UIImageView.
Here's a screenshot of the result (the image with a circular transparent hole punched in its center):
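For completeness, here is one hypothetical way to wire this up: make the UIImageView's superview a small UIView subclass that overrides drawLayer:inContext: with the method above (a UIView is automatically the delegate of its own layer), and then mark it as needing display. HoleView and the variable names are my own; treat this as a sketch, not the only way.

    @interface HoleView : UIView // hypothetical superview of the UIImageView
    @end

    @implementation HoleView
    // the drawLayer:inContext: method shown above goes here
    @end

    // Elsewhere, e.g. in the view controller:
    HoleView* container = [[HoleView alloc] initWithFrame:imageView.frame];
    imageView.frame = container.bounds;
    [container addSubview:imageView];
    [self.view addSubview:container];
    [container setNeedsDisplay]; // triggers drawLayer:inContext:, installing the mask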