After applying a 3D transform to a UIImageView.layer, I need to save the resulting "view" as a new UIImage... It seemed like a simple task at first :-) but no luck so far, and searching hasn't turned up any clues :-( so I'm hoping someone will be kind enough to point me in the right direction.
A very simple iPhone project is available here.
Thanks.
- (void)transformImage {
float degrees = 12.0;
float zDistance = 250;
CATransform3D transform3D = CATransform3DIdentity;
transform3D.m34 = 1.0 / zDistance; // the m34 cell of the matrix controls perspective, and zDistance affects the "sharpness" of the transform
transform3D = CATransform3DRotate(transform3D, DEGREES_TO_RADIANS(degrees), 1, 0, 0); // perspective rotation around the x-axis
imageView.layer.transform = transform3D;
}
/* FAIL : capturing layer contents doesn't get the transformed image -- just the original
CGImageRef newImageRef = (CGImageRef)imageView.layer.contents;
UIImage *image = [UIImage imageWithCGImage:newImageRef];
*/
/* FAIL : docs for renderInContext state that it does not render 3D transforms
UIGraphicsBeginImageContext(imageView.image.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
*/
//
// header
//
#import <QuartzCore/QuartzCore.h>
#define DEGREES_TO_RADIANS(x) ((x) * M_PI / 180)
UIImageView *imageView;
@property (nonatomic, retain) IBOutlet UIImageView *imageView;
//
// code
//
@synthesize imageView;
- (void)transformImage {
float degrees = 12.0;
float zDistance = 250;
CATransform3D transform3D = CATransform3DIdentity;
transform3D.m34 = 1.0 / zDistance; // the m34 cell of the matrix controls perspective, and zDistance affects the "sharpness" of the transform
transform3D = CATransform3DRotate(transform3D, DEGREES_TO_RADIANS(degrees), 1, 0, 0); // perspective rotation around the x-axis
imageView.layer.transform = transform3D;
}
- (UIImage *)captureView:(UIImageView *)view {
UIGraphicsBeginImageContext(view.frame.size);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
- (void)imageSavedToPhotosAlbum:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo {
NSString *title = @"Save to Photo Album";
NSString *message = (error ? [error description] : @"Success!");
UIAlertView *alert = [[UIAlertView alloc] initWithTitle:title message:message delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
[alert show];
[alert release];
}
- (IBAction)saveButtonClicked:(id)sender {
UIImage *newImage = [self captureView:imageView];
UIImageWriteToSavedPhotosAlbum(newImage, self, @selector(imageSavedToPhotosAlbum:didFinishSavingWithError:contextInfo:), nil);
}
Theoretically, you could use the (now-allowed) undocumented call UIGetScreenImage() after quickly rendering it to the screen on a black background, but in practice this will be slow and ugly, so don't use it ;P.
3D transform on UIImage / CGImageRef
I've improved on Marcos Fuentes's answer. You should be able to calculate the mapping of each pixel yourself. Not perfect, but it does the trick...
It is available on this repository http://github.com/hfossli/AGGeometryKit/
The interesting files are:
https://github.com/hfossli/AGGeometryKit/blob/master/Source/AGTransformPixelMapper.m
https://github.com/hfossli/AGGeometryKit/blob/master/Source/CGImageRef%2BCATransform3D.m
https://github.com/hfossli/AGGeometryKit/blob/master/Source/UIImage%2BCATransform3D.m
3D transform on UIView / UIImageView
https://stackoverflow.com/a/12820877/202451
Then you will have full control over each point in the quadrilateral. :)
I had the same problem. I was able to use UIView's drawViewHierarchyInRect:afterScreenUpdates: method, available from iOS 7.0 (Documentation).
It draws the whole tree as it appears on the screen.
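For example, a minimal sketch (containerView is a placeholder name for whatever superview holds the transformed image view on screen):

UIGraphicsBeginImageContextWithOptions(containerView.bounds.size, NO, 0.0);
// Unlike renderInContext:, this snapshots the hierarchy as composited on
// screen, so layers with 3D transforms come out transformed.
[containerView drawViewHierarchyInRect:containerView.bounds afterScreenUpdates:YES];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();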
I ended up writing a pixel-by-pixel render method on the CPU, using the inverse of the view transform.
Basically, it renders the original UIImageView into a UIImage; every pixel in that UIImage is then mapped through the inverse transform matrix to generate the transformed UIImage (a rough sketch of the idea follows below).
RenderUIImageView.h
RenderUIImageView.m
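In rough outline, the approach looks something like this. This is a sketch, not the actual RenderUIImageView code: it assumes 32-bit pixel buffers, uses nearest-neighbor sampling, and ignores the anchor-point offset around the layer's center that a real implementation would handle:

#import <QuartzCore/QuartzCore.h>

// For each destination pixel, project back through the inverted transform
// (Core Animation multiplies row vectors, so p' = [x y 0 1] * M) and copy
// the nearest source pixel. Destination pixels that map outside the source
// are left untouched, so clear dst before calling.
static void InverseMapPixels(const uint32_t *src, uint32_t *dst,
                             int width, int height, CATransform3D t) {
    CATransform3D inv = CATransform3DInvert(t);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            double sx = x * inv.m11 + y * inv.m21 + inv.m41;
            double sy = x * inv.m12 + y * inv.m22 + inv.m42;
            double sw = x * inv.m14 + y * inv.m24 + inv.m44;
            if (sw != 0.0) { sx /= sw; sy /= sw; } // perspective division
            int ix = (int)sx, iy = (int)sy;
            if (ix >= 0 && ix < width && iy >= 0 && iy < height) {
                dst[y * width + x] = src[iy * width + ix];
            }
        }
    }
}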
In your captureView: method, try replacing this line:
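[view.layer renderInContext:UIGraphicsGetCurrentContext()];

with this (the superlayer, so the layer's own transform is part of what gets drawn):

[view.layer.superlayer renderInContext:UIGraphicsGetCurrentContext()];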
You may have to adjust the size you use to create the image context.
I don't see anything in the API doc that says renderInContext: ignores 3D transformations. However, the transformations apply to the layer, not its contents, which is why you need to render the superlayer to see the transformation applied.
Note that calling drawRect: on the superview definitely won't work, as drawRect: does not draw subviews.
A solution I found that at least worked in my case was to subclass CALayer. When a renderInContext: message is sent to a layer, that layer automatically forwards that message to all its sublayers. So all I had to do was to subclass CALayer, override the renderInContext: method, and render what I needed to be rendered in the provided context.
For example, in my code I had a layer for which I was setting its contents to an image of an arrow:
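// A sketch of that setup; arrowLayer is a placeholder name.
arrowLayer.contents = (id)[UIImage imageNamed:@"arrow.png"].CGImage;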
Now when I was applying a 180-degree 3D rotation around the Y-axis on the arrow and was trying to do a [self.mainLayer renderInContext:context] afterwards, I was still getting the un-rotated image.
So in my subclass MyLayer I overrode renderInContext: and used an already-rotated image to draw in the provided context:
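// A sketch of the idea -- not the answer's exact code. The image names
// (arrow.png / arrow_rotated.png) follow the answer.
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface MyLayer : CALayer
@end

@implementation MyLayer
- (void)renderInContext:(CGContextRef)ctx {
    // Draw a pre-rotated image instead of self.contents, which
    // renderInContext: would otherwise draw without the 3D transform.
    UIImage *rotated = [UIImage imageNamed:@"arrow_rotated.png"];
    CGContextSaveGState(ctx);
    // Flip the context vertically so CGContextDrawImage (which assumes a
    // bottom-left origin) draws the image upright.
    CGContextTranslateCTM(ctx, 0, self.bounds.size.height);
    CGContextScaleCTM(ctx, 1, -1);
    CGContextDrawImage(ctx, self.bounds, rotated.CGImage);
    CGContextRestoreGState(ctx);
}
@end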
This worked in my case; however, I can see that if you are doing lots of 3D transforms, you may not be able to have an image ready for every possible scenario. In many other cases, though, it should be possible to render the result of a 3D transform using 2D transforms in the passed context. For example, in my case, instead of using a different image, arrow_rotated.png, I could have used the arrow.png image, mirrored it, and drawn it in the context.