Masking an image with a UIBezierPath at the image's full resolution

Asked 2019-03-03 05:50

Hi, I have a path (shape) and a high-resolution image. I make the high-res image aspect-fit inside the view on which I draw the path, and I want to mask the image with the path at the full resolution of the image, not at the resolution at which the path is displayed. The problem: it works perfectly when I don't scale anything up for high-resolution masking, but when I do, everything is messed up. The mask gets stretched and the origins don't make sense.


All I want is to be able to scale the path up by the same aspect ratio as the image (to the image's full resolution) and position it correctly so it can mask the high-res image properly. I've tried this:

Masking CGContext with a CGPathRef?

and this

Creating mask with CGImageMaskCreate is all black (iphone)

and this

Clip UIImage to UIBezierPath (not masking)

None of these works correctly when I try to mask a high-quality image (larger than the screen resolution).

EDIT: I posted a working project on GitHub that shows the difference between normal-quality masking (at the screen's resolution) and high-quality masking (at the image's resolution). I'd really appreciate any help. https://github.com/Reza-Abdolahi/HighResMasking

2 Answers

Answer from 贼婆χ · 2019-03-03 06:09

As a second answer, I made it work with the code below. For a better understanding, you can also get the working project from my GitHub and check whether it handles all cases: https://github.com/Reza-Abdolahi/HighResMasking

The part of the code that solved the problem:

-(UIImage*)highResolutionMasking{
    NSLog(@"///High quality (Image resolution) masking///////////////////////////////////////////////////");

    //1. Render the path into an image the size of _targetBound (the screen-sized view in which the path is drawn).
    CGFloat aspectRatioOfImageBasedOnHeight = _highResolutionImage.size.height/ _highResolutionImage.size.width;
    CGFloat aspectRatioOfTargetBoundBasedOnHeight = _targetBound.size.height/ _targetBound.size.width;

    CGFloat pathScalingFactor = 0;
    if ((_highResolutionImage.size.height >= _targetBound.size.height)||
        (_highResolutionImage.size.width  >= _targetBound.size.width)) {
            //Then image is bigger than targetBound

            if ((_highResolutionImage.size.height<=_highResolutionImage.size.width)) {
            //The image is Horizontal

                CGFloat newWidthForTargetBound =_highResolutionImage.size.width;
                CGFloat ratioOfHighresImgWidthToTargetBoundWidth = (_highResolutionImage.size.width/_targetBound.size.width);
                CGFloat newHeightForTargetBound = _targetBound.size.height* ratioOfHighresImgWidthToTargetBoundWidth;

                _bigTargetBound = CGRectMake(0, 0, newWidthForTargetBound, newHeightForTargetBound);
                pathScalingFactor = _highResolutionImage.size.width/_targetBound.size.width;

            }else if((_highResolutionImage.size.height > _highResolutionImage.size.width)&&
                     (aspectRatioOfImageBasedOnHeight  <= aspectRatioOfTargetBoundBasedOnHeight)){
                //The image is Vertical but has smaller aspect ratio (based on height) than targetBound

                CGFloat newWidthForTargetBound =_highResolutionImage.size.width;
                CGFloat ratioOfHighresImgWidthToTargetBoundWidth = (_highResolutionImage.size.width/_targetBound.size.width);
                CGFloat newHeightForTargetBound = _targetBound.size.height* ratioOfHighresImgWidthToTargetBoundWidth;

                _bigTargetBound = CGRectMake(0, 0, newWidthForTargetBound, newHeightForTargetBound);
                pathScalingFactor = _highResolutionImage.size.width/_targetBound.size.width;

            }else if((_highResolutionImage.size.height > _highResolutionImage.size.width)&&
                     (aspectRatioOfImageBasedOnHeight  > aspectRatioOfTargetBoundBasedOnHeight)){

                CGFloat newHeightForTargetBound =_highResolutionImage.size.height;
                CGFloat ratioOfHighresImgHeightToTargetBoundHeight = (_highResolutionImage.size.height/_targetBound.size.height);
                CGFloat newWidthForTargetBound = _targetBound.size.width* ratioOfHighresImgHeightToTargetBoundHeight;

                _bigTargetBound = CGRectMake(0, 0, newWidthForTargetBound, newHeightForTargetBound);
                pathScalingFactor = _highResolutionImage.size.height/_targetBound.size.height;
            }else{
                //Do nothing
            }
    }else{
            //Then image is smaller than targetBound
            _bigTargetBound = _imageRect;
            pathScalingFactor =1;
    }

    CGSize correctedSize = CGSizeMake(_highResolutionImage.size.width  *_scale,
                                      _highResolutionImage.size.height *_scale);

    //AVMakeRectWithAspectRatioInsideRect requires <AVFoundation/AVFoundation.h>.
    _bigImageRect = AVMakeRectWithAspectRatioInsideRect(correctedSize, _bigTargetBound);

    //Scaling path
    CGAffineTransform scaleTransform = CGAffineTransformIdentity;
    scaleTransform = CGAffineTransformScale(scaleTransform, pathScalingFactor, pathScalingFactor);

    CGPathRef scaledCGPath = CGPathCreateCopyByTransformingPath(_examplePath.CGPath,&scaleTransform);

    //Render scaled path into image
    UIGraphicsBeginImageContextWithOptions(_bigTargetBound.size, NO, 1.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddPath (context, scaledCGPath);
    CGContextSetFillColorWithColor (context, [UIColor redColor].CGColor);
    CGContextSetStrokeColorWithColor (context, [UIColor redColor].CGColor);
    CGContextDrawPath (context, kCGPathFillStroke);
    UIImage * pathImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGPathRelease(scaledCGPath); //Release the transformed path copy (Create rule) to avoid a leak.
    NSLog(@"High res pathImage.size: %@",NSStringFromCGSize(pathImage.size));

    //Cropping it from targetBound into imageRect
    _maskImage = [self cropThisImage:pathImage toRect:_bigImageRect];
    NSLog(@"High res _croppedRenderedPathImage.size: %@",NSStringFromCGSize(_maskImage.size));

    //Masking the high-res image with the mask image; both now have the same size.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef maskImageRef = [_maskImage CGImage];
    CGContextRef myContext = CGBitmapContextCreate (NULL, _highResolutionImage.size.width, _highResolutionImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (myContext == NULL)
        return nil;

    CGFloat ratio = 0;
    ratio = _maskImage.size.width/ _highResolutionImage.size.width;
    if(ratio * _highResolutionImage.size.height < _maskImage.size.height) {
        ratio = _maskImage.size.height/ _highResolutionImage.size.height;
    }

    CGRect rectForMask  = {{0, 0}, {_maskImage.size.width, _maskImage.size.height}};
    CGRect rectForImageDrawing  = {{-((_highResolutionImage.size.width*ratio)-_maskImage.size.width)/2 , -((_highResolutionImage.size.height*ratio)-_maskImage.size.height)/2},
        {_highResolutionImage.size.width*ratio, _highResolutionImage.size.height*ratio}};

    CGContextClipToMask(myContext, rectForMask, maskImageRef);
    CGContextDrawImage(myContext, rectForImageDrawing, _highResolutionImage.CGImage);
    CGImageRef newImage = CGBitmapContextCreateImage(myContext);
    CGContextRelease(myContext);
    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    return theImage;
}

-(UIImage *)cropThisImage:(UIImage*)image toRect:(CGRect)rect{
    CGImageRef subImage = CGImageCreateWithImageInRect(image.CGImage, rect);
    UIImage *croppedImage = [UIImage imageWithCGImage:subImage];
    CGImageRelease(subImage);
    return croppedImage;
}
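
You can see from the ivars (_highResolutionImage, _targetBound, _imageRect, _examplePath, _scale) that this method depends on state set up elsewhere in the project. A call site would look roughly like this (a hypothetical sketch, not code from the repository):

//Hypothetical call site: assumes the ivars have already been set, e.g.
//_highResolutionImage from the photo library, _targetBound from the on-screen
//drawing view, and _examplePath from the shape the user drew.
UIImage *maskedImage = [self highResolutionMasking];
if (maskedImage) {
    self.resultImageView.image = maskedImage; //hypothetical image view for showing the result
}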

Answer from Deceive 欺骗 · 2019-03-03 06:15

If I understand your question correctly:

  • You have an image view containing an image that may have been scaled down (or even scaled up) using UIViewContentModeScaleAspectFit.
  • You have a bezier path whose points are in the geometry (coordinate system) of that image view.

And now you want to create a copy of the image, at its original resolution, masked by the bezier path.

We can think of the image as having its own geometry, with the origin at the top left corner of the image and one unit along each axis being one point. So what we need to do is:

  1. Create a graphics renderer big enough to draw the image into without scaling. The geometry of this renderer is the image's geometry.
  2. Transform the bezier path from the view geometry to the renderer geometry.
  3. Apply the transformed path to the renderer's clip region.
  4. Draw the image (untransformed) into the renderer.
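
Here is a minimal sketch of that pipeline, assuming image is the full-resolution UIImage, viewPath is the bezier path in view coordinates, and viewToImageTransform is the view-to-image transform that step 2 has to produce (derived below):

// Sketch only; viewToImageTransform is a stand-in for the step-2 transform.
UIGraphicsImageRenderer *renderer =
    [[UIGraphicsImageRenderer alloc] initWithSize:image.size];          // step 1
UIImage *masked = [renderer imageWithActions:^(UIGraphicsImageRendererContext *ctx) {
    UIBezierPath *path = [viewPath copy];
    [path applyTransform:viewToImageTransform];                         // step 2
    [path addClip];                                                     // step 3: clip to the path
    [image drawAtPoint:CGPointZero];                                    // step 4
}];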

Step 2 is the hard one, because we have to come up with the correct CGAffineTransform. In an aspect-fit scenario, the transform needs to not only scale the image, but possibly translate it along either the x axis or the y axis (but not both). But let's be more general and support other UIViewContentMode settings. Here's a category that lets you ask a UIImageView for the transform that converts points in the view's geometry to points in the image's geometry:

@implementation UIImageView (ImageGeometry)

/**
 * Return a transform that converts points in my geometry to points in the
 * image's geometry. The origin of the image's geometry is at its upper
 * left corner, and one unit along each axis is one point in the image.
 */
- (CGAffineTransform)imageGeometryTransform {
    CGRect viewBounds = self.bounds;
    CGSize viewSize = viewBounds.size;
    CGSize imageSize = self.image.size;

    CGFloat xScale = imageSize.width / viewSize.width;
    CGFloat yScale = imageSize.height / viewSize.height;
    CGFloat tx, ty;
    switch (self.contentMode) {
        case UIViewContentModeScaleToFill: tx = 0; ty = 0; break;
        case UIViewContentModeScaleAspectFit:
            if (xScale > yScale) { tx = 0; ty = 0.5; yScale = xScale; }
            else if (xScale < yScale) { tx = 0.5; ty = 0; xScale = yScale; }
            else { tx = 0; ty = 0; }
            break;
        case UIViewContentModeScaleAspectFill:
            if (xScale < yScale) { tx = 0; ty = 0.5; yScale = xScale; }
            else if (xScale > yScale) { tx = 0.5; ty = 0; xScale = yScale; }
            else { tx = 0; ty = 0; imageSize = viewSize; }
            break;
        case UIViewContentModeCenter: tx = 0.5; ty = 0.5; xScale = yScale = 1; break;
        case UIViewContentModeTop: tx = 0.5; ty = 0; xScale = yScale = 1; break;
        case UIViewContentModeBottom: tx = 0.5; ty = 1; xScale = yScale = 1; break;
        case UIViewContentModeLeft: tx = 0; ty = 0.5; xScale = yScale = 1; break;
        case UIViewContentModeRight: tx = 1; ty = 0.5; xScale = yScale = 1; break;
        case UIViewContentModeTopLeft: tx = 0; ty = 0; xScale = yScale = 1; break;
        case UIViewContentModeTopRight: tx = 1; ty = 0; xScale = yScale = 1; break;
        case UIViewContentModeBottomLeft: tx = 0; ty = 1; xScale = yScale = 1; break;
        case UIViewContentModeBottomRight: tx = 1; ty = 1; xScale = yScale = 1; break;
        default: return CGAffineTransformIdentity; // Mode not supported by UIImageView.
    }

    tx *= (imageSize.width - xScale * (viewBounds.origin.x + viewSize.width));
    ty *= (imageSize.height - yScale * (viewBounds.origin.y + viewSize.height));
    CGAffineTransform transform = CGAffineTransformMakeTranslation(tx, ty);
    transform = CGAffineTransformScale(transform, xScale, yScale);
    return transform;
}

@end
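
To make the aspect-fit case concrete, here is a small check with made-up numbers: a 4000x3000 image aspect-fit inside a 400x400 view gives xScale = 10, yScale is bumped up to 10 as well, and ty = 0.5 * (3000 - 10 * 400) = -500, so the category returns a translation of (0, -500) followed by a scale of 10:

// Hypothetical numbers: 4000x3000 image, 400x400 view, UIViewContentModeScaleAspectFit.
CGAffineTransform t = CGAffineTransformMakeTranslation(0, -500);
t = CGAffineTransformScale(t, 10, 10);

// The letterboxed image occupies y = 50...350 in the view, so the view point (0, 50)
// should map to the image origin and (400, 350) to the image's bottom-right corner.
CGPoint topLeft     = CGPointApplyAffineTransform(CGPointMake(0, 50), t);    // (0, 0)
CGPoint bottomRight = CGPointApplyAffineTransform(CGPointMake(400, 350), t); // (4000, 3000)
NSLog(@"%@ %@", NSStringFromCGPoint(topLeft), NSStringFromCGPoint(bottomRight));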

Armed with this, we can write the code that masks the image. In my test app, I have a subclass of UIImageView named PathEditingView that handles the bezier path editing. So my view controller creates the masked image like this:

- (UIImage *)maskedImage {
    UIImage *image = self.pathEditingView.image;
    UIGraphicsImageRendererFormat *format = [[UIGraphicsImageRendererFormat alloc] init];
    format.scale = image.scale;
    format.prefersExtendedRange = image.imageRendererFormat.prefersExtendedRange;
    format.opaque = NO;
    UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:image.size format:format];
    return [renderer imageWithActions:^(UIGraphicsImageRendererContext * _Nonnull rendererContext) {
        UIBezierPath *path = [self.pathEditingView.path copy];
        [path applyTransform:self.pathEditingView.imageGeometryTransform];
        CGContextRef gc = UIGraphicsGetCurrentContext();
        CGContextAddPath(gc, path.CGPath);
        CGContextClip(gc);
        [image drawAtPoint:CGPointZero];
    }];
}

And it looks like this:

[Screenshot: masking demo]

Of course it's hard to tell that the output image is full-resolution. Let's fix that by cropping the output image to the bounding box of the bezier path:

- (UIImage *)maskedAndCroppedImage {
    UIImage *image = self.pathEditingView.image;
    UIBezierPath *path = [self.pathEditingView.path copy];
    [path applyTransform:self.pathEditingView.imageGeometryTransform];
    CGRect pathBounds = CGPathGetPathBoundingBox(path.CGPath);
    UIGraphicsImageRendererFormat *format = [[UIGraphicsImageRendererFormat alloc] init];
    format.scale = image.scale;
    format.prefersExtendedRange = image.imageRendererFormat.prefersExtendedRange;
    format.opaque = NO;
    UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:pathBounds.size format:format];
    return [renderer imageWithActions:^(UIGraphicsImageRendererContext * _Nonnull rendererContext) {
        CGContextRef gc = UIGraphicsGetCurrentContext();
        CGContextTranslateCTM(gc, -pathBounds.origin.x, -pathBounds.origin.y);
        CGContextAddPath(gc, path.CGPath);
        CGContextClip(gc);
        [image drawAtPoint:CGPointZero];
    }];
}

Masking and cropping together look like this:

[Screenshot: masking and cropping demo]

You can see in this demo that the output image has much more detail than was visible in the input view, because it was generated at the full resolution of the input image.
