Get image pixels using mask


Question:

So I have some original image.

I need to get and display some part of this image using a specific mask. The mask is not a simple rectangle or a single shape; it contains several different polygons and shapes.

Are there any methods or tutorials on how to implement this, or where should I start? Should I write a small shader, or compare the corresponding pixels of the two images, or something like that?

I'd be glad for any advice.

Thank you

Answer 1:

You can put your image in a UIImageView and add a mask to that image view with something like

myImageView.layer.mask = myMaskLayer;

To draw your custom shapes, myMaskLayer should be an instance of your own subclass of CALayer that implements the drawInContext method. The method should draw the areas to be shown in the image with full alpha and the areas to be hidden with zero alpha (and can also use in between alphas if you like). Here's an example implementation of a CALayer subclass that, when used as a mask on the image view, will cause only an oval area in the top left corner of the image to be shown:

@interface MyMaskLayer : CALayer
@end

@implementation MyMaskLayer
- (void)drawInContext:(CGContextRef)ctx {
    static CGColorSpaceRef rgbColorSpace;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    });

    const CGRect selfBounds = self.bounds;
    CGContextSetFillColorSpace(ctx, rgbColorSpace);
    // Fill everything with zero alpha first: these areas of the image are hidden.
    CGContextSetFillColor(ctx, (CGFloat[4]){0.0, 0.0, 0.0, 0.0});
    CGContextFillRect(ctx, selfBounds);
    // Then fill the ellipse with full alpha: this area shows through.
    CGContextSetFillColor(ctx, (CGFloat[4]){0.0, 0.0, 0.0, 1.0});
    CGContextFillEllipseInRect(ctx, selfBounds);
}
@end
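
Since your mask contains polygons rather than an oval, the same pattern extends naturally: build a path and fill it with full alpha. Here's a minimal sketch of an alternative drawInContext: body; the triangle vertices below are made up purely for illustration, so substitute the points of your own polygons:

- (void)drawInContext:(CGContextRef)ctx {
    const CGRect b = self.bounds;
    // Start fully transparent; everything is hidden by default.
    CGContextClearRect(ctx, b);
    // Full alpha for the polygon interior; in a mask only alpha matters.
    CGContextSetGrayFillColor(ctx, 0.0, 1.0);
    // Illustrative triangle; any sequence of points defines a polygon.
    CGContextBeginPath(ctx);
    CGContextMoveToPoint(ctx, CGRectGetMidX(b), CGRectGetMinY(b));
    CGContextAddLineToPoint(ctx, CGRectGetMaxX(b), CGRectGetMaxY(b));
    CGContextAddLineToPoint(ctx, CGRectGetMinX(b), CGRectGetMaxY(b));
    CGContextClosePath(ctx);
    CGContextFillPath(ctx);
}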

To apply such a mask layer to an image view, you might use code like this

MyMaskLayer *maskLayer = [[MyMaskLayer alloc] init];
maskLayer.bounds = self.view.bounds;
[maskLayer setNeedsDisplay];
self.imageView.layer.mask = maskLayer;
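
One caveat: a mask layer doesn't follow its host view's geometry automatically. If the image view can change size (rotation, Auto Layout), you'd want to update the mask layer's frame yourself, for example in viewDidLayoutSubviews, or the mask will end up misaligned.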

Now, if you want to get pixel data back out of a rendering like this, it's best to render into a CGContext backed by a bitmap whose pixel buffer you control, and inspect that buffer after rendering. Here's an example where I create a bitmap context and render both the layer and its mask into it. Since I supply my own buffer for the pixel data, I can then poke around in the buffer to read RGBA values after rendering.

const CGRect imageViewBounds = self.imageView.bounds;
const size_t imageWidth = (size_t)CGRectGetWidth(imageViewBounds);
const size_t imageHeight = (size_t)CGRectGetHeight(imageViewBounds);
const size_t bytesPerPixel = 4;
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
// Allocate your own buffer for the bitmap data.  You can pass NULL to
// CGBitmapContextCreate and then Quartz will allocate the memory for you, but
// you won't have a pointer to the resulting pixel data you want to inspect!
unsigned char *bitmapData = (unsigned char *)malloc(imageWidth * imageHeight * bytesPerPixel);
CGContextRef context = CGBitmapContextCreate(bitmapData, imageWidth, imageHeight, 8, bytesPerPixel * imageWidth, rgbColorSpace, kCGImageAlphaPremultipliedLast);
// Render the image into the bitmap context.
[self.imageView.layer renderInContext:context];
// Set the blend mode to destination in, pixels become "destination * source alpha"
CGContextSetBlendMode(context, kCGBlendModeDestinationIn);
// Render the mask into the bitmap context.
[self.imageView.layer.mask renderInContext:context];
// Whatever you like here, I just chose the middle of the image.
const size_t x = imageWidth / 2;
const size_t y = imageHeight / 2;
// Each pixel occupies bytesPerPixel bytes, so scale the linear index.
const size_t pixelIndex = (y * imageWidth + x) * bytesPerPixel;
const unsigned char red = bitmapData[pixelIndex];
const unsigned char green = bitmapData[pixelIndex + 1];
const unsigned char blue = bitmapData[pixelIndex + 2];
const unsigned char alpha = bitmapData[pixelIndex + 3];
NSLog(@"rgba: {%u, %u, %u, %u}", red, green, blue, alpha);
// Clean up the Core Graphics objects and the buffer we allocated.
CGContextRelease(context);
CGColorSpaceRelease(rgbColorSpace);
free(bitmapData);