Create a mask from the difference between two images

Posted 2019-02-02 17:02

How can I detect the difference between two images, creating a mask of the area that's different, in order to process the area that's common to both images (with a Gaussian blur, for example)?

[sketch image]

EDIT: I'm currently using this code to get the RGBA values of pixels:

+ (NSArray*)getRGBAsFromImage:(UIImage*)image atX:(int)xx andY:(int)yy count:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                    bitsPerComponent, bytesPerRow, colorSpace,
                    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now rawData contains the image data in RGBA8888 format with premultiplied
    // alpha (the R, G, and B bytes are already multiplied by alpha).
    NSUInteger byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    for (int ii = 0 ; ii < count ; ++ii)
    {
        CGFloat red   = (rawData[byteIndex]     * 1.0) / 255.0;
        CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
        CGFloat blue  = (rawData[byteIndex + 2] * 1.0) / 255.0;
        CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
        byteIndex += 4;

        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }

    free(rawData);

    return result;
}

The problem is that the images are captured with the iPhone's camera, so they won't be in exactly the same position. I think I need to divide the images into areas of a few pixels and extract the general color of each area (maybe by adding up the RGBA values and dividing by the number of pixels?). How could I do this and then translate it into a CGMask?
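Something like this is roughly what I have in mind for the averaging step, reusing the kind of raw RGBA buffer from the code above (just an untested sketch):

#import <UIKit/UIKit.h>

// Rough idea: average all pixels in a blockSize x blockSize area of a raw
// RGBA8888 buffer (like rawData above) to get one "general" color per area.
static void averageBlock(const unsigned char *rawData,
                         NSUInteger width, NSUInteger height,
                         NSUInteger startX, NSUInteger startY, NSUInteger blockSize,
                         CGFloat outRGBA[4])
{
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger sums[4] = {0, 0, 0, 0};
    NSUInteger count = 0;

    for (NSUInteger y = startY; y < MIN(startY + blockSize, height); y++) {
        for (NSUInteger x = startX; x < MIN(startX + blockSize, width); x++) {
            NSUInteger byteIndex = bytesPerRow * y + x * bytesPerPixel;
            for (NSUInteger c = 0; c < 4; c++) {
                sums[c] += rawData[byteIndex + c];   // accumulate R, G, B, A separately
            }
            count++;
        }
    }

    // Average each channel and normalize to 0.0–1.0.
    for (NSUInteger c = 0; c < 4; c++) {
        outRGBA[c] = count ? (sums[c] / (CGFloat)count) / 255.0 : 0.0;
    }
}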

I know this is a complex question, so any help is appreciated.

Thanks.

5 Answers
Viruses.
#2 · 2019-02-02 17:56

I think the simplest way to do this would be to use a difference blend mode. The following code is based on code I use in CKImageAdditions.

+ (UIImage *) differenceOfImage:(UIImage *)top withImage:(UIImage *)bottom {
    CGImageRef topRef = [top CGImage];
    CGImageRef bottomRef = [bottom CGImage];

    // Dimensions
    CGRect bottomFrame = CGRectMake(0, 0, CGImageGetWidth(bottomRef), CGImageGetHeight(bottomRef));
    CGRect topFrame = CGRectMake(0, 0, CGImageGetWidth(topRef), CGImageGetHeight(topRef));
    CGRect renderFrame = CGRectIntegral(CGRectUnion(bottomFrame, topFrame));

    // Create context
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if(colorSpace == NULL) {
        printf("Error allocating color space.\n");
        return NULL;
    }

    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 (size_t)renderFrame.size.width,
                                                 (size_t)renderFrame.size.height,
                                                 8,
                                                 (size_t)renderFrame.size.width * 4,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    if(context == NULL) {
        printf("Context not created!\n");
        return NULL;
    }

    // Draw images
    CGContextSetBlendMode(context, kCGBlendModeNormal);
    CGContextDrawImage(context, CGRectOffset(bottomFrame, -renderFrame.origin.x, -renderFrame.origin.y), bottomRef);
    CGContextSetBlendMode(context, kCGBlendModeDifference);
    CGContextDrawImage(context, CGRectOffset(topFrame, -renderFrame.origin.x, -renderFrame.origin.y), topRef);

    // Create image from context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage * image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);

    CGContextRelease(context);

    return image;
}
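If you then need an actual CGImage mask rather than the difference image itself, one option (a rough sketch, not part of CKImageAdditions; the helper names, the threshold value, and the 0/255 polarity are assumptions you'd want to verify against how you apply the mask) is to read the difference image's pixels back out, threshold them, and wrap the result in an image mask:

#import <UIKit/UIKit.h>
#include <stdlib.h>

// Frees the mask buffer once Quartz is done with it.
static void ReleaseMaskData(void *info, const void *data, size_t size) {
    free((void *)data);
}

// Sketch: threshold the difference image into an 8-bit CGImage mask.
// `threshold` is the summed R+G+B difference above which a pixel counts as
// "changed"; the right value is something you'll need to tune.
CGImageRef CreateMaskFromDifferenceImage(UIImage *diff, NSUInteger threshold) {
    CGImageRef diffRef = diff.CGImage;
    size_t width = CGImageGetWidth(diffRef);
    size_t height = CGImageGetHeight(diffRef);

    // Read the difference image back into an RGBA buffer.
    unsigned char *rgba = calloc(width * height * 4, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(rgba, width, height, 8, width * 4,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), diffRef);
    CGContextRelease(ctx);

    // One byte per pixel: 255 where the photos differ, 0 where they match.
    // Depending on how you apply the mask (CGContextClipToMask, CGImageCreateWithMask),
    // you may need to invert these values.
    unsigned char *maskData = malloc(width * height);
    for (size_t i = 0; i < width * height; i++) {
        NSUInteger diffSum = rgba[i * 4] + rgba[i * 4 + 1] + rgba[i * 4 + 2];
        maskData[i] = (diffSum > threshold) ? 255 : 0;
    }
    free(rgba);

    CGDataProviderRef provider =
        CGDataProviderCreateWithData(NULL, maskData, width * height, ReleaseMaskData);
    CGImageRef mask = CGImageMaskCreate(width, height, 8, 8, width, provider, NULL, false);
    CGDataProviderRelease(provider);
    return mask;   // caller releases with CGImageRelease
}

You could then use the mask with CGContextClipToMask or CGImageCreateWithMask to restrict your blur to the unchanged region.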
冷血范
#3 · 2019-02-02 17:56

Go through the pixels and copy the ones that differ in the lower image into a new, otherwise transparent image.

Blur the upper image completely, then show the new image above it.
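A sketch of that flow, assuming you blur with Core Image and that changedPixelsImage is the new image you built from the differing pixels (the function name and blur radius are placeholders):

#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>

// Sketch: blur the upper photo, then draw the image that contains only the
// "different" pixels (transparent everywhere else) on top of it.
UIImage *BlurredComposite(UIImage *upperImage, UIImage *changedPixelsImage) {
    // Gaussian-blur the upper photo; the radius of 8 is arbitrary.
    CIImage *input = [CIImage imageWithCGImage:upperImage.CGImage];
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:input forKey:kCIInputImageKey];
    [blur setValue:@8.0 forKey:kCIInputRadiusKey];

    CIContext *ciContext = [CIContext contextWithOptions:nil];
    CGImageRef blurredRef = [ciContext createCGImage:blur.outputImage fromRect:input.extent];
    UIImage *blurredUpper = [UIImage imageWithCGImage:blurredRef];
    CGImageRelease(blurredRef);

    // Composite: blurred photo underneath, the untouched "different" pixels above.
    CGRect frame = CGRectMake(0, 0, upperImage.size.width, upperImage.size.height);
    UIGraphicsBeginImageContextWithOptions(upperImage.size, NO, upperImage.scale);
    [blurredUpper drawInRect:frame];
    [changedPixelsImage drawInRect:frame];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}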

姐就是有狂的资本
#4 · 2019-02-02 17:57

There are three reasons pixels will change from one iPhone photo to the next: the subject changed, the iPhone moved, and random noise. I assume for this question that you're most interested in the subject changes and want to process out the effects of the other two. I also assume the app expects the user to hold the iPhone reasonably still, so iPhone-movement changes are less significant than subject changes.

To reduce the effects of random noise, just blur the images a little. A simple averaging blur, where each pixel in the result is the average of the original pixel and its nearest neighbors, should be enough to smooth out the noise in a reasonably well-lit iPhone image, as in the sketch below.
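For reference, a minimal sketch of that kind of averaging blur, assuming the photo is already in a raw RGBA8888 buffer like the one in the question:

#import <Foundation/Foundation.h>

// Sketch: 3x3 averaging (box) blur over an RGBA8888 buffer, writing into a
// separate destination buffer of the same size.
static void BoxBlurRGBA(const unsigned char *src, unsigned char *dst,
                        NSUInteger width, NSUInteger height)
{
    NSUInteger bytesPerRow = width * 4;
    for (NSUInteger y = 0; y < height; y++) {
        for (NSUInteger x = 0; x < width; x++) {
            for (NSUInteger c = 0; c < 4; c++) {            // R, G, B, A
                NSUInteger sum = 0, count = 0;
                for (NSInteger dy = -1; dy <= 1; dy++) {
                    for (NSInteger dx = -1; dx <= 1; dx++) {
                        NSInteger nx = (NSInteger)x + dx;
                        NSInteger ny = (NSInteger)y + dy;
                        if (nx < 0 || ny < 0 || nx >= (NSInteger)width || ny >= (NSInteger)height)
                            continue;                        // skip neighbors off the edge
                        sum += src[(NSUInteger)ny * bytesPerRow + (NSUInteger)nx * 4 + c];
                        count++;
                    }
                }
                dst[y * bytesPerRow + x * 4 + c] = (unsigned char)(sum / count);
            }
        }
    }
}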

To address iPhone movement, you can run a feature detection algorithm on each image (look up feature detection on Wikipedia for a start). Then calculate the transforms needed to align the least changed detected features.

Apply that transform to the blurred images and take the difference between them. Any pixel with a sufficient difference becomes part of your mask. You can then post-process the mask to clean up isolated islands and fill holes. For example, the subject may be wearing a solid-colored shirt: the subject moves between the two photos, but the solid-colored areas overlap, leaving a mask with a hole in the middle.

In other words, this is a significant and difficult image processing problem. You won't find the answer in a stackoverflow.com post. You will find the answer in a digital image processing textbook.

\"骚年 ilove
5楼-- · 2019-02-02 17:58

Can't you just subtract the pixel values of one image from the other and process the pixels where the difference is 0?

混吃等死
#6 · 2019-02-02 18:04

Every pixel that does not have a suitably similar pixel in the other image within a certain radius can be deemed to be part of the mask. It's slow (though there's not much that would be faster), but it's fairly simple and it works.
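Roughly, assuming both photos are already in RGBA8888 buffers of the same size (the radius and tolerance values here are placeholders to tune for your camera noise and hand shake):

#import <Foundation/Foundation.h>
#include <stdlib.h>

// Sketch: a pixel of imageA is part of the mask (255) when no pixel of imageB
// within `radius` of the same position is within `tolerance` of its color.
static void BuildDifferenceMask(const unsigned char *imageA, const unsigned char *imageB,
                                unsigned char *mask,
                                NSUInteger width, NSUInteger height,
                                NSInteger radius, NSInteger tolerance)
{
    NSUInteger bytesPerRow = width * 4;
    for (NSUInteger y = 0; y < height; y++) {
        for (NSUInteger x = 0; x < width; x++) {
            const unsigned char *a = imageA + y * bytesPerRow + x * 4;
            BOOL foundMatch = NO;

            for (NSInteger dy = -radius; dy <= radius && !foundMatch; dy++) {
                for (NSInteger dx = -radius; dx <= radius && !foundMatch; dx++) {
                    NSInteger nx = (NSInteger)x + dx;
                    NSInteger ny = (NSInteger)y + dy;
                    if (nx < 0 || ny < 0 || nx >= (NSInteger)width || ny >= (NSInteger)height)
                        continue;   // neighbor is outside the image

                    const unsigned char *b = imageB + (NSUInteger)ny * bytesPerRow + (NSUInteger)nx * 4;
                    NSInteger diff = abs(a[0] - b[0]) + abs(a[1] - b[1]) + abs(a[2] - b[2]);
                    if (diff <= tolerance) foundMatch = YES;
                }
            }
            mask[y * width + x] = foundMatch ? 0 : 255;   // 255 = changed, part of the mask
        }
    }
}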
