OpenCV - Apply mask to a color image

Posted 2020-01-28 03:04

Question:

How can I apply a mask to a color image with the latest Python binding (cv2)? In the previous Python binding the simplest way was to use cv.Copy, e.g.

cv.Copy(dst, src, mask)

But this function is not available in the cv2 binding. Is there a workaround that avoids boilerplate code?

Answer 1:

You can use the cv2.bitwise_and function if you already have the mask image.

Check the code below:

import cv2

img = cv2.imread('lena.jpg')        # color image to be masked
mask = cv2.imread('mask.png', 0)    # single-channel (grayscale) mask
res = cv2.bitwise_and(img, img, mask=mask)

The output will be as follows for the Lena image with a rectangular mask.
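If you do not already have a mask image, here is a minimal sketch (not part of the original answer) of how a rectangular mask like the one used above could be built with NumPy; the rectangle coordinates are arbitrary placeholders:

import cv2
import numpy as np

img = cv2.imread('lena.jpg')
mask = np.zeros(img.shape[:2], dtype=np.uint8)   # single-channel mask, same height/width as img
mask[100:300, 100:300] = 255                     # white rectangle marks the region to keep (placeholder coordinates)
res = cv2.bitwise_and(img, img, mask=mask)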



Answer 2:

Well, here is a solution if you want the background to be something other than solid black. We only need to invert the mask, apply it to a background image of the same size, and then combine the background and foreground. An advantage of this solution is that the background can be anything (even another image).

This example is modified from the Hough Circle Transform tutorial. The first image is the OpenCV logo, the second is the original mask, and the third is the background and foreground combined.

# http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_houghcircles/py_houghcircles.html
import cv2
import numpy as np

# load the image
img = cv2.imread('E:\\FOTOS\\opencv\\opencv_logo.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# detect circles
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_RGB2GRAY), 5)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, 20, param1=50, param2=50, minRadius=0, maxRadius=0)
circles = np.uint16(np.around(circles))

# draw mask
mask = np.full((img.shape[0], img.shape[1]), 0, dtype=np.uint8)  # mask is a single-channel black image
for i in circles[0, :]:
    cv2.circle(mask, (i[0], i[1]), i[2], (255, 255, 255), -1)

# get first masked value (foreground)
fg = cv2.bitwise_or(img, img, mask=mask)

# get second masked value (background) mask must be inverted
mask = cv2.bitwise_not(mask)
background = np.full(img.shape, 255, dtype=np.uint8)
bk = cv2.bitwise_or(background, background, mask=mask)

# combine foreground+background
final = cv2.bitwise_or(fg, bk)

Note: it is better to use the OpenCV methods because they are optimized.
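To view the result (a small usage sketch, not part of the original answer): since img was converted to RGB above, convert back to BGR before displaying with cv2.imshow.

cv2.imshow('final', cv2.cvtColor(final, cv2.COLOR_RGB2BGR))  # imshow expects BGR ordering
cv2.waitKey(0)
cv2.destroyAllWindows()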



Answer 3:

import cv2 as cv

im_color = cv.imread("lena.png", cv.IMREAD_COLOR)
im_gray = cv.cvtColor(im_color, cv.COLOR_BGR2GRAY)

At this point you have a color image and a gray image. We are dealing with 8-bit, uint8 images here, which means the pixel values must be integers in the range [0, 255].

Let's do a binary thresholding operation. It creates a black-and-white mask image: the black regions have value 0 and the white regions 255.

_, mask = cv.threshold(im_gray, thresh=180, maxval=255, type=cv.THRESH_BINARY)
im_thresh_gray = cv.bitwise_and(im_gray, mask)

The mask can be seen below on the left. The image on its right is the result of applying the bitwise_and operation between the gray image and the mask. Wherever the mask had a pixel value of zero (black), the result image also became zero; wherever the mask had a pixel value of 255 (white), the result retained its original gray value.
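As a quick illustration (a small sketch, not from the original answer) of why this works: ANDing any 8-bit value with 255 leaves it unchanged, while ANDing it with 0 gives 0.

import numpy as np

v = np.uint8(200)
print(np.bitwise_and(v, np.uint8(255)))  # 200 -> pixel value kept
print(np.bitwise_and(v, np.uint8(0)))    # 0   -> pixel blacked out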

To apply this mask to our original color image, we need to convert the mask into a 3-channel image, since the original color image is a 3-channel image.

mask3 = cv.cvtColor(mask, cv.COLOR_GRAY2BGR)  # 3 channel mask

Then, we can apply this mask to our original color image using the same bitwise_and function.

im_thresh_color = cv.bitwise_and(im_color, mask3)

mask3 from the code is the image below on the left, and im_thresh_color is on its right.

You can plot the results and see for yourself.

cv.imshow("original image", im_color)
cv.imshow("binary mask", mask)
cv.imshow("3 channel mask", mask3)
cv.imshow("im_thresh_gray", im_thresh_gray)
cv.imshow("im_thresh_color", im_thresh_color)
cv.waitKey(0)

The original image is lenacolor.png that I found here.



Answer 4:

The other methods described assume a binary mask. If you want to use a real-valued single-channel grayscale image as a mask (e.g. from an alpha channel), you can expand it to three channels and then use it for interpolation:

import numpy as np

# mask: real-valued single-channel image; foreground_rgb / background_rgb: color images of the same size
assert len(mask.shape) == 2 and issubclass(mask.dtype.type, np.floating)
assert len(foreground_rgb.shape) == 3
assert len(background_rgb.shape) == 3

alpha3 = np.stack([mask] * 3, axis=2)                               # expand mask to three channels
blended = alpha3 * foreground_rgb + (1. - alpha3) * background_rgb  # per-pixel linear interpolation

Note that the mask needs to be in the range 0..1 for the operation to succeed. It is also assumed that 1.0 means keeping only the foreground, while 0.0 means keeping only the background.
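If your mask comes as an 8-bit alpha channel, a simple way to bring it into that range is to convert and rescale it first (a minimal sketch; alpha_uint8 is a placeholder name):

import numpy as np

# alpha_uint8: hypothetical 8-bit alpha channel with values in [0, 255]
mask = alpha_uint8.astype(np.float32) / 255.0   # real-valued mask in [0, 1]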

If the mask may have the shape (h, w, 1), this helps:

alpha3 = np.squeeze(np.stack([np.atleast_3d(mask)]*3, axis=2))

Here np.atleast_3d(mask) makes the mask (h, w, 1) if it is (h, w) and np.squeeze(...) reshapes the result from (h, w, 3, 1) to (h, w, 3).



Answer 5:

Here is an explanation with an example that overlays a car image onto a road background; you can step through each stage in a notebook.

import cv2
import numpy as np

img1 = cv2.imread("road.jpg")   # background
img2 = cv2.imread("car.jpg")    # foreground (car on a white background)

# build a mask from the car image: white background -> 255, car -> 0
img2_gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
ret, mask = cv2.threshold(img2_gray, 240, 255, cv2.THRESH_BINARY)
mask_inv = cv2.bitwise_not(mask)

# keep the road only where the mask is white (i.e. outside the car)
road = cv2.bitwise_and(img1, img1, mask=mask)

# keep the car only where the inverted mask is white (i.e. the car itself)
car = cv2.bitwise_and(img2, img2, mask=mask_inv)

# combine the two
result = cv2.add(road, car)

cv2.imshow("img1", img1)
cv2.imshow("img2", img2)
cv2.imshow("road background", road)
cv2.imshow("car no background", car)
cv2.imshow("mask", mask)
cv2.imshow("mask inverse", mask_inv)
cv2.imshow("result", result)

cv2.waitKey(0)
cv2.destroyAllWindows()


Answer 6:

The answer given by Abid Rahman K is not completely correct. I tried it and found it very helpful, but I got stuck.

This is how I copy an image with a given mask.

import numpy as np

x, y = np.where(mask != 0)   # coordinates of the non-zero (white) mask pixels
pts = zip(x, y)
# Assuming dst and src are of the same size
for pt in pts:
    dst[pt] = src[pt]

This is a bit slow but gives correct results.

EDIT:

The Pythonic way:

idx = (mask!=0)
dst[idx] = src[idx]
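Put together, a minimal self-contained sketch of this boolean-indexing copy could look like the following (file names are placeholders):

import cv2
import numpy as np

src = cv2.imread('src.jpg')          # image to copy from
dst = cv2.imread('dst.jpg')          # image to copy into, assumed to be the same size as src
mask = cv2.imread('mask.png', 0)     # single-channel mask

idx = (mask != 0)                    # boolean index of pixels to copy
dst[idx] = src[idx]                  # copies all three channels at those locations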