Python, OpenCV: Increasing image brightness without overflowing UINT8 array

Posted 2020-01-29 11:51

Question:

I am trying to increase the brightness of a grayscale image. cv2.imread() returns a NumPy array, so I am adding an integer value to every element of the array. In theory this should brighten each pixel; after that I could apply an upper threshold of 255 and get an image with higher brightness.

Here is the code:

import cv2
import numpy as np

value = 100   # brightness increment

grey = cv2.imread(path+file,0)   # read as grayscale
print type(grey)
print grey[0]

new = grey + value
print new[0]

res = np.hstack((grey, new))
cv2.imshow('image', res)
cv2.waitKey(0)
cv2.destroyAllWindows()

However, the values apparently wrap around: the array has dtype uint8, so the addition is performed modulo 256, effectively

new_array = (old_array + value) % 256

Every pixel whose sum exceeds 255 wraps around to a small value.

As a result, I am getting dark pixels instead of completely white ones.

Here is the output:

<type 'numpy.ndarray'>
[115 114 121 ..., 170 169 167]
[215 214 221 ...,  14  13  11]
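
The wrapped values can be reproduced with plain NumPy uint8 arithmetic (a minimal illustration, separate from my script above):

import numpy as np

grey = np.array([115, 170], dtype=np.uint8)
print(grey + 100)   # [215  14] -- 270 wraps around to 270 % 256 = 14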

And here is the image:

How can I switch off this remainder mechanism? Is there any better way to increase brightness in OpenCV?

Answer 1:

One idea would be to check, before adding value, whether the addition would overflow: take the difference between 255 and the current pixel value and see whether it is smaller than value. If it is, don't add value; set that pixel directly to 255. Otherwise, perform the addition. This decision can be made with a boolean mask -

mask = (255 - grey) < value

Then, feed this mask/boolean array to np.where to let it choose between 255 and grey+value based on the mask.

Thus, finally we would have the implementation as -

grey_new = np.where((255 - grey) < value,255,grey+value)

Sample run

Let's use a small representative example to demonstrate the steps.

In [340]: grey
Out[340]: 
array([[125, 212, 104, 180, 244],
       [105,  26, 132, 145, 157],
       [126, 230, 225, 204,  91],
       [226, 181,  43, 122, 125]], dtype=uint8)

In [341]: value = 100

In [342]: grey + 100 # Bad results (e.g. look at (0,1))
Out[342]: 
array([[225,  56, 204,  24,  88],
       [205, 126, 232, 245,   1],
       [226,  74,  69,  48, 191],
       [ 70,  25, 143, 222, 225]], dtype=uint8)

In [343]: np.where((255 - grey) < 100,255,grey+value) # Expected results
Out[343]: 
array([[225, 255, 204, 255, 255],
       [205, 126, 232, 245, 255],
       [226, 255, 255, 255, 191],
       [255, 255, 143, 222, 225]], dtype=uint8)

Testing on sample image

Using the sample image posted in the question as arr and using value = 50, we would have -
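
In code, the full-image version of the same clamping step would look like this (a sketch; the file name sample.png and value = 50 are illustrative assumptions):

import cv2
import numpy as np

value = 50
grey = cv2.imread('sample.png', 0)   # load as grayscale (uint8)

# pixels that would overflow are set to 255 instead of wrapping around
grey_new = np.where((255 - grey) < value, 255, grey + value)

res = np.hstack((grey, grey_new))    # original | brightened, side by side
cv2.imshow('brightened', res)
cv2.waitKey(0)
cv2.destroyAllWindows()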



Answer 2:

Here is another alternative:

import numpy as np

# convert data type
gray = gray.astype('float32')

# shift pixel intensity by a constant
intensity_shift = 50
gray += intensity_shift

# another option is to use a factor value > 1:
# gray *= factor_intensity

# clip pixel intensity to be in range [0, 255]
gray = np.clip(gray, 0, 255)

# change type back to 'uint8'
gray = gray.astype('uint8')


Answer 3:

Briefly: add 50 to each value, find maxBrightness, then set thisPixel = int(255 * thisPixel / maxBrightness).

You have to check each pixel for overflow. The method suggested by Divakar is straightforward and fast. Alternatively, you might want to increment each value (by 50 in your case) and then normalize the result back to 255; this preserves detail in the bright areas of your image, as sketched below.
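
A minimal sketch of this add-then-normalize idea (the sample pixel values and the variable name grey are illustrative assumptions):

import numpy as np

grey = np.array([[115, 170, 250],
                 [ 30, 200, 128]], dtype=np.uint8)  # illustrative pixels
value = 50

shifted = grey.astype(np.float32) + value   # add in float to avoid uint8 wraparound
max_brightness = shifted.max()              # maxBrightness
# rescale so the brightest pixel maps to 255, keeping detail in bright areas
normalized = (255.0 * shifted / max_brightness).astype(np.uint8)
print(normalized)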



Answer 4:

An alternative approach that worked efficiently for me is to "blend in" a white image with the original image using the blend function in PIL's Image module.

from PIL import Image

correctionVal = 0.05  # fraction of white to add to the main image
img_file = Image.open(location_filename).convert("RGB")   # ensure RGB mode to match the white image
img_file_white = Image.new("RGB", img_file.size, "white")  # white image of the same size
img_blended = Image.blend(img_file, img_file_white, correctionVal)

# Image.blend computes a weighted average:
# img_blended = img_file * (1 - correctionVal) + img_file_white * correctionVal

Hence, if correctionVal = 0, we get the original image, and if correctionVal = 1, we get pure white.

Because the blend is a weighted average, the result can never exceed 255, so no extra clipping is needed.

Blending in black (RGB 0, 0, 0) reduces brightness.
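
For completeness, a small sketch of that darkening variant (the file name photo.jpg is an illustrative assumption):

from PIL import Image

img = Image.open("photo.jpg").convert("RGB")   # illustrative file name
black = Image.new("RGB", img.size, "black")
img_darker = Image.blend(img, black, 0.05)     # blending in 5% black lowers brightness
img_darker.save("photo_darker.jpg")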