I have an FPV (First Person View) receiver that displays frames received from an FPV camera mounted on a drone. When the transmitter is working, the receiver shows the camera view. If the connection is lost or the transmitter is not working, it shows noise frames instead.
The noise frames have random patterns (sometimes with more white pixels, sometimes with more black pixels). I want to detect those noise frames efficiently using OpenCV in Python. I know that OpenCV has a method called cv2.fastNlMeansDenoisingColored(), but in this case I want to detect the noise frames themselves, not remove noise within each frame.
A sample of noise frames is attached:
Another noise frame example:
A valid frame (that could be anything):
Given the assumptions that your valid video frames have at least a certain amount of color information, and that your noise frames are more or less black and white, there might be a simple approach using the saturation channel from the HSV color space.
- Convert the image to HSV color space, see cv2.cvtColor.
- Calculate the histogram of the saturation channel, see cv2.calcHist.
- Calculate the percentage of pixels with a minimum saturation, let's say at least 0.05.
- If that percentage exceeds a threshold, let's say 0.5, then at least fifty percent of all pixels have a saturation of at least 0.05, so this frame seems to be a valid frame. (Adapt the thresholds, if needed.)
import cv2
from matplotlib import pyplot as plt
import numpy as np
from skimage import io      # Only needed for web grabbing images; use cv2.imread for local images


def is_valid(image):

    # Convert image to HSV color space
    image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

    # Calculate histogram of saturation channel
    s = cv2.calcHist([image], [1], None, [256], [0, 256])

    # Calculate percentage of pixels with saturation >= p
    p = 0.05
    s_perc = np.sum(s[int(p * 255):]) / np.prod(image.shape[0:2])

    ##### Just for visualization and debugging; remove in final version
    plt.plot(s)
    plt.plot([p * 255, p * 255], [0, np.max(s)], 'r')
    plt.text(p * 255 + 5, 0.9 * np.max(s), str(s_perc))
    plt.show()
    ##### Just for visualization and debugging; remove in final version

    # Percentage threshold; above: valid image, below: noise
    s_thr = 0.5
    return s_perc > s_thr


# Read example images; io.imread returns RGB, so convert to OpenCV's BGR order
noise1 = cv2.cvtColor(io.imread('https://i.stack.imgur.com/Xz9l0.png'), cv2.COLOR_RGB2BGR)
noise2 = cv2.cvtColor(io.imread('https://i.stack.imgur.com/9ZPAj.jpg'), cv2.COLOR_RGB2BGR)
valid = cv2.cvtColor(io.imread('https://i.stack.imgur.com/0FNPQ.jpg'), cv2.COLOR_RGB2BGR)

for img in [noise1, noise2, valid]:
    print(is_valid(img))
The visualization outputs (in the order as presented in the question):
And, the main output:
False
False
True
With the whole visualization stuff removed, the is_valid call takes less than 0.01 seconds per image on my machine. I'm not sure which hardware you have when doing your recordings, but maybe this approach is also suitable for some "real-time" processing at a sufficient frame rate.
One last remark: I tried to get rid of the OpenCV histogram and calculate the percentage directly using NumPy, but that took more time than the presented approach. Strange.
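For reference, such a direct NumPy variant could look like the following sketch; the helper name saturation_percentage and the synthetic test frames are illustrative. It computes the saturation as (max - min) / max per pixel, matching OpenCV's HSV definition, instead of going through cv2.calcHist:

```python
import numpy as np

def saturation_percentage(bgr, p=0.05):
    # Per-pixel saturation from the BGR channels: S = (max - min) / max,
    # defined as 0 where max == 0 (pure black), like OpenCV's BGR -> HSV conversion
    img = bgr.astype(np.float32)
    cmax = img.max(axis=2)
    cmin = img.min(axis=2)
    sat = np.where(cmax > 0, (cmax - cmin) / np.maximum(cmax, 1e-9), 0.0)
    # Fraction of pixels at or above the saturation threshold p
    return np.count_nonzero(sat >= p) / sat.size

# Grayscale-like noise: all channels equal, so saturation is 0 everywhere
noise = np.repeat(np.random.randint(0, 256, (120, 160, 1), dtype=np.uint8), 3, axis=2)
# A strongly colored frame: pure blue (BGR), so saturation is 1 everywhere
colored = np.zeros((120, 160, 3), dtype=np.uint8)
colored[..., 0] = 255

print(saturation_percentage(noise))    # 0.0
print(saturation_percentage(colored))  # 1.0
```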
Hope that helps!