I was wondering if there is a way to determine if an image is blurry or not by analyzing the image data.
The answers above have clarified many things, but I think it is useful to make a conceptual distinction.
What if you take a perfectly on-focus picture of a blurred image?
The blur detection problem is only well posed when you have a reference. If you need to design, e.g., an auto-focus system, you compare a sequence of images taken with different degrees of blurring, or smoothing, and you try to find the point of minimum blurring within this set. In other words, you need to cross-reference the various images using one of the techniques illustrated above (basically, with various possible levels of refinement in the approach, looking for the one image with the highest high-frequency content).
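As a rough illustration of that cross-referencing (my own sketch, not part of the answer above), one could score each candidate frame with some high-frequency measure, e.g. the variance of the Laplacian, and keep the frame with the highest score; the file names below are made up:

    import cv2

    def high_freq_score(path):
        # variance of the Laplacian as a simple proxy for high-frequency content
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    # hypothetical focus sweep: the same scene captured at several focus settings
    frames = ["focus_%02d.png" % i for i in range(10)]
    sharpest = max(frames, key=high_freq_score)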
Matlab code for two methods that have been published in a highly regarded journal (IEEE Transactions on Image Processing) is available here: https://ivulab.asu.edu/software
Check the CPBDM and JNBM algorithms. If you look at the code, it is not very hard to port, and incidentally it is based on Marziliano's method as its basic feature.
One way which I'm currently using measures the spread of edges in the image. Look for this paper:
It's usually behind a paywall, but I've seen some free copies around. Basically, they locate vertical edges in an image and then measure how wide those edges are. Averaging the widths gives the final blur estimate for the image. Wider edges correspond to blurry images, and vice versa.
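A rough sketch of that edge-spread idea (not the paper's exact metric; the Canny thresholds below are arbitrary assumptions): detect edge pixels, walk left and right along each row while the intensity keeps ramping in the direction of the local gradient, and average the resulting ramp widths.

    import cv2
    import numpy as np

    def average_edge_width(gray):
        # Average width (in pixels) of the intensity ramp around each detected
        # edge pixel; wider ramps correspond to blurrier images.
        edges = cv2.Canny(gray, 100, 200)        # edge locations (thresholds are guesses)
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)   # horizontal gradient -> vertical edges
        widths = []
        for y, x in zip(*np.nonzero(edges)):
            row = gray[y].astype(int)
            sign = 1 if gx[y, x] >= 0 else -1
            l = x
            while l > 0 and sign * (row[l] - row[l - 1]) > 0:
                l -= 1
            r = x
            while r < len(row) - 1 and sign * (row[r + 1] - row[r]) > 0:
                r += 1
            widths.append(r - l)
        return np.mean(widths) if widths else 0.0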
This problem belongs to the field of no-reference image quality estimation. If you look it up on Google Scholar, you'll get plenty of useful references.
EDIT
Here's a plot of the blur estimates obtained for the 5 images in nikie's post. Higher values correspond to greater blur. I used a fixed-size 11x11 Gaussian filter and varied the standard deviation (using ImageMagick's convert command to obtain the blurred images). If you compare images of different sizes, don't forget to normalize by the image width, since larger images will have wider edges.
Finally, a significant problem is distinguishing between artistic blur and undesired blur (caused by focus miss, compression, relative motion of the subject to the camera), but that is beyond simple approaches like this one. For an example of artistic blur, have a look at the Lenna image: Lenna's reflection in the mirror is blurry, but her face is perfectly in focus. This contributes to a higher blur estimate for the Lenna image.
Another very simple way to estimate the sharpness of an image is to use a Laplace (or LoG) filter and simply pick the maximum value. Using a robust measure like a 99.9% quantile is probably better if you expect noise (i.e. picking the Nth-highest contrast instead of the highest contrast). If you expect varying image brightness, you should also include a preprocessing step to normalize image brightness/contrast (e.g. histogram equalization).
I've implemented Simon's suggestion and this one in Mathematica, and tried it on a few test images:
The first test blurs the test images using a Gaussian filter with a varying kernel size, then calculates the FFT of the blurred image and takes the average of the 90% highest frequencies:
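The Mathematica code for this test isn't shown in this copy; a rough numpy stand-in for the same measure might look like the sketch below. I read "the 90% highest frequencies" as everything except the lowest ~10% of spatial frequencies (measured as distance from the DC component); adjust low_cut if your reading differs.

    import numpy as np

    def fft_high_freq(gray, low_cut=0.1):
        # Average FFT magnitude over the higher spatial frequencies;
        # the lowest ~10% (those closest to DC) are dropped.
        f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
        h, w = f.shape
        yy, xx = np.ogrid[:h, :w]
        radius = np.hypot(yy - h / 2.0, xx - w / 2.0)   # distance from the image centre (DC)
        keep = radius > low_cut * radius.max()
        return np.abs(f[keep]).mean()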
Result in a logarithmic plot:
The 5 lines represent the 5 test images, the X axis represents the Gaussian filter radius. The graphs are decreasing, so the FFT is a good measure for sharpness.
This is the code for the "highest LoG" blurriness estimator: it simply applies an LoG filter and returns the brightest pixel in the filter result:
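The Mathematica code isn't shown in this copy either; a rough Python/OpenCV stand-in for "apply a LoG filter and take the brightest pixel" might look like this (the sigma value is an arbitrary assumption, and np.percentile(..., 99.9) would give the more noise-robust variant mentioned above):

    import cv2
    import numpy as np

    def log_sharpness(gray, sigma=2.0):
        # Laplacian of Gaussian: smooth first, then apply the Laplacian,
        # and return the strongest absolute response.
        blurred = cv2.GaussianBlur(gray.astype(np.float64), (0, 0), sigma)
        return np.abs(cv2.Laplacian(blurred, cv2.CV_64F)).max()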
Result in a logarithmic plot:
The spread for the un-blurred images is a little better here (2.5 vs 3.3), mainly because this method only uses the strongest contrast in the image, while the FFT is essentially a mean over the whole image. The functions are also decreasing faster, so it might be easier to set a "blurry" threshold.
I came up with a totally different solution. I needed to analyse video still frames to find the sharpest one in every X frames. This way, I would detect motion blur and/or out-of-focus images.
I ended up using Canny edge detection and I got VERY VERY good results with almost every kind of video (with nikie's method, I had problems with digitized VHS videos and heavily interlaced videos).
I optimized the performance by setting a region of interest (ROI) on the original image.
Using EmguCV:
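The EmguCV snippet isn't reproduced in this copy; the same idea expressed as a Python/OpenCV sketch (under my own assumptions: sharpness is scored as the fraction of ROI pixels that Canny marks as edges, and the thresholds are made up) might look like this:

    import cv2

    def canny_sharpness(frame, roi=None):
        # Fraction of ROI pixels that Canny marks as edges (higher = sharper).
        if roi is not None:
            x, y, w, h = roi
            frame = frame[y:y + h, x:x + w]      # restrict the work to a region of interest
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)         # thresholds are guesses; tune per source
        return cv2.countNonZero(edges) / float(edges.size)

    # e.g. pick the sharpest of every X decoded frames:
    # best = max(frames, key=canny_sharpness)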
Thanks nikie for that great Laplace suggestion. The OpenCV docs pointed me in the same direction: using Python, cv2 (OpenCV 2.4.10), and numpy...

    import cv2
    import numpy

    # img is a BGR image, e.g. loaded with cv2.imread()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Laplacian with ddepth=3 (CV_16S), scaled to 8-bit absolute values; the max is the score
    score = numpy.max(cv2.convertScaleAbs(cv2.Laplacian(gray, 3)))

The result is between 0 and 255. I found anything over 200-ish is very in focus, and by 100, it's noticeably blurry. The max never really gets much below 20, even if the image is completely blurred.
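For example, a sketch using the rough thresholds above (they will likely need tuning per camera and scene):

    if score > 200:
        print("very much in focus")
    elif score < 100:
        print("noticeably blurry")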