Here's what I would like to do:
I'm taking pictures with a webcam at regular intervals. Sort of like a time lapse thing. However, if nothing has really changed, that is, the picture pretty much looks the same, I don't want to store the latest snapshot.
I imagine there's some way of quantifying the difference, and I would have to empirically determine a threshold.
I'm looking for simplicity rather than perfection. I'm using python.
I have been having a lot of luck with jpg images taken with the same camera on a tripod by:
(1) simplifying greatly (like going from 3000 pixels wide to 100 pixels wide or even fewer),
(2) flattening each jpg array into a single vector,
(3) pairwise correlating sequential images with a simple correlate algorithm to get a correlation coefficient,
(4) squaring the correlation coefficient to get r-squared (i.e. the fraction of variability in one image explained by variation in the next),
(5) generally, in my application, if r-squared < 0.9, I say the two images are different and something happened in between.
This is robust and fast in my implementation (Mathematica 7).
It's worth playing around with the part of the image you are interested in and focusing on it by cropping all images to that area; otherwise a distant-from-the-camera but important change will be missed.
I don't know how to use Python, but am sure it does correlations, too, no?
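Since the answer above is in Mathematica, here is a rough Python sketch of the same idea using NumPy and Pillow (PIL); the 100-pixel width and the 0.9 threshold come from the description above, while the function names and file paths are just for illustration:

    import numpy as np
    from PIL import Image

    def tiny_vector(path, width=100):
        """Shrink an image to ~100 px wide, convert to grayscale, flatten to a 1-D vector."""
        img = Image.open(path).convert("L")
        w, h = img.size
        height = max(1, h * width // w)
        return np.asarray(img.resize((width, height)), dtype=float).ravel()

    def r_squared(path1, path2):
        """Squared correlation coefficient between two downsampled images.

        Assumes both frames come from the same camera, so they have the same dimensions.
        """
        r = np.corrcoef(tiny_vector(path1), tiny_vector(path2))[0, 1]
        return r * r

    # e.g. treat the frames as "different" when r_squared("prev.jpg", "curr.jpg") < 0.9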
Another nice, simple way to measure the similarity between two images:
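One such simple measure, sketched here with illustrative names (this is a stand-in, not the snippet originally shown in this answer), is the root-mean-square difference between pixel values of two equal-sized images:

    import numpy as np
    from PIL import Image

    def rms_difference(path1, path2):
        """Root-mean-square difference between two equal-sized images (0 = identical)."""
        a = np.asarray(Image.open(path1).convert("L"), dtype=float)
        b = np.asarray(Image.open(path2).convert("L"), dtype=float)
        return np.sqrt(np.mean((a - b) ** 2))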
If others are interested in a more powerful way to compare image similarity, I put together a tutorial and web app for measuring and visualizing similar images using Tensorflow.
Check out how Haar wavelets are implemented by isk-daemon. You could use its imgdb C++ code to calculate the difference between images on the fly.
output:
False
True
image2\5.jpg image1\815.jpg
image2\6.jpg image1\819.jpg
image2\7.jpg image1\900.jpg
image2\8.jpg image1\998.jpg
image2\9.jpg image1\1012.jpg
The example pictures: 815.jpg and 5.jpg.
I apologize if this is too late to reply, but since I've been doing something similar I thought I could contribute somehow.
Maybe with OpenCV you could use template matching, assuming you're using a webcam as you said.
Tip: max_val (or min_val, depending on the method used) will give you large numbers. To get the difference as a percentage, run template matching with the same image; the result will be your 100%.
Pseudo code to exemplify:
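Something like the following, as a sketch rather than the answer's original snippet (it assumes the cv2 module and a webcam at index 0; the variable names are illustrative):

    import cv2

    cap = cv2.VideoCapture(0)                    # webcam
    _, previous = cap.read()
    _, current = cap.read()

    def match_score(image, template):
        """Peak value of normalized cross-correlation template matching."""
        result = cv2.matchTemplate(image, template, cv2.TM_CCORR_NORMED)
        _, max_val, _, _ = cv2.minMaxLoc(result)
        return max_val

    baseline = match_score(previous, previous)   # matching the image with itself: your "100%"
    score = match_score(current, previous)
    difference_pct = 100.0 * (1.0 - score / baseline)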
Hope it helps.
General idea
Option 1: Load both images as arrays (scipy.misc.imread) and calculate an element-wise (pixel-by-pixel) difference. Calculate the norm of the difference.
Option 2: Load both images. Calculate some feature vector for each of them (like a histogram). Calculate the distance between the feature vectors rather than between the images.
However, there are some decisions to make first.
Questions
You should answer these questions first:
Are images of the same shape and dimension?
If not, you may need to resize or crop them. The PIL library will help to do it in Python (see the sketch below).
If they are taken with the same settings and the same device, they are probably the same.
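If you do need to crop or resize, a minimal PIL sketch (the file name, crop box and target size are purely illustrative):

    from PIL import Image

    img = Image.open("snapshot.jpg")         # illustrative file name
    img = img.crop((100, 100, 500, 400))     # (left, upper, right, lower)
    img = img.resize((320, 240))             # (width, height)
    img.save("snapshot_small.jpg")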
Are images well-aligned?
If not, you may want to run cross-correlation first to find the best alignment. SciPy has functions to do it (a sketch follows below).
If the camera and the scene are still, the images are likely to be well-aligned.
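If you do need to align them, a rough sketch of estimating the shift between two same-sized grayscale arrays with SciPy's FFT-based cross-correlation (the sign convention may need adjusting for your setup):

    import numpy as np
    from scipy.signal import fftconvolve

    def estimate_shift(img1, img2):
        """Estimate the (row, column) offset between two same-sized grayscale float arrays."""
        # cross-correlation of img1 with img2 == convolution of img1 with img2 flipped
        corr = fftconvolve(img1 - img1.mean(), img2[::-1, ::-1] - img2.mean(), mode="same")
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        center = np.array(corr.shape) // 2
        return peak[0] - center[0], peak[1] - center[1]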
Is exposure of the images always the same? (Is lightness/contrast the same?)
If not, you may want to normalize images.
But be careful: in some situations this may do more harm than good. For example, a single bright pixel on a dark background will make the normalized image very different.
Is color information important?
If you want to notice color changes, you will have a vector of color values per point, rather than a scalar value as in a gray-scale image. Writing such code requires a bit more attention.
Are there distinct edges in the image? Are they likely to move?
If yes, you can apply edge detection algorithm first (e.g. calculate gradient with Sobel or Prewitt transform, apply some threshold), then compare edges on the first image to edges on the second.
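A sketch of that idea with SciPy's ndimage (the default threshold is arbitrary and would need tuning):

    import numpy as np
    from scipy import ndimage

    def edge_map(img, threshold=50):
        """Binary edge map: Sobel gradient magnitude above a threshold (img: 2D float array)."""
        gx = ndimage.sobel(img, axis=1)
        gy = ndimage.sobel(img, axis=0)
        return np.hypot(gx, gy) > threshold

    # number of pixels whose edge status changed between two frames:
    # changed = np.count_nonzero(edge_map(img1) != edge_map(img2))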
Is there noise in the image?
All sensors pollute the image with some amount of noise, and low-cost sensors have more of it. You may wish to apply some noise reduction before you compare images. Blurring is the simplest (but not the best) approach here.
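For example, a Gaussian blur with SciPy before comparing (sigma is a tuning knob):

    from scipy import ndimage

    def denoise(img, sigma=2):
        """Simple noise reduction: blur the frame a little before comparison."""
        return ndimage.gaussian_filter(img, sigma)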
What kind of changes do you want to notice?
This may affect the choice of norm to use for the difference between images.
Consider using Manhattan norm (the sum of the absolute values) or zero norm (the number of elements not equal to zero) to measure how much the image has changed. The former will tell you how much the image is off, the latter will tell only how many pixels differ.
Example
I assume your images are well-aligned, the same size and shape, possibly with different exposure. For simplicity, I convert them to grayscale even if they are color (RGB) images.
You will need these imports:
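For instance (a sketch; note that scipy.misc.imread has been removed from recent SciPy releases, where imageio.imread is the usual replacement):

    import sys

    import numpy as np
    from scipy.linalg import norm
    from scipy.misc import imread   # on newer SciPy: from imageio import imread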
The main function reads two images, converts them to grayscale, compares them, and prints the results:
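A sketch of such a main function; it relies on the imports above and on the to_grayscale, compare_images and normalize helpers sketched below:

    def main():
        file1, file2 = sys.argv[1:3]
        # read images as 2D float arrays, converting color images to grayscale
        img1 = to_grayscale(imread(file1).astype(float))
        img2 = to_grayscale(imread(file2).astype(float))
        # compare and report both norms, total and per pixel
        n_m, n_0 = compare_images(img1, img2)
        print("Manhattan norm:", n_m, "/ per pixel:", n_m / img1.size)
        print("Zero norm:", n_0, "/ per pixel:", n_0 / img1.size)

    if __name__ == "__main__":
        main()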
How to compare. img1 and img2 are 2D SciPy arrays here:
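A sketch of such a comparison function (it uses the np and norm imports from the sketch above), computing the Manhattan and zero norms discussed earlier:

    def compare_images(img1, img2):
        # normalize to compensate for exposure difference
        img1 = normalize(img1)
        img2 = normalize(img2)
        # element-wise difference and its norms
        diff = img1 - img2
        m_norm = np.sum(np.abs(diff))     # Manhattan norm: sum of absolute values
        z_norm = norm(diff.ravel(), 0)    # zero norm: number of non-zero elements
        return m_norm, z_norm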
If the file is a color image, imread returns a 3D array; average the RGB channels (the last array axis) to obtain intensity. No need to do it for grayscale images (e.g. .pgm):
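A sketch of that conversion:

    def to_grayscale(arr):
        """If arr is a color (3D) array, average over the color channels to get a 2D array."""
        if arr.ndim == 3:
            return np.average(arr, axis=-1)   # average over the last axis (RGB channels)
        return arr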
Normalization is trivial; you may choose to normalize to [0, 1] instead of [0, 255]. arr is a SciPy array here, so all operations are element-wise:
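A sketch of such a normalization (with a small guard against constant images):

    def normalize(arr):
        """Stretch the value range to [0, 255] to compensate for exposure differences."""
        rng = arr.max() - arr.min()
        if rng == 0:
            return arr * 0.0                  # constant image: nothing to stretch
        return (arr - arr.min()) * 255.0 / rng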
Run the main function, put all of this in a script, and run it against two images. If we compare an image to itself, there is no difference (both norms are zero).
If we blur the image and compare it to the original, there is some difference (non-zero norms).
P.S. Entire compare.py script.
Update: relevant techniques
As the question is about a video sequence, where frames are likely to be almost the same, and you look for something unusual, I'd like to mention some alternative approaches which may be relevant:
I strongly recommend taking a look at the “Learning OpenCV” book, Chapters 9 (Image parts and segmentation) and 10 (Tracking and motion). The former teaches the background subtraction method, the latter gives some info on optical flow methods. All of these methods are implemented in the OpenCV library. If you use Python, I suggest using OpenCV ≥ 2.3 and its cv2 Python module.

The most simple version of background subtraction:
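A sketch of that simple version with NumPy (a per-pixel mean/standard-deviation model learned from some background frames; the factor k and the "how many pixels changed" threshold are tuning knobs):

    import numpy as np

    def background_model(frames):
        """Per-pixel mean and standard deviation from a list of 2D background frames."""
        stack = np.stack([f.astype(float) for f in frames])
        return stack.mean(axis=0), stack.std(axis=0)

    def foreground_mask(frame, mean, std, k=2.0):
        """Pixels that deviate from the background by more than k standard deviations."""
        return np.abs(frame.astype(float) - mean) > k * (std + 1e-6)

    # something probably changed if foreground_mask(...).mean() exceeds some small fraction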
More advanced versions may take into account the time series for every pixel and handle non-static scenes (like moving trees or grass).
The idea of optical flow is to take two or more frames and assign a velocity vector to every pixel (dense optical flow) or to some of them (sparse optical flow). To estimate sparse optical flow, you may use the Lucas-Kanade method (it is also implemented in OpenCV). Obviously, if there is a lot of flow (a high average over the max values of the velocity field), then something is moving in the frame, and subsequent images are more different.
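As a sketch, dense optical flow (the Farneback method) is also available in cv2, and its average magnitude gives a single "how much is moving" number; cv2.calcOpticalFlowPyrLK is the sparse Lucas-Kanade counterpart mentioned above. This assumes OpenCV 3 or newer:

    import cv2
    import numpy as np

    def mean_flow_magnitude(prev_gray, curr_gray):
        """Average magnitude of the dense optical flow between two grayscale frames."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        return np.hypot(flow[..., 0], flow[..., 1]).mean()

    # a large value suggests significant motion between the frames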
Comparing histograms may help to detect sudden changes between consecutive frames. This approach was used in Courbon et al., 2010.
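A sketch of a simple frame-to-frame histogram check with cv2 (this uses histogram correlation, not the exact metric of the cited paper; HISTCMP_CORREL assumes OpenCV 3 or newer):

    import cv2

    def histogram_similarity(prev_gray, curr_gray, bins=64):
        """Correlation between the grayscale histograms of two frames (1.0 = identical)."""
        h1 = cv2.calcHist([prev_gray], [0], None, [bins], [0, 256])
        h2 = cv2.calcHist([curr_gray], [0], None, [bins], [0, 256])
        cv2.normalize(h1, h1)
        cv2.normalize(h2, h2)
        return cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)

    # a value well below 1.0 hints at a sudden change between consecutive frames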