How can I quantify the difference between two images?

Posted 2019-01-02 16:43

Here's what I would like to do:

I'm taking pictures with a webcam at regular intervals, sort of like a time-lapse thing. However, if nothing has really changed (that is, the picture pretty much looks the same), I don't want to store the latest snapshot.

I imagine there's some way of quantifying the difference, and I would have to empirically determine a threshold.

I'm looking for simplicity rather than perfection. I'm using python.

20 answers
春风洒进眼中
#2 · 2019-01-02 16:55

A somewhat more principled approach is to use a global descriptor to compare images, such as GIST or CENTRIST. A hash function, as described here, also provides a similar solution.
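
As an illustration of the hashing idea, here is a minimal average-hash (aHash) sketch using Pillow. GIST and CENTRIST need dedicated libraries, but even a simple perceptual hash gives a global signature you can threshold on. The function names here are just illustrative, not taken from any particular library:

from PIL import Image

def average_hash(path, hash_size=8):
    # Shrink to a tiny grayscale thumbnail, then mark each pixel
    # as above/below the mean intensity.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    return [1 if p > avg else 0 for p in pixels]

def hamming_distance(h1, h2):
    # Number of differing bits; a small distance means similar images.
    return sum(a != b for a, b in zip(h1, h2))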

人间绝色
#3 · 2019-01-02 16:56

Two popular and relatively simple methods are: (a) the Euclidean distance already suggested, or (b) normalized cross-correlation. Normalized cross-correlation tends to be noticeably more robust to lighting changes than simple cross-correlation. Wikipedia gives a formula for the normalized cross-correlation. More sophisticated methods exist too, but they require quite a bit more work.

In numpy syntax, assuming i1 and i2 are 2-D grayscale images stored as float arrays:

import numpy as np

dist_euclidean = np.sqrt(np.sum((i1 - i2) ** 2)) / i1.size

dist_manhattan = np.sum(np.abs(i1 - i2)) / i1.size

dist_ncc = np.sum((i1 - i1.mean()) * (i2 - i2.mean())) / (
    (i1.size - 1) * i1.std(ddof=1) * i2.std(ddof=1))

忆尘夕之涩
#4 · 2019-01-02 16:56

I had the same problem and wrote a simple Python module which compares two same-size images using Pillow's ImageChops to create a black-and-white diff image and sums up the histogram values.

You can get this score directly, or as a percentage relative to a completely black vs. completely white diff.

It also contains a simple is_equal function, with the option to supply a fuzzy threshold; if the difference is at or below that threshold, the images are treated as equal.

The approach is not very elaborate, but it may be of use to others out there struggling with the same issue.

https://pypi.python.org/pypi/imgcompare/
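
For reference, the underlying idea can be sketched directly with Pillow. This is not the imgcompare API, just an illustration of diffing two equally sized images and summing up the histogram of the diff:

from PIL import Image, ImageChops

def diff_percent(path_a, path_b):
    # Both images must have the same dimensions.
    a = Image.open(path_a).convert("L")
    b = Image.open(path_b).convert("L")
    diff = ImageChops.difference(a, b)
    # Weight each intensity level of the diff image by how far it is from zero.
    hist = diff.histogram()
    total = sum(level * count for level, count in enumerate(hist))
    max_total = 255 * a.size[0] * a.size[1]  # a completely white diff image
    return 100.0 * total / max_total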

余生请多指教
#5 · 2019-01-02 16:56

I think you could simply compute the Euclidean distance (i.e. the square root of the sum of squared differences, pixel by pixel) between the luminance of the two images, and consider them equal if it falls below some empirical threshold. And you would be better off wrapping it in a C function.
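
A minimal numpy/Pillow sketch of that idea; the threshold default is a placeholder you would have to determine empirically, as noted above (and numpy already does the heavy lifting in C):

import numpy as np
from PIL import Image

def should_store(new_path, last_path, threshold=0.05):
    # Assumes both snapshots have the same resolution.
    # Euclidean distance between luminance images, normalized by pixel count.
    a = np.asarray(Image.open(new_path).convert("L"), dtype=float)
    b = np.asarray(Image.open(last_path).convert("L"), dtype=float)
    dist = np.sqrt(np.sum((a - b) ** 2)) / a.size
    return dist > threshold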

无与为乐者.
#6 · 2019-01-02 16:58

A trivial thing to try:

Resample both images to small thumbnails (e.g. 64 x 64) and compare the thumbnails pixel by pixel against a certain threshold. If the original images are almost the same, the resampled thumbnails will be very similar or even exactly the same. This method takes care of the noise that can occur especially in low-light scenes. It may work even better if you convert to grayscale.
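
A rough sketch of that approach with Pillow and numpy; the thumbnail size and threshold are assumptions to tune for your scene:

import numpy as np
from PIL import Image

def has_changed(path_a, path_b, size=(64, 64), threshold=10.0):
    # Downscale, convert to grayscale, then compare the mean absolute
    # pixel difference against the threshold.
    a = np.asarray(Image.open(path_a).convert("L").resize(size), dtype=float)
    b = np.asarray(Image.open(path_b).convert("L").resize(size), dtype=float)
    return np.abs(a - b).mean() > threshold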

何处买醉
#7 · 2019-01-02 16:58

Most of the answers given won't deal with lighting levels.

I would first normalize the image to a standard light level before doing the comparison.
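
One simple way to do that normalization before comparing, sketched here under the assumption that the image is a grayscale numpy array:

import numpy as np

def normalize_lighting(img):
    # Shift to zero mean and unit variance so overall brightness changes
    # do not dominate the comparison.
    img = np.asarray(img, dtype=float)
    std = img.std()
    return (img - img.mean()) / std if std > 0 else img - img.mean()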
