I know that original_image * filter = blur_image, where * denotes convolution. Thus, filter = ifft(fft(blur)/fft(original)).
I have the original image, the known filter, and the known blurred image. I want to compute the filter using fft and ifft and compare it with the known filter. I tried the following in MATLAB:
orig = imread("orig.png")
blur = imread("blur.png")
fftorig = fft(orig)
fftblur = fft(blur)
div = fftblur/fftorig
conv = ifft(div)
The result doesn't make sense. I see that div contains many NaN values, and fftblur and fftorig both contain many zeros. Do I need to do something else, such as use fftshift?
EDIT:
To make this easier to understand, I'm now using the images from: http://matlabgeeks.com/tips-tutorials/how-to-blur-an-image-with-a-fourier-transform-in-matlab-part-i/
I decided to compute the kernel relating origimage and blurimageunpad from that link using:
kernelc = real(ifft2(fft2(origimage)./fft2(blurimageunpad)));
imagesc(kernelc)
colormap gray
Here's the result:
https://imgur.com/a/b7uvj
This clearly doesn't match the Gaussian blur mentioned near the top of that link.
This is where the Wiener filter comes in handy. It is typically applied for deconvolution -- estimating the original, unfiltered image from the filtered image and the convolution kernel. However, because of the commutativity of convolution, deconvolution is exactly the same problem you are trying to solve: if g = f * h, then you can estimate f from g and h (deconvolution), or you can estimate h from g and f, in exactly the same manner.
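In the frequency domain (capital letters denoting the 2-D DFTs), g = f * h becomes G = F .* H, which is symmetric in F and H: divide G by whichever factor is known to estimate the other, subject to the regularization discussed below.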
Deconvolution
The Wiener filter Wikipedia page is heavy on the math, but put simply: you look for frequencies where your kernel has small values (compared to the noise in your image), and you don't apply the division at those frequencies. This is called regularization: you impose constraints to keep noise from dominating the result.
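With a constant noise-to-signal parameter K, the simplified form used in the code below is

w = conj(H) ./ (abs(H).^2 + K)

and the deblurred estimate is real(ifft2(w .* fft2(in))).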
This is the MATLAB code, using in for the blurred input image and psf for the spatial-domain filter:
% Create a PSF and a blurred image:
original = double(imread('cameraman.tif')); % convert to double: fft2 does not accept integer arrays
sz = size(original);
psf = zeros(sz);
psf(fix(sz(1)/2)+(-5:5),fix(sz(2)/2)+(-10:10)) = 1; % rectangular kernel at the image center
psf = psf/sum(psf(:)); % normalize the kernel to sum to 1
% in = conv2(original,psf,'same'); % gives boundary issues, less of a problem for smaller psf
in = uint8(real(ifft2(fft2(original).*fft2(ifftshift(psf))))); % ifftshift moves the kernel origin to (1,1)
% Input parameter:
K = 1e-2;
% Compute frequency-domain PSF:
H = fft2(ifftshift(psf));
% Compute the Wiener filter:
cH = conj(H);
HcH = H .* cH;
K = K * max(max(abs(HcH)));
w = cH ./ (HcH + K);
% Apply the Wiener filter in the frequency domain:
out = real(ifft2(w .* fft2(double(in))));
Here is the image in and the output of the Wiener filter for three different values of K:
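The result figures are not reproduced here; to regenerate the comparison yourself, something like the following sketch works (the three K values are illustrative, not necessarily the ones used for the original figures):

% Sketch: compare several regularization strengths
Ks = [1e-4, 1e-2, 1e-1];
figure
subplot(2,2,1); imshow(in); title('blurred input')
for ii = 1:numel(Ks)
    k = Ks(ii) * max(max(abs(HcH)));          % same scaling as above
    w = cH ./ (HcH + k);
    out = real(ifft2(w .* fft2(double(in))));
    subplot(2,2,ii+1); imshow(uint8(out)); title(sprintf('K = %g', Ks(ii)))
end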
As you can see, selecting the right K is very important. If it is too low, there is not sufficient regularization. If it is too high, there are too many artefacts (insufficient deconvolution). This parameter K depends on the input image, though there are tricks to estimate it. Also, a simple filter like this can never perfectly undo a harsh blur like the one I applied here. More advanced iterative approaches are necessary to obtain a better deconvolution.
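For reference, the Image Processing Toolbox ships with deconvwnr (the same Wiener idea) and deconvlucy (one such iterative method); the parameter values here are illustrative:

% Requires the Image Processing Toolbox; nsr is not scaled like K above
out_wnr  = deconvwnr(in, psf, 1e-2);  % Wiener deconvolution, nsr = 1e-2
out_lucy = deconvlucy(in, psf, 20);   % Richardson-Lucy, 20 iterations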
Estimating the kernel
Let's turn this around and estimate psf from original and in. It is possible to do the division directly (equivalent to the Wiener filter with K=0), but the output is quite noisy: wherever the original image has very small frequency-domain magnitudes, you'll get poor estimates. Picking the right K leads to a better estimate of the kernel. With a K that is too large, the result is again a poor approximation.
% Direct estimation by division:
psf1 = fftshift(real(ifft2(fft2(double(in))./fft2(original))));
% Wiener filter approach:
K = 1e-7;
H = fft2(original);
cH = conj(H);
HcH = H .* cH;
K = K * max(max(abs(HcH)));
w = cH ./ (HcH + K);
psf2 = fftshift(real(ifft2(w .* fft2(double(in)))));
(zoom in to observe the noise)
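To view the true and estimated kernels side by side, a quick display sketch:

% Sketch: compare the true and estimated kernels
figure
subplot(1,3,1); imagesc(psf);  axis image; title('true psf')
subplot(1,3,2); imagesc(psf1); axis image; title('direct division')
subplot(1,3,3); imagesc(psf2); axis image; title('Wiener, K = 1e-7')
colormap gray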
Edit
I downloaded the images from the website you linked. I used the first result, without padding, and cropped the frames from the images as best I could, leaving only the image data:
original = imread('original.png');
original = double(original(33:374,120:460,1)); % keep one channel, convert for fft2
in = imread('blur_nopad.png');
in = in(33:374,120:460,1);
Then I used the code exactly as above, with the same K and everything, and got a pretty good result, showing a slightly shifted Gaussian kernel.
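To quantify that shift, one can locate the kernel's peak relative to the center pixel (after fftshift, the DC bin sits at fix(size/2)+1):

% Sketch: measure the estimated kernel's offset from the center pixel
[~,idx] = max(psf2(:));
[row,col] = ind2sub(size(psf2), idx);
shift = [row,col] - (fix(size(psf2)/2) + 1)  % [0,0] would be perfectly centered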
I then repeated the same with the second result (after padding) and got a worse result, but still very recognizable as a Gaussian.