I am trying to implement Gaussian blur. I have already computed the mask using the 2D function provided on Wikipedia, so I currently have a 2D matrix. I understand that to improve the running time, one can avoid standard convolution because the Gaussian is separable. In other words, as this answer says, "In the Gaussian blur case it breaks down to two one dimensional operations".
This page has been helpful; however, I do not understand how to obtain a 1D mask from the existing 2D mask. For instance, that page converts the 2D mask in figure 3 to the 1D mask in figure 4. How does one do this conversion?
[EDIT]
Is it sufficient to compute a 1D mask to begin with, and apply it in the x direction and then in the y direction?
[EDIT 2]
I have figured out that yes, one can simply generate a 1D mask and use it in both the x and y directions. The problem, however, is the quality of the resulting image when I apply my Gaussian filter.
I assume there are manipulations that need to be done when one computes the dot product of the mask and the selected section of the original image. What could cause the resultant image to look like this?
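For concreteness, here is a minimal sketch of the two-pass approach I am describing (MATLAB; the image im, the sigma, and the 3-tap support are illustrative assumptions, not my exact code):
% Two-pass Gaussian blur sketch: im is assumed to be a grayscale image
% of doubles; sigma and the kernel support are placeholder values.
sigma = 0.85;
x = -1:1;                            % 3-tap support
g = exp(-x.^2 / (2*sigma^2));        % 1D Gaussian samples
g = g / sum(g);                      % normalize so overall brightness is preserved
blurred = conv2(g', g, im, 'same');  % columns first (y pass), then rows (x pass)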
[EDIT]
In addition to @Jeremie Miserez's answer, this page was extremely helpful. It also has code if one needs to see how this is done.
This article on separable convolution should clear things up a bit.
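As a quick illustration of the point (a sketch, assuming a grayscale double image im, a 1D column filter h1, and a 1D row filter h2 whose product h1*h2 is the 2D kernel):
% Separable convolution in MATLAB: conv2(u, v, A) convolves the columns
% of A with u and then the rows with v. For a rank-1 kernel h = h1*h2
% this matches full 2-D convolution, at O(k) instead of O(k^2) work per pixel.
out_sep  = conv2(h1, h2, im, 'same');
out_full = conv2(im, h1*h2, 'same');
max(abs(out_sep(:) - out_full(:)))   % differs only by rounding, ~eps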
I just recently had to do this, so here goes: As an example, we use a kernel based on a 2-dimensional Gaussian distribution with a standard deviation of 0.85. We'll want a 3x3 kernel (MATLAB code).
>> h = fspecial('gaussian',3,0.85)
h =
0.0626 0.1250 0.0626
0.1250 0.2497 0.1250
0.0626 0.1250 0.0626
Note that the sum of all entries is 1, i.e. the brightness of an image will not change if you apply this as a filter:
>> sum(sum(h))
ans =
1
Also note that the rank is 1, so the kernel is separable: there are two vectors h1 and h2 that result in h when multiplied, h1*h2 = h.
>> rank(h)
ans =
1
Great, we can proceed. Note that if the rank is greater than 1, you could still get an approximation, but you might need to use a different technique (see links at the end).
Without going into the details, we do singular value decomposition using the svd function. This is a standard function that computes U*S*V' = h and is available in many math libraries.
>> [U,S,V] = svd(h)
U =
-0.4085 0.9116 -0.0445
-0.8162 -0.3867 -0.4292
-0.4085 -0.1390 0.9021
S =
0.3749 0 0
0 0.0000 0
0 0 0.0000
V =
-0.4085 -0.3497 -0.8431
-0.8162 0.5534 0.1660
-0.4085 -0.7559 0.5115
We now know that U*S*V' = h (V' is the transpose of V). Now, for a rank-1 matrix, S should have just one nonzero singular value; the rest should be 0 (see the discussion at the end of this answer for more on that).
So what we need now is h1*h2 = h. We can split S into two factors so that s1*s2 = S. Thus we get U*s1*s2*V' = h, and then (U*s1)*(s2*V') = h.
You can choose how to split S however you want, but using the square root divides S equally between h1 and h2:
>> h1 = U*sqrt(S)
h1 =
-0.2501 0.0000 -0.0000
-0.4997 -0.0000 -0.0000
-0.2501 -0.0000 0.0000
>> h2 = sqrt(S)*V'
h2 =
-0.2501 -0.4997 -0.2501
-0.0000 0.0000 -0.0000
-0.0000 0.0000 0.0000
Note that we don't need the extra rows/columns of zeros, so we can simplify:
>> h1 = U(:,1)*sqrt(S(1,1))
h1 =
-0.2501
-0.4997
-0.2501
>> h2 = sqrt(S(1,1))*V(:,1)'
h2 =
-0.2501 -0.4997 -0.2501
Note that the minus signs cancel each other out, so you could also just remove them from h1 and h2 if you like:
h1 =
0.2501
0.4997
0.2501
h2 =
0.2501 0.4997 0.2501
>> h1*h2
ans =
0.0626 0.1250 0.0626
0.1250 0.2497 0.1250
0.0626 0.1250 0.0626
Which is (almost) the same as what we had before:
>> h1*h2 - h
ans =
1.0e-16 *
0 0.2776 -0.1388
0 0.5551 -0.2776
0 0.2776 -0.1388
Note that machine precision eps is just:
>> eps
ans =
2.2204e-16
so the error is minuscule and, in this case, due to floating-point imprecision. If you have an error much larger than this, you most likely ignored the remaining singular values and calculated a rank-1 approximation of h. In that case, you might want to look into other options to get better approximations, e.g. this implementation or this implementation.
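To tie it all together, here is an end-to-end sketch of the whole procedure (it assumes fspecial from the Image Processing Toolbox and uses a random matrix as a stand-in image):
% Build the kernel, separate it via SVD, filter with two 1-D passes,
% and check the result against direct 2-D convolution.
h = fspecial('gaussian', 3, 0.85);    % 3x3 Gaussian kernel, sigma = 0.85
[U, S, V] = svd(h);                   % h = U*S*V'
h1 = U(:,1) * sqrt(S(1,1));           % 3x1 column filter
h2 = sqrt(S(1,1)) * V(:,1)';          % 1x3 row filter

im = rand(256);                       % stand-in grayscale image
out_sep = conv2(h1, h2, im, 'same');  % two 1-D passes
out_dir = conv2(im, h, 'same');       % direct 2-D convolution
max(abs(out_sep(:) - out_dir(:)))     % on the order of eps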