I'm trying to implement Gaussian blur for a school project. I need to write both a CPU and a GPU implementation so I can compare their performance.
I am not quite sure that I understand how Gaussian blur works, so my first question is whether I have understood it correctly.
Here's what I do now: I use the equation from Wikipedia (http://en.wikipedia.org/wiki/Gaussian_blur) to calculate the filter. In the 2D case, for each pixel I multiply the RGB values of that pixel and its surrounding pixels by the corresponding filter weights, then sum the products to get the new RGB values for the pixel. In the 1D case, I apply the filter first horizontally and then vertically, which should give the same result if I understand things correctly. Is this exactly the same result as when the 2D filter is applied?
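In case my description is unclear, here is a rough sketch of what I mean (Python/NumPy on a single colour channel with clamped borders; the function names, the `sigma`/`radius` parameters and the border handling are just placeholders to illustrate the idea, not my actual code):

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    # Sample the 1D Gaussian from the Wikipedia formula and normalise it to sum to 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur_2d(channel, sigma, radius):
    # Full 2D filter: the 2D kernel is the outer product of the 1D kernel with itself
    channel = np.asarray(channel, dtype=np.float64)
    k1 = gaussian_kernel_1d(sigma, radius)
    k2 = np.outer(k1, k1)
    h, w = channel.shape
    out = np.zeros_like(channel)
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at the image border
                    xx = min(max(x + dx, 0), w - 1)
                    acc += channel[yy, xx] * k2[dy + radius, dx + radius]
            out[y, x] = acc
    return out

def blur_separable(channel, sigma, radius):
    # Same blur as a horizontal 1D pass followed by a vertical 1D pass
    channel = np.asarray(channel, dtype=np.float64)
    k1 = gaussian_kernel_1d(sigma, radius)
    h, w = channel.shape
    tmp = np.zeros_like(channel)
    out = np.zeros_like(channel)
    for y in range(h):            # horizontal pass
        for x in range(w):
            acc = 0.0
            for dx in range(-radius, radius + 1):
                xx = min(max(x + dx, 0), w - 1)
                acc += channel[y, xx] * k1[dx + radius]
            tmp[y, x] = acc
    for y in range(h):            # vertical pass on the intermediate result
        for x in range(w):
            acc = 0.0
            for dy in range(-radius, radius + 1):
                yy = min(max(y + dy, 0), h - 1)
                acc += tmp[yy, x] * k1[dy + radius]
            out[y, x] = acc
    return out
```

I clamp coordinates at the border in both versions so that comparing the two outputs is an apples-to-apples comparison.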
My other question is about how the algorithm can be optimized. I have read that the Fast Fourier Transform is applicable to Gaussian blur, but I can't figure out how the two relate. Can someone point me in the right direction?
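For what it's worth, my rough understanding so far is that convolution in the spatial domain becomes point-wise multiplication in the frequency domain, so I imagine something like the sketch below (again Python/NumPy, and I'm not at all sure this is the intended way to apply it; note that it wraps around at the borders instead of clamping):

```python
import numpy as np

def blur_fft(channel, sigma):
    channel = np.asarray(channel, dtype=np.float64)
    h, w = channel.shape
    # Build a 2D Gaussian kernel the same size as the image, centred at (0, 0)
    # with wrap-around distances, so the FFT-based convolution is not shifted
    y = np.minimum(np.arange(h), h - np.arange(h))
    x = np.minimum(np.arange(w), w - np.arange(w))
    yy, xx = np.meshgrid(y, x, indexing="ij")
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    # Convolution theorem: convolve in space == multiply in frequency
    return np.real(np.fft.ifft2(np.fft.fft2(channel) * np.fft.fft2(kernel)))
```

I'm not sure whether this is the kind of optimization that was meant, or whether it only pays off for very large kernels.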
Thanks.