I'm trying to convert the following code from MATLAB to C++:
function data = process(data)
data = medfilt2(data, [7 7], 'symmetric');
mask = fspecial('gaussian', [35 35], 12);
data = imfilter(data, mask, 'replicate', 'same');
maximum = max(data(:));
data = 1 ./ ( data/maximum );
data(data > 10) = 16;
end
My problem is with medfilt2, which is a 2D median filter: I need it to support images with 10 bits per pixel and more.
1. I have looked into OpenCV: it has a 5x5 median filter which supports 16 bits, but its 7x7 filter only supports 8-bit pixels.
http://docs.opencv.org/2.4/modules/imgproc/doc/filtering.html?highlight=medianblur#medianblur
2. I have also looked into Intel IPP, but I can only see a 1D median filter: https://software.intel.com/en-us/node/502283
Is there a fast implementation of a 2D median filter? I'm looking for something like:
- Fast Median Search: An ANSI C Implementation using parallel programming and vectorized (AVX/SSE) operations...
- Two Dimensional Digital Signal Processing II. Transforms and median filters. Edited by T.S.Huang. Springer-Verlag. 1981.
There are more code examples in FAST MEDIAN FILTERING WITH IMPLEMENTATIONS IN C/C++/C#/VB.NET/Delphi.
I also found Median Filtering in Constant Time.
Motivated by the fact that OpenCV does not implement a 16-bit median filter for large kernel sizes (larger than 5), I tried three different strategies.
All of them are based on Huang's [2] sliding window algorithm. That is, the histogram is updated by removing and inserting pixel entries as the window slides from left to right. This is quite straightforward for 8-bit images and is already implemented in OpenCV. However, a large histogram of 65536 bins makes the computation a bit more difficult.
I used the C++ standard library where applicable, and did not implement Weiss' [3] additional optimization strategies.
1) A naive sorting implementation. I think this is the best starting point for arbitrary pixel types (floats in particular).
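A sketch of strategy (1), assuming `std::nth_element` is what "naive sorting" means here (it partially sorts a copy of the window so the middle element lands in place, which is enough for the median and cheaper than a full sort):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch of the naive approach: copy each k x k window into a scratch buffer
// and let std::nth_element place the median at the middle position.
// Works for any comparable pixel type, including float.
template <typename T>
T windowMedian(const std::vector<T>& padded, int PW, int x, int y, int k) {
    std::vector<T> win;
    win.reserve(static_cast<size_t>(k) * k);
    for (int wy = y; wy < y + k; ++wy)
        for (int wx = x; wx < x + k; ++wx)
            win.push_back(padded[static_cast<size_t>(wy) * PW + wx]);
    auto mid = win.begin() + win.size() / 2;
    std::nth_element(win.begin(), mid, win.end()); // partial sort around median
    return *mid;
}
```

In a real implementation the scratch buffer would be reused across pixels rather than reallocated per window.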
2) A sparse histogram. We wouldn't want to step over 65536 bins to find the median of each pixel, so how about storing a sparse histogram instead? Again, this is suitable for all pixel types, but it doesn't help if all pixels in the window are different (e.g. floats).
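One way to realize strategy (2) (this particular data structure is my guess, not necessarily what the author used) is an ordered map from value to count, so only the distinct values present in the window are stored:

```cpp
#include <cstdint>
#include <map>

// Sketch of the sparse-histogram idea: keep only the values actually present
// in the window, with their counts, in an ordered map. Insertion/removal is
// O(log d) in the number of distinct values d; the median is found by walking
// the map in value order until the cumulative count passes the median rank.
struct SparseHist {
    std::map<uint16_t, int> bins;
    int total = 0;
    void insert(uint16_t v) { ++bins[v]; ++total; }
    void remove(uint16_t v) {
        auto it = bins.find(v);
        if (--(it->second) == 0) bins.erase(it); // drop empty bins
        --total;
    }
    uint16_t median() const {                    // 0-indexed rank total/2
        int half = total / 2, sum = 0;
        for (const auto& [value, count] : bins)
            if ((sum += count) > half) return value;
        return 0;                                // unreachable if total > 0
    }
};
```

This is fast when the window holds few distinct values and degrades as they diverge, which matches the divergence reported in the measurements below.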
3) A dense histogram. This is the full dense histogram, but instead of searching a flat 65536-entry array, we make searching easier by dividing it into sub-bins. This makes insertion a bit slower (by a constant factor), but search a lot faster. I found 16 a good number.
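The exact sub-bin layout isn't shown above (the author reports 16 as a good factor); as one plausible arrangement, here is a two-level scheme with a coarse histogram of 256 bins, each covering 256 consecutive values, on top of the fine 65536-bin histogram:

```cpp
#include <cstdint>
#include <vector>

// Sketch of a two-level dense histogram for 16-bit values. A coarse histogram
// of 256 bins (each covering 256 consecutive values) sits on top of the full
// 65536-bin fine histogram. Insertion touches two counters instead of one;
// the median search scans at most 256 coarse bins plus 256 fine bins instead
// of up to 65536.
struct TwoLevelHist {
    std::vector<int> coarse = std::vector<int>(256, 0);
    std::vector<int> fine   = std::vector<int>(65536, 0);
    int total = 0;
    void insert(uint16_t v) { ++coarse[v >> 8]; ++fine[v]; ++total; }
    void remove(uint16_t v) { --coarse[v >> 8]; --fine[v]; --total; }
    uint16_t median() const {
        int half = total / 2, sum = 0, c = 0;
        while (sum + coarse[c] <= half) sum += coarse[c++]; // find coarse bin
        int f = c << 8;                        // first fine bin inside it
        while (sum + fine[f] <= half) sum += fine[f++];     // find fine bin
        return static_cast<uint16_t>(f);
    }
};
```

The split factor is a tunable trade-off: finer coarse bins mean a longer coarse scan but a shorter fine scan, and vice versa.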
[Figure: runtime comparison of the three methods.] I tested methods (1) red, (2) blue and (3) black against each other and against 8-bpp OpenCV (green). For all but OpenCV, the input image is 16-bpp grayscale. The dotted lines are for inputs truncated to the dynamic range [0, 255] and the solid lines for inputs scaled to [0, 8020] (via multiplication by 16 and smoothing, to add more variance to the pixel values).
What's interesting is the divergence of the sparse histogram as the variance of the pixel values increases. nth_element is always a safe bet, OpenCV is the fastest (if 8 bpp is OK), and the dense histogram trails behind.
I used Windows 7 on an 8 x 3.4 GHz machine with Visual Studio 10. My implementations were running multithreaded; the OpenCV implementation is single-threaded. Input image size: 2136x3201 (http://i.imgur.com/gg9Z2aB.jpg, from Vogue).
[2]: Huang, T.S.: "Two-Dimensional Digital Signal Processing II: Transforms and Median Filters", Springer-Verlag, 1981.
[3]: Weiss, B.: "Fast Median and Bilateral Filtering", 2006.
I found this online; it implements the same algorithm OpenCV uses, but extended to 16 bits and optimized with SSE:
https://github.com/aoles/EBImage/blob/master/src/medianFilter.c