If histogram equalization is applied to a poorly contrasted image, its features become more visible. However, it also introduces a large amount of grain/speckle noise. Using the blurring functions already available in OpenCV is not desirable: I'll be doing text detection on the image later, and the letters would become unrecognizable. So what preprocessing techniques should be applied?
Standard blur techniques that convolve the image with a kernel (e.g. Gaussian blur, box filter, etc.) act as a low-pass filter and distort the high-frequency text. If you have not done so already, try cv::bilateralFilter() or cv::medianBlur().
If neither of these algorithms works, you should look into other edge-preserving smoothing algorithms.

If you imagine the image as a three-dimensional space (x, y, intensity), traditional filtering replaces the value of each pixel with the weighted average of all pixels in a circle centered around it. Bilateral filtering does the same, but uses a three-dimensional sphere centered at the pixel. Since a well-defined edge looks like a steep cliff in this space, the sphere around an edge pixel contains only points from the pixel's own side of the edge, so edge pixels are averaged only with their peers and the edge stays sharp. You can get a more detailed explanation of the bilateral filter and some sample output here.
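For illustration, here is a minimal sketch of that pipeline in OpenCV C++. The file names and all filter parameters (diameter 9 and sigma values 75 for the bilateral filter, kernel size 3 for the median blur) are assumptions you would tune on your own images, not values prescribed by the answer:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // "input.png" is a placeholder path for the poorly contrasted image.
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    // Histogram equalization brings out features but amplifies noise.
    cv::Mat equalized;
    cv::equalizeHist(gray, equalized);

    // Edge-preserving option 1: bilateral filtering averages only over
    // nearby pixels of similar intensity, so letter edges stay sharp.
    // Note bilateralFilter cannot run in place, hence the separate Mat.
    cv::Mat bilateral;
    cv::bilateralFilter(equalized, bilateral, 9, 75.0, 75.0);

    // Edge-preserving option 2: a small median blur removes speckle
    // noise while distorting edges less than a comparable Gaussian blur.
    cv::Mat median;
    cv::medianBlur(equalized, median, 3);

    cv::imwrite("bilateral.png", bilateral);
    cv::imwrite("median.png", median);
    return 0;
}
```

Comparing the two outputs on your own data is the quickest way to decide which filter degrades the text less before the detection stage.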