I am using OpenCV to prepare images from an iPhone camera for OCR, and I have been having trouble getting the results I need for an accurate scan. Here is the code I am using now:
cv::cvtColor(cvImage, cvImage, CV_BGR2GRAY);
cv::medianBlur(cvImage, cvImage, 3); // the kernel size must be an odd value greater than 1
cv::adaptiveThreshold(cvImage, cvImage, 255, CV_ADAPTIVE_THRESH_MEAN_C, CV_THRESH_BINARY, 5, 4);
This method takes a bit too long and does not give me good results. Any suggestions on how I could make it more effective?
After using Andrey's suggestion:
cv::Mat cvImage = [self cvMatFromUIImage:image];
cv::Mat res;
cv::cvtColor(cvImage, cvImage, CV_RGB2GRAY);
cvImage.convertTo(cvImage, CV_32FC1, 1.0/255.0);      // work in float, range 0..1
CalcBlockMeanVariance(cvImage, res);                   // estimate the background/illumination map
res = 1.0 - res;                                       // invert the background estimate
res = cvImage + res;                                   // compensate for uneven illumination
cv::threshold(res, res, 0.85, 1, cv::THRESH_BINARY);   // binarize
cv::resize(res, res, cv::Size(res.cols/2, res.rows/2));
image = [self UIImageFromCVMat:cvImage];
Method:
void CalcBlockMeanVariance(cv::Mat& Img, cv::Mat& Res, float blockSide = 21) // blockSide - set it greater for larger fonts in the image
{
    cv::Mat I;
    Img.convertTo(I, CV_32FC1);
    Res = cv::Mat::zeros(Img.rows / blockSide, Img.cols / blockSide, CV_32FC1);
    cv::Mat inpaintmask;
    cv::Mat patch;
    cv::Mat smallImg;
    cv::Scalar m, s;

    // For each block, store its mean if it has noticeable variance (text, edges), otherwise 0.
    for (int i = 0; i < Img.rows - blockSide; i += blockSide)
    {
        for (int j = 0; j < Img.cols - blockSide; j += blockSide)
        {
            patch = I(cv::Rect(j, i, blockSide, blockSide));
            cv::meanStdDev(patch, m, s);
            if (s[0] > 0.01) // Thresholding parameter (set it smaller for low-contrast images)
            {
                Res.at<float>(i / blockSide, j / blockSide) = m[0];
            }
            else
            {
                Res.at<float>(i / blockSide, j / blockSide) = 0;
            }
        }
    }

    // Inpaint the marked (textured) blocks from the surrounding flat background to get
    // a smooth illumination estimate, then scale it back up to the original image size.
    cv::resize(I, smallImg, Res.size());
    cv::threshold(Res, inpaintmask, 0.02, 1.0, cv::THRESH_BINARY);
    cv::Mat inpainted;
    smallImg.convertTo(smallImg, CV_8UC1, 255);
    inpaintmask.convertTo(inpaintmask, CV_8UC1);
    cv::inpaint(smallImg, inpaintmask, inpainted, 5, cv::INPAINT_TELEA);
    cv::resize(inpainted, Res, Img.size());
    Res.convertTo(Res, CV_32FC1, 1.0 / 255.0);
}
Any idea why I am getting this result? The OCR results are pretty good, but they would be better if I could get an image similar to the one you got. I am developing for iOS, if that matters. I had to use cvtColor because the method expects a single-channel image.
Since the lighting is almost uniform and the foreground is easily distinguished from the background, I think directly thresholding the image (using Otsu's method) is fine for OCR. (In the text regions this gives almost the same result as @Andrey's answer.)
OpenCV 3 Code in Python:
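A minimal sketch of this approach, assuming nothing beyond a grayscale conversion followed by Otsu thresholding (the file names are placeholders):

import cv2

# load the photo and convert it to grayscale
img = cv2.imread('input.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu's method picks the threshold automatically from the image histogram,
# so the threshold value passed in (0) is ignored
ret, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite('binary.png', binary)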
JAVA CODE: A long time has passed since this question was asked, but I've rewritten this code from C++ to Java in case someone needs it (I needed it while developing an app in Android Studio).