Remove glare from a photo with OpenCV

Published 2020-06-02 05:17

Question:

So, I'm using OpenCV to capture a document, scan it and crop it. When there is no lighting in the room, it works perfectly. When there is some light in the room and there is a glare on the table near the document, it also grabs the glare as part of the rectangle.

How can one remove the glare from the photo?

Here is the code I'm using to get the image I want:

    Mat &image = *(Mat *) matAddrRgba;
    Rect bounding_rect;

    Mat thr(image.rows, image.cols, CV_8UC1);
    cvtColor(image, thr, CV_BGR2GRAY);                          // convert to grayscale
    threshold(thr, thr, 150, 255, THRESH_BINARY + THRESH_OTSU); // Otsu threshold

    vector<vector<Point> > contours; // storage for the detected contours
    vector<Vec4i> hierarchy;
    findContours(thr, contours, hierarchy, CV_RETR_CCOMP,
                 CV_CHAIN_APPROX_SIMPLE); // find the contours in the image
    // compareContourAreas sorts the contours by descending area,
    // so contours[0] is the largest one
    sort(contours.begin(), contours.end(), compareContourAreas);
    bounding_rect = boundingRect(contours[0]);

    rectangle(image, bounding_rect, Scalar(250, 250, 250), 5);

Here is a photo of the glare I'm talking about:

The things I have found are to use inRange with an appropriate scalar range for the glare colour and then use inpaint to remove the light. Here is a code snippet of that, but it always crashes, saying it needs an 8-bit image with channels.

    Mat &image = *(Mat *) matAddrRgba;

    Mat hsv, bgr, newImage, glareMask;
    // inpaint() needs an 8-bit source with 1 or 3 channels, so drop
    // the alpha channel of the RGBA frame first -- that is the crash
    cvtColor(image, bgr, COLOR_RGBA2BGR);
    cvtColor(bgr, hsv, COLOR_BGR2HSV);
    // mask very bright pixels (high V) of any hue/saturation
    inRange(hsv, Scalar(0, 0, 215), Scalar(180, 255, 255), glareMask);
    // do NOT overwrite the source with the mask; pass them separately
    inpaint(bgr, glareMask, newImage, 3, INPAINT_TELEA);

Answer 1:

I have dealt with this problem before, and changing lighting is always a problem in computer vision for the detection and description of images. I actually trained a classifier on the HSV colour space instead of RGB/BGR, which mapped images with changing incident light to ones without the sudden bright/dark patches (that was the label). This worked quite well for me; however, the images always had the same background (I don't know if that is true in your case as well).

Of course, machine learning can solve the problem, but it might be overkill. While I was doing the above, I came across CLAHE (Contrast Limited Adaptive Histogram Equalization), which worked pretty well for local contrast enhancement. I suggest you try this before detecting contours. Additionally, you might want to work in a different colour space, such as HSV/Lab/Luv, instead of RGB/BGR for this purpose. You can apply CLAHE to each channel separately and then merge them.

Let me know if you need more information. I implemented this with your image in Python and it works quite nicely, but I will leave the coding to you. I might update with the results I got in a couple of days (hoping that you get there first ;) ). Hope it helps.