Optical flow: ignoring sparse motions

Posted on 2020-03-01 04:57

Question:

We're working on an image analysis project where we need to identify the objects that have disappeared from or appeared in a scene. Here are two images, one captured before an action is performed by the surgeon and the other afterwards.

BEFORE: AFTER:

First, we just calculated the difference between the two images, and here is the result (note that I added 128 to the result Mat just to get a nicer image):

(AFTER - BEFORE) + 128

The goal is to detect that the cup (red arrow) has disappeared from the scene and that the syringe (black arrow) has entered it; in other words, we should detect ONLY the regions that correspond to objects that left or entered the scene. Also, it's obvious that the objects in the top left of the scene have shifted a bit from their initial position. I thought about optical flow, so I used OpenCV C++ to calculate Farneback's dense flow to see if it's enough for our case. Here is the result we got, followed by the code we wrote:

FLOW:

// Draws the dense flow field on cflowmap, sampling one vector every 'step' pixels
// (the unnamed double parameter, a scale factor in the OpenCV sample, is unused here).
void drawOptFlowMap(const Mat& flow, Mat& cflowmap, int step, double, const Scalar& color)
{
    cout << flow.channels() << " / " << flow.rows << " / " << flow.cols << endl;
    for(int y = 0; y < cflowmap.rows; y += step)
        for(int x = 0; x < cflowmap.cols; x += step)
        {
            const Point2f& fxy = flow.at<Point2f>(y, x);
            line(cflowmap, Point(x,y), Point(cvRound(x+fxy.x), cvRound(y+fxy.y)), color);
            circle(cflowmap, Point(x,y), 1, color, -1);
        }
}

void MainProcessorTrackingObjects::diffBetweenImagesToTestTrackObject(string pathOfImageCaptured, string pathOfImagesAfterOneAction, string pathOfResultsFolder)
{
    //Preprocessing step...

    string pathOfImageBefore = StringUtils::concat(pathOfImageCaptured, imageCapturedFileName);
    string pathOfImageAfter = StringUtils::concat(pathOfImagesAfterOneAction, *it);

    Mat imageBefore = imread(pathOfImageBefore);
    Mat imageAfter = imread(pathOfImageAfter);

    Mat imageResult = (imageAfter - imageBefore) + 128;
    //            absdiff(imageAfter, imageBefore, imageResult);
    string imageResultPath = StringUtils::stringFormat("%s%s-color.png",pathOfResultsFolder.c_str(), fileNameWithoutFrameIndex.c_str());
    imwrite(imageResultPath, imageResult);

    Mat imageBeforeGray, imageAfterGray;
    cvtColor( imageBefore, imageBeforeGray, CV_BGR2GRAY ); // imread loads images as BGR, not RGB
    cvtColor( imageAfter, imageAfterGray, CV_BGR2GRAY );

    Mat imageResultGray = (imageAfterGray - imageBeforeGray) + 128;
    //            absdiff(imageAfterGray, imageBeforeGray, imageResultGray);
    string imageResultGrayPath = StringUtils::stringFormat("%s%s-gray.png",pathOfResultsFolder.c_str(), fileNameWithoutFrameIndex.c_str());
    imwrite(imageResultGrayPath, imageResultGray);


    //*** Compute Farneback dense optical flow
    // (parameters: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags)
    Mat opticalFlow;
    calcOpticalFlowFarneback(imageBeforeGray, imageAfterGray, opticalFlow, 0.5, 3, 15, 3, 5, 1.2, 0);

    drawOptFlowMap(opticalFlow, imageBefore, 5, 1.5, Scalar(0, 255, 255));
    string flowPath = StringUtils::stringFormat("%s%s-flow.png",pathOfResultsFolder.c_str(), fileNameWithoutFrameIndex.c_str());
    imwrite(flowPath, imageBefore);

    break; // only the first image pair is processed (the enclosing loop is elided above)
}

And to check how accurate this optical flow is, I wrote this small piece of code, which calculates (IMAGEAFTER + FLOW) - IMAGEBEFORE:

//Reference method just to see the accuracy of the optical flow calculation
Mat accuracy = Mat::zeros(imageBeforeGray.rows, imageBeforeGray.cols, imageBeforeGray.type());

for(int y = 0; y < imageAfter.rows; y++)
    for(int x = 0; x < imageAfter.cols; x++)
    {
        const Point2f& fxy = opticalFlow.at<Point2f>(y, x);
        // clamp the displaced coordinates so the lookup stays inside the image
        int yAfter = min(max(cvRound(y + fxy.y), 0), imageAfterGray.rows - 1);
        int xAfter = min(max(cvRound(x + fxy.x), 0), imageAfterGray.cols - 1);
        uchar intensityPointCalculated = imageAfterGray.at<uchar>(yAfter, xAfter);
        uchar intensityPointBefore = imageBeforeGray.at<uchar>(y, x);
        uchar intensityResult = ((intensityPointCalculated - intensityPointBefore) / 2) + 128;
        accuracy.at<uchar>(y, x) = intensityResult;
    }
string validationPixelBased = StringUtils::stringFormat("%s%s-validationPixelBased.png", pathOfResultsFolder.c_str(), fileNameWithoutFrameIndex.c_str());
imwrite(validationPixelBased, accuracy);

The ((intensityPointCalculated - intensityPointBefore) / 2) + 128 part is just there to remap the signed difference into the displayable 0-255 range, so the result is a comprehensible image.

IMAGE RESULT:

Since this detects all the regions that have shifted, entered, or left the scene, we think optical flow alone is not enough to detect only the regions representing the objects that disappeared from or appeared in the scene. Is there any way to ignore the sparse motions detected by the optical flow? Or is there an alternative way to detect what we need?

Answer 1:

Let's say the goal here is to identify the regions with appeared/disappeared objects, but not the ones that are present in both pictures and have merely moved position.

Optical flow should be a good way to go, as you have already done. However, the issue is how the outcome is evaluated. As opposed to a pixel-to-pixel diff, which has no tolerance to rotation/scaling variances, you can do feature matching (SIFT etc.; check out here for what you can use with OpenCV).
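For example, a minimal descriptor-matching sketch (just an illustration, not part of my pipeline below; it assumes the non-free SIFT module and the imageBeforeGray/imageAfterGray Mats from your question) could look like this, where before-keypoints whose best match fails Lowe's ratio test become candidates for objects that left the scene:

// Sketch only: SIFT keypoints + descriptors on both frames, then a ratio test.
initModule_nonfree(); // required for the non-free SIFT module

Ptr<FeatureDetector> siftDetector = FeatureDetector::create("SIFT");
Ptr<DescriptorExtractor> siftExtractor = DescriptorExtractor::create("SIFT");

vector<KeyPoint> kp1, kp2;
Mat desc1, desc2;
siftDetector->detect(imageBeforeGray, kp1);
siftDetector->detect(imageAfterGray, kp2);
siftExtractor->compute(imageBeforeGray, kp1, desc1);
siftExtractor->compute(imageAfterGray, kp2, desc2);

// 2-nearest-neighbour matching with Lowe's ratio test
BFMatcher matcher(NORM_L2);
vector<vector<DMatch> > knn;
matcher.knnMatch(desc1, desc2, knn, 2);

vector<KeyPoint> unmatched; // keypoints of imageBefore with no reliable match in imageAfter
for (size_t i = 0; i < knn.size(); i++) {
    if (knn[i].size() < 2 || knn[i][0].distance > 0.75f * knn[i][1].distance)
        unmatched.push_back(kp1[i]);
}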

Here's what I get with Good Features To Track from your before image.

GoodFeaturesToTrackDetector detector;
vector<KeyPoint> keyPoints;
vector<Point2f> kpBefore, kpAfter;
detector.detect(imageBefore, keyPoints);
KeyPoint::convert(keyPoints, kpBefore); // convert to Point2f for calcOpticalFlowPyrLK

Instead of dense optical flow, you could use sparse flow and track only the features:

vector<uchar> featuresFound;
vector<float> err;
calcOpticalFlowPyrLK(imageBeforeGray, imageAfterGray, kpBefore, kpAfter, featuresFound, err, Size(PATCH_SIZE, PATCH_SIZE));

The output includes the featuresFound flags and err values. I simply used a threshold here to distinguish features that merely moved from unmatched ones that disappeared.

vector<KeyPoint> kpNotMatched;
for (int i = 0; i < kpBefore.size(); i++) {
    if (!featuresFound[i] || err[i] > ERROR_THRESHOLD) {
        kpNotMatched.push_back(KeyPoint(kpBefore[i], 1));
    }
}
Mat output;
drawKeypoints(imageBefore, kpNotMatched, output, Scalar(0, 0, 255));  

The remaining incorrectly matched features can be filtered out. Here I used simple mean filtering plus thresholding to get the mask of the region where the object disappeared.

Mat mask = Mat::zeros(imageBefore.rows, imageBefore.cols, CV_8UC1);
for (int i = 0; i < kpNotMatched.size(); i++) {
    mask.at<uchar>(kpNotMatched[i].pt) = 255;
}
blur(mask, mask, Size(BLUR_SIZE, BLUR_SIZE));
threshold(mask, mask, MASK_THRESHOLD, 255, THRESH_BINARY);

Then find its convex hull to show the region in the original image (in yellow).

vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours( mask, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );

vector<vector<Point> >hull( contours.size() );
for( int i = 0; i < contours.size(); i++ ) {
    convexHull(Mat(contours[i]), hull[i], false);
}
for( int i = 0; i < contours.size(); i++ ) {
    drawContours( output, hull, i, Scalar(0, 255, 255), 3, 8, vector<Vec4i>(), 0, Point() );
}

And simply do it the other way round (matching from imageAfter to imageBefore) to get the regions that appeared. :)
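A sketch of that reverse pass (purely illustrative, reusing the detector, PATCH_SIZE and ERROR_THRESHOLD names from above) might look like this:

// Track features detected in imageAfter back into imageBefore; features that fail
// to track (or track with a large error) mark the newly appeared objects.
vector<KeyPoint> keyPointsAfterImg;
detector.detect(imageAfter, keyPointsAfterImg);

vector<Point2f> ptsAfter, ptsBackInBefore;
KeyPoint::convert(keyPointsAfterImg, ptsAfter);

vector<uchar> foundBack;
vector<float> errBack;
calcOpticalFlowPyrLK(imageAfterGray, imageBeforeGray, ptsAfter, ptsBackInBefore, foundBack, errBack, Size(PATCH_SIZE, PATCH_SIZE));

vector<KeyPoint> kpAppeared;
for (size_t i = 0; i < ptsAfter.size(); i++) {
    if (!foundBack[i] || errBack[i] > ERROR_THRESHOLD)
        kpAppeared.push_back(KeyPoint(ptsAfter[i], 1));
}
// ...then apply the same blur/threshold/convex-hull steps as above to kpAppeared,
// this time drawing on imageAfter.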



Answer 2:

Here's what I tried:

  • Detect regions that have undergone a change. For this I use simple frame differencing, thresholding, morphological operations and a convex hull.
  • Find feature points of these regions in both images and see if they match. A good match in a region indicates that it hasn't undergone a significant change. A bad match means the two regions are now different. For this I use BOW and the Bhattacharyya distance.

The parameters may need tuning. I've used values that just worked for the two sample images. As feature detector/descriptor I've used SIFT (non-free). You can try other detectors and descriptors.

Difference Image:

Regions:

Changes (Red: insertion/removal, Yellow: sparse motion):

// for non-free modules SIFT/SURF
cv::initModule_nonfree();

Mat im1 = imread("1.png");
Mat im2 = imread("2.png");

// downsample
/*pyrDown(im1, im1);
pyrDown(im2, im2);*/

Mat disp = im1.clone() * .5 + im2.clone() * .5;
Mat regions = Mat::zeros(im1.rows, im1.cols, CV_8U);

// gray scale
Mat gr1, gr2;
cvtColor(im1, gr1, CV_BGR2GRAY);
cvtColor(im2, gr2, CV_BGR2GRAY);
// simple frame differencing
Mat diff;
absdiff(gr1, gr2, diff);
// threshold the difference to obtain the regions having a change
Mat bw;
adaptiveThreshold(diff, bw, 255, CV_ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY_INV, 15, 5);
// some post processing
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
morphologyEx(bw, bw, MORPH_CLOSE, kernel, Point(-1, -1), 4);
// find contours in the change image
Mat cont = bw.clone();
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(cont, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, Point(0, 0));
// feature detector, descriptor and matcher
Ptr<FeatureDetector> featureDetector = FeatureDetector::create("SIFT");
Ptr<DescriptorExtractor> descExtractor = DescriptorExtractor::create("SIFT");
Ptr<DescriptorMatcher> descMatcher = DescriptorMatcher::create("FlannBased");

if( featureDetector.empty() || descExtractor.empty() || descMatcher.empty() )
{
    cout << "featureDetector or descExtractor or descMatcher was not created" << endl;
    exit(0);
}
// BOW
Ptr<BOWImgDescriptorExtractor> bowExtractor = new BOWImgDescriptorExtractor(descExtractor, descMatcher);

int vocabSize = 10;
TermCriteria terminate_criterion;
terminate_criterion.epsilon = FLT_EPSILON;
BOWKMeansTrainer bowTrainer( vocabSize, terminate_criterion, 3, KMEANS_PP_CENTERS );

Mat mask(bw.rows, bw.cols, CV_8U);
for(size_t j = 0; j < contours.size(); j++)
{
    // discard regions that are below a specific size threshold
    Rect rect = boundingRect(contours[j]);
    if ((double)(rect.width * rect.height) / (bw.rows * bw.cols) < .01)
    {
        continue; // skip this region as it's too small
    }
    // prepare a mask for each region
    mask.setTo(0);
    vector<Point> hull;
    convexHull(contours[j], hull);
    fillConvexPoly(mask, hull, Scalar::all(255), 8, 0);

    fillConvexPoly(regions, hull, Scalar::all(255), 8, 0);

    // extract keypoints from the region
    vector<KeyPoint> im1Keypoints, im2Keypoints;
    featureDetector->detect(im1, im1Keypoints, mask);
    featureDetector->detect(im2, im2Keypoints, mask);
    // get their descriptors
    Mat im1Descriptors, im2Descriptors;
    descExtractor->compute(im1, im1Keypoints, im1Descriptors);
    descExtractor->compute(im2, im2Keypoints, im2Descriptors);

    if ((0 == im1Keypoints.size()) || (0 == im2Keypoints.size()))
    {
        // mark this contour as object arrival/removal region
        drawContours(disp, contours, j, Scalar(0, 0, 255), 2);
        continue;
    }

    // bag-of-visual-words
    Mat vocabulary = bowTrainer.cluster(im1Descriptors);
    bowExtractor->setVocabulary( vocabulary );
    // get the distribution of visual words in the region for both images
    vector<vector<int> > idx1, idx2;
    bowExtractor->compute(im1, im1Keypoints, im1Descriptors, &idx1);
    bowExtractor->compute(im2, im2Keypoints, im2Descriptors, &idx2);
    // compare the distributions
    Mat hist1 = Mat::zeros(vocabSize, 1, CV_32F);
    Mat hist2 = Mat::zeros(vocabSize, 1, CV_32F);

    for (int i = 0; i < vocabSize; i++)
    {
        hist1.at<float>(i) = (float)idx1[i].size();
        hist2.at<float>(i) = (float)idx2[i].size();
    }
    normalize(hist1, hist1);
    normalize(hist2, hist2);
    double comp = compareHist(hist1, hist2, CV_COMP_BHATTACHARYYA);

    cout << comp << endl;
    // low BHATTACHARYYA distance means a good match of features in the two regions
    if ( comp < .2 )
    {
        // mark this contour as a region having sparse motion
        drawContours(disp, contours, j, Scalar(0, 255, 255), 2);
    }
    else
    {
        // mark this contour as object arrival/removal region
        drawContours(disp, contours, j, Scalar(0, 0, 255), 2);
    }
}


Answer 3:

You could try a two-pronged approach. The image-difference method is great at detecting objects which enter and exit the scene, so long as the colour of the object is different from the colour of the background. What strikes me is that it would be improved greatly if you could remove the objects that have moved before applying the method.

There is a great OpenCV method for object detection here which finds points of interest in an image for detecting the translation of an object. I think you could achieve what you want with the following method:

1. Compare the images with the OpenCV code and highlight the moving objects in both images.

2. Colour in the detected objects with the background of the other picture at the same set of pixels (or something similar), to reduce the difference between the images that is caused by the moving objects (a rough sketch of steps 2-4 follows this list).

3. Find the image difference, which should now contain the large major objects plus smaller artefacts left over from the moving objects.

4. Threshold for a certain size of object detected in the image difference.

5. Compile a list of likely candidates.
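A rough sketch of steps 2-4, assuming step 1 has already produced a binary mask of the moved objects (movedMask is a hypothetical name, and the file names are placeholders):

Mat before = imread("before.png");
Mat after  = imread("after.png");
Mat movedMask; // CV_8U mask of the moved objects, filled by step 1 (not shown)

// Step 2: overwrite the moved regions in 'after' with the corresponding pixels
// from 'before', so they no longer contribute to the difference.
Mat afterPatched = after.clone();
before.copyTo(afterPatched, movedMask);

// Step 3: difference image with the moved objects suppressed.
Mat diff, diffGray, bw;
absdiff(afterPatched, before, diff);
cvtColor(diff, diffGray, CV_BGR2GRAY);
threshold(diffGray, bw, 30, 255, THRESH_BINARY);

// Step 4: keep only blobs above a minimum area as likely appeared/disappeared objects.
vector<vector<Point> > contours;
findContours(bw.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
for (size_t i = 0; i < contours.size(); i++) {
    if (contourArea(contours[i]) > 500) // size threshold, needs tuning
        drawContours(after, contours, (int)i, Scalar(0, 0, 255), 2);
}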

There are other alternatives for object tracking, so there may be code you like more, but the process should be okay for what you are doing, I think.