Background subtraction and optical flow for tracking

Posted 2019-05-04 17:20

I am working on a project to detect objects of interest using background subtraction and track them with optical flow in OpenCV (C++). I was able to detect the object of interest using background subtraction, and I was able to implement OpenCV's Lucas-Kanade optical flow in a separate program. However, I am stuck on how to combine these two programs into one. In the code below, frame1 holds the current frame from the video, and contours2 holds the selected contours of the foreground object.

To summarize: how do I feed the foreground object obtained from the background subtraction step into calcOpticalFlowPyrLK? Or, if my approach is wrong, please point me in the right direction. Thank you in advance.

Mat mask = Mat::zeros(fore.rows, fore.cols, CV_8UC1);
drawContours(mask, contours2, -1, Scalar(255), CV_FILLED);

if (first_frame)
{
    goodFeaturesToTrack(mask, features_next, 1000, 0.01, 10, noArray(), 3, false, 0.04);
    fm0 = mask.clone();
    features_prev = features_next;
    first_frame = false;
}
else
{
    features_next.clear();
    if (!features_prev.empty())
    {
        calcOpticalFlowPyrLK(fm0, mask, features_prev, features_next, featuresFound, err, winSize, 3, termcrit, 0, 0.001);
        for (int i = 0; i < features_prev.size(); i++)
            line(frame1, features_prev[i], features_next[i], CV_RGB(0, 0, 255), 1, 8);
        imshow("final optical", frame1);
        waitKey(1);
    }
    goodFeaturesToTrack(mask, features_next, 1000, 0.01, 10, noArray(), 3, false, 0.04);
    features_prev = features_next;
    fm0 = mask.clone();
}

2 Answers
爱情/是我丢掉的垃圾
Answer #2 · 2019-05-04 17:47

Your approach of using optical flow on the mask for tracking is wrong. The idea behind the optical flow approach is that a moving point has the same pixel intensity at its start and end positions in two consecutive images. That means the motion of a feature is estimated by observing its appearance in the first image and searching for the same structure in the second image (very simplified).

calcOpticalFlowPyrLK is a point tracker: points in the previous image are tracked into the current one. The method therefore needs the original gray-valued images of your scene, because it can only estimate motion in structured / textured regions (you need x and y gradients in your image).
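In practice this means the asker's mask should only restrict *where* features are picked, while the tracking itself runs on the gray frames. A minimal sketch (the names `prev_gray`, `curr_gray`, `fg_mask`, and `vis` are placeholders, not from the original code):

```cpp
// Sketch: detect corners on the real gray image, restricted to the
// foreground via the `mask` parameter of goodFeaturesToTrack, then
// track them on the gray frames (NOT on the mask) with LK.
#include <opencv2/imgproc.hpp>
#include <opencv2/video/tracking.hpp>
#include <vector>

void trackForeground(const cv::Mat& prev_gray, const cv::Mat& curr_gray,
                     const cv::Mat& fg_mask, cv::Mat& vis)
{
    std::vector<cv::Point2f> prev_pts, next_pts;
    // Corners come from image texture, but only inside the foreground mask.
    cv::goodFeaturesToTrack(prev_gray, prev_pts, 1000, 0.01, 10, fg_mask);
    if (prev_pts.empty())
        return;

    std::vector<uchar> status;
    std::vector<float> err;
    // Track the corners from the previous gray frame into the current one.
    cv::calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, next_pts,
                             status, err);

    // Draw the successfully tracked motion vectors.
    for (size_t i = 0; i < prev_pts.size(); ++i)
        if (status[i])
            cv::line(vis, prev_pts[i], next_pts[i], cv::Scalar(255, 0, 0));
}
```

The key design point: the mask is nearly textureless (flat white regions), so running goodFeaturesToTrack or calcOpticalFlowPyrLK directly on it, as in the question's code, gives the tracker no gradients to lock onto.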

I think your code should do something like this:

  1. Extract objects by background subtraction (by contour); in the literature such a region is called a blob.
  2. Extract the objects in the next image and apply blob association (which contour belongs to which); this is also called blob tracking. It is possible to do blob tracking with calcOpticalFlowPyrLK, e.g. in a very simple way:
  3. Track points from the contour, or points inside the blob.
  4. Association: a previous contour corresponds to a current one if the tracked points that belonged to the previous contour end up inside the current contour.
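Step 4 above can be sketched in plain C++ without OpenCV. This is an illustrative simplification (blobs reduced to bounding boxes; `Pt`, `Box`, and `associate` are invented names): each previous blob is matched to the current blob whose region captures the most of its tracked endpoints. With real contours you would use cv::pointPolygonTest instead of the box test.

```cpp
// Blob association sketch: a previous blob is assigned to the current
// blob that contains the largest number of its LK-tracked endpoints.
#include <cassert>
#include <cstddef>
#include <vector>

struct Pt { float x, y; };

struct Box {
    float x0, y0, x1, y1;
    bool contains(Pt p) const {
        return p.x >= x0 && p.x <= x1 && p.y >= y0 && p.y <= y1;
    }
};

// Returns the index of the current blob capturing the most tracked points,
// or -1 if no blob contains any point (the object left the scene).
int associate(const std::vector<Pt>& tracked,
              const std::vector<Box>& currentBlobs)
{
    int best = -1;
    std::size_t bestHits = 0;
    for (std::size_t b = 0; b < currentBlobs.size(); ++b) {
        std::size_t hits = 0;
        for (const Pt& p : tracked)
            if (currentBlobs[b].contains(p))
                ++hits;
        if (hits > bestHits) {
            bestHits = hits;
            best = static_cast<int>(b);
        }
    }
    return best;
}
```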
看我几分像从前
Answer #3 · 2019-05-04 18:03

I think the output of background subtraction in OpenCV is a binary foreground mask, not a gray-scale intensity image. As input for optical flow we need the gray-scale images themselves.
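If the video frames arrive as BGR color images, the conversion before handing them to calcOpticalFlowPyrLK is a one-liner (a sketch; `frame` is assumed to be a BGR frame from the asker's capture loop):

```cpp
// Convert a BGR frame to the single-channel gray image that
// goodFeaturesToTrack and calcOpticalFlowPyrLK expect.
#include <opencv2/imgproc.hpp>

cv::Mat toGray(const cv::Mat& frame)
{
    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    return gray;
}
```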
