Video stabilization using OpenCV

Posted 2019-04-12 20:49

Question:

I am trying to do video stabilization with OpenCV (without the OpenCV video stabilization class).

The steps of my algorithm are as follows:

  1. SURF point extraction,

  2. Matching,

  3. Homography matrix,

  4. warpPerspective

And the output video is not stabilized at all :(. It just looks like the original video. I could not find any reference code for video stabilization. I followed the procedure described here. Can anybody help me out by telling me where I am going wrong, or point me to some source code that would help me improve my algorithm?
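For reference, a stripped-down sketch of that pipeline in OpenCV C++ (SURF lives in the opencv_contrib xfeatures2d module; the helper name, SURF threshold, and matcher choice below are placeholders rather than my exact code):

#include "opencv2/opencv.hpp"
#include "opencv2/xfeatures2d.hpp"

using namespace cv;
using namespace std;

// warp the current frame onto the previous one via SURF matches (placeholder helper)
Mat stabilizeWithSurf(const Mat& prev_gray, const Mat& curr_gray, const Mat& curr_color)
{
    Ptr<Feature2D> surf = xfeatures2d::SURF::create(400);            // 1. SURF point extraction
    vector<KeyPoint> kp_prev, kp_curr;
    Mat desc_prev, desc_curr;
    surf->detectAndCompute(prev_gray, noArray(), kp_prev, desc_prev);
    surf->detectAndCompute(curr_gray, noArray(), kp_curr, desc_curr);

    BFMatcher matcher(NORM_L2);                                       // 2. matching
    vector<DMatch> matches;
    matcher.match(desc_prev, desc_curr, matches);

    vector<Point2f> pts_prev, pts_curr;
    for (const DMatch& m : matches)
    {
        pts_prev.push_back(kp_prev[m.queryIdx].pt);
        pts_curr.push_back(kp_curr[m.trainIdx].pt);
    }

    // 3. homography mapping the current frame onto the previous one
    Mat H = findHomography(pts_curr, pts_prev, RANSAC);

    // 4. warpPerspective with that homography aligns the current frame to the previous frame
    Mat stabilized;
    warpPerspective(curr_color, stabilized, H, curr_color.size());
    return stabilized;
}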

Please help. Thank you

Answer 1:

You can use my code snippet as a starting point (not very stable, but it seems to work):

#include "opencv2/opencv.hpp"
#include <iostream>
#include <vector>
#include <stdio.h>

using namespace cv;
using namespace std;

int main(int ac, char** av)
{
    VideoCapture capture(0);
    namedWindow("Cam");
    namedWindow("Camw");
    Mat frame;
    Mat frame_edg;
    Mat prev_frame;
    int k=0;
    Mat Transform;
    Mat Transform_avg=Mat::eye(2,3,CV_64FC1);
    Mat warped;
    while(k!=27)
    {
        capture >> frame;
        cv::cvtColor(frame,frame,cv::COLOR_BGR2GRAY);
        cv::equalizeHist(frame,frame);
        cv::Canny(frame,frame_edg,64,64);
        //frame=frame_edg.clone();
        imshow("Cam_e",frame_edg);
        imshow("Cam",frame);

        if(!prev_frame.empty())
        {
            // estimate a rigid transform that maps the current frame onto the reference frame
            Transform=estimateRigidTransform(frame,prev_frame,false);
            if(!Transform.empty())
            {
                // keep only the translation: overwrite the 2x2 rotation/scale block with identity
                // (use copyTo -- assigning a new Mat to a sub-matrix header does not write the data)
                Mat::eye(2,2,CV_64FC1).copyTo(Transform(Range(0,2),Range(0,2)));
                // running average damps the frame-to-frame jitter
                Transform_avg+=(Transform-Transform_avg)/2.0;
                warpAffine(frame,warped,Transform_avg,Size(frame.cols,frame.rows));

                imshow("Camw",warped);
            }
        }

        if(prev_frame.empty())
        {
            // the first frame is the reference everything is stabilized against
            prev_frame=frame.clone();
        }

        k=waitKey(20);      
    }
    cv::destroyAllWindows();
    return 0;
}

You can also look for the paper Chen_Halawa_Pang_FastVideoStabilization.pdf; as I remember, MATLAB source code was supplied with it.



Answer 2:

In your "warpAffine(frame,warped,Transform_avg,Size( frame.cols, frame.rows));" function, you must specify FLAG as WARP_INVERSE_MAP for stabilization.

Sample code I have written:

#include "opencv2/opencv.hpp"
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    Mat src, prev, curr, rigid_mat, dst;

    VideoCapture cap("test_a3.avi");

    while (1)
    {
        bool bSuccess = cap.read(src);
        if (!bSuccess) // if not success, break loop
        {
            cout << "Cannot read the frame from video file" << endl;
            break;
        }

        cvtColor(src, curr, COLOR_BGR2GRAY);

        if (prev.empty())
        {
            prev = curr.clone();
        }

        // estimate the rigid motion from the previous (stabilized) frame to the current frame
        rigid_mat = estimateRigidTransform(prev, curr, false);

        if (rigid_mat.empty())
        {
            // estimation can fail on low-texture frames; pass the frame through unchanged
            dst = src.clone();
        }
        else
        {
            // WARP_INVERSE_MAP applies the inverse of that motion, which cancels the shake
            warpAffine(src, dst, rigid_mat, src.size(), INTER_NEAREST | WARP_INVERSE_MAP, BORDER_CONSTANT);
        }

        imshow("input", src);
        imshow("output", dst);

        // the stabilized output becomes the reference for the next iteration
        Mat dst_gray;
        cvtColor(dst, dst_gray, COLOR_BGR2GRAY);
        prev = dst_gray.clone();

        waitKey(30);
    }

    return 0;
}

Hoping this will solve your problem :)



Answer 3:

SURF is not that fast. The way I work is with optical flow. First you have to calculate good features in your first frame with the GoodFeaturesToTrack() function. After that I do some refinement with the FindCornerSubPix() function.
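In plain OpenCV C++, those two calls look roughly like this (the helper name, corner count, quality level, and window sizes are illustrative, not values from this answer):

#include "opencv2/opencv.hpp"

using namespace cv;
using namespace std;

// detect and refine trackable corners in the start frame (placeholder helper name)
vector<Point2f> detectCorners(const Mat& first_gray)
{
    vector<Point2f> pts;
    goodFeaturesToTrack(first_gray, pts, 200, 0.01, 10.0);           // pick strong corners to track
    cornerSubPix(first_gray, pts, Size(5, 5), Size(-1, -1),          // refine them to sub-pixel accuracy
                 TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 30, 0.01));
    return pts;
}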

Now that you have the feature points in your start frame, the next thing to do is determine the optical flow. There are several optical flow functions, but the one I use is OpticalFlow.PyrLK(); one of its output parameters gives you the feature points in the current frame. With those you can calculate the homography matrix with the FindHomography() function. Next you have to invert this matrix (the explanation is easy to find with Google), and then you call the WarpPerspective() function to stabilize your frame.
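Continuing the sketch above (same includes, and again with placeholder names and parameters), the flow / homography / inverse-warp part might look like:

// track the corners into the current frame and warp it back (placeholder helper name)
Mat stabilizeWithFlow(const Mat& first_gray, const vector<Point2f>& first_pts,
                      const Mat& curr_gray, const Mat& curr_color)
{
    vector<Point2f> curr_pts;
    vector<uchar> status;
    vector<float> err;
    calcOpticalFlowPyrLK(first_gray, curr_gray, first_pts, curr_pts, status, err);

    // keep only the points that were tracked successfully
    vector<Point2f> p0, p1;
    for (size_t i = 0; i < status.size(); i++)
        if (status[i]) { p0.push_back(first_pts[i]); p1.push_back(curr_pts[i]); }

    // homography from the start frame to the current frame, then apply its inverse
    Mat H = findHomography(p0, p1, RANSAC);
    Mat H_inv = H.inv();
    Mat stabilized;
    warpPerspective(curr_color, stabilized, H_inv, curr_color.size());
    return stabilized;
}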

PS: The functions I put here were from EmguCV, the .NET wrapper for OpenCV, so there may be some differences.