OpenCV: how to categorize GMM-calculated probs

Posted 2019-08-22 11:03

I am using OpenCV's EM algorithm to fit GMM models, based on the example code in the OpenCV documentation, as follows:

#include <cmath>
#include <opencv2/core/core.hpp>
#include <opencv2/ml/ml.hpp>   // CvEM/CvEMParams live in legacy.hpp on OpenCV 2.4+

cv::Mat capturedFrame;               // frame assumed to be filled elsewhere
const int N  = 5;                    // number of mixture components
const int N1 = (int)sqrt((double)N); // grid used to spread the component means
int nsamples = 100;
cv::Mat labels;
cv::Mat samples ( nsamples, 2, CV_32FC1 );
samples = samples.reshape ( 2, 0 );
cv::Mat sample ( 1, 2, CV_32FC1 );
CvEM em_model;
CvEMParams params;

// generate the training samples from N Gaussian blobs
for ( int i = 0; i < N; i++ )
{
    cv::Mat samples_part = samples.rowRange ( i*nsamples/N, (i+1)*nsamples/N );
    // the OpenCV example spreads the means over a square image, hence rows twice
    cv::Scalar mean ( ((i%N1)+1)*capturedFrame.rows/(N1+1),
                      ((i/N1)+1)*capturedFrame.rows/(N1+1) );
    cv::Scalar sigma ( 30, 30 );
    cv::randn ( samples_part, mean, sigma );
}
samples = samples.reshape ( 1, 0 );

// initialize model parameters
params.covs         = NULL;
params.means        = NULL;
params.weights      = NULL;
params.probs        = NULL;
params.nclusters    = N;
params.cov_mat_type = CvEM::COV_MAT_SPHERICAL;
params.start_step   = CvEM::START_AUTO_STEP;
params.term_crit.max_iter = 300;
params.term_crit.epsilon  = 0.1;
params.term_crit.type     = CV_TERMCRIT_ITER|CV_TERMCRIT_EPS;

// cluster the data
em_model.train ( samples, cv::Mat(), params, &labels );

Being new to GMM and OpenCV, I now have some questions:

First, after running the code above, I can get the probs like this:

cv::Mat probs = em_model.getProbs();

How can I find the components that contain the most and the fewest samples, that is, the biggest and smallest models?

Second, my sample data here is only 100 points, as in the OpenCV example code, but I am reading frames of size 600x800 and I want to sample all of the pixels in them, which is 480000 samples. Training already takes about 10 ms for these 100 samples, so it would be far too slow if I set:

int nsamples = 480000;

Am I on the right track here?
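
For reference, this is how I would build that full per-pixel sample matrix (my own sketch, not from the OpenCV example; it assumes one sample per pixel of a CV_8UC3 frame and clustering on color):

cv::Mat float_img;
capturedFrame.convertTo ( float_img, CV_32F );  // EM needs floating-point samples
// one row per pixel, one column per channel (B, G, R)
cv::Mat pixel_samples = float_img.reshape ( 1, capturedFrame.rows * capturedFrame.cols );
// pixel_samples.rows == 480000 for a 600x800 frame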

1 Answer
forever°为你锁心 · answered 2019-08-22 11:20

If I understand your question correctly, what you call your "biggest" and "smallest" models refers to the weight of each Gaussian in the mixture. You can get the weights associated with the Gaussians using CvEM::getWeights.
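
For example (a rough sketch using the CvEM accessors from your code; I am assuming the returned weight matrix is single-channel so cv::minMaxLoc applies directly):

cv::Mat weights = em_model.getWeights();   // one weight per mixture component
double minW, maxW;
cv::Point minLoc, maxLoc;
cv::minMaxLoc ( weights, &minW, &maxW, &minLoc, &maxLoc );
int smallest = minLoc.x;   // component with the smallest share of the data
int biggest  = maxLoc.x;   // component with the largest share of the data

Alternatively, if you want the literal number of training samples per component, count how many entries of the labels matrix returned by train fall into each component.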

Concerning the second question: if you train your model on 480000 samples instead of 100, yes, it will definitely take longer. Whether it is "too slow" depends on your requirements. EM is a classification model, so the usual workflow is to train the model once on a sufficient number of samples. This is a long process, but it is usually done offline. Then you can use the model to predict new samples, i.e. get the probabilities associated with new input samples. When you call getProbs(), you get the probabilities associated with your training samples; if you want probabilities for unknown samples, typically the pixels of your video frame, call predict instead.
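
For example, something along these lines (an untested sketch; it assumes the model was trained on 2-D samples as in your code, so each pixel position is fed to predict as one sample):

cv::Mat query ( 1, 2, CV_32FC1 );
for ( int y = 0; y < capturedFrame.rows; y++ )
{
    for ( int x = 0; x < capturedFrame.cols; x++ )
    {
        query.at<float> ( 0 ) = (float)x;
        query.at<float> ( 1 ) = (float)y;
        cv::Mat probs;                                   // per-component posteriors
        int component = (int)em_model.predict ( query, &probs );
        // use 'component' / 'probs' for this pixel here
    }
}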
