The OpenCV documentation mentions the function "train()" on DescriptorMatcher.
"virtual void cv::cuda::DescriptorMatcher::train ( )
pure virtual
Trains a descriptor matcher.
Trains a descriptor matcher (for example, the flann index). In all methods to match, the method train() is run every time before matching."(docs)
That's all that is said there. Does someone know how it works? In particular, what does the DescriptorMatcher need in order to train itself? A short example in some OOP language would be amazing.
Here is the link to the documentation:
http://docs.opencv.org/master/dd/dc5/classcv_1_1cuda_1_1DescriptorMatcher.html#ab220b434f827962455f430a12c65c074
Thanks in advance
You can see the matcher's code here
Trains a descriptor matcher (for example, the flann index). In all methods to match, the method train() is run every time before matching.
Yes, as you can see from the code, train() is called in the matching functions:
void DescriptorMatcher::knnMatch( InputArray queryDescriptors, std::vector<std::vector<DMatch> >& matches, int knn,
                                  InputArrayOfArrays masks, bool compactResult )
{
    if( empty() || queryDescriptors.empty() )
        return;
    CV_Assert( knn > 0 );

    checkMasks( masks, queryDescriptors.size().height );

    train();
    knnMatchImpl( queryDescriptors, matches, knn, masks, compactResult );
}
void DescriptorMatcher::radiusMatch( InputArray queryDescriptors, std::vector<std::vector<DMatch> >& matches, float maxDistance,
                                     InputArrayOfArrays masks, bool compactResult )
{
    matches.clear();
    if( empty() || queryDescriptors.empty() )
        return;
    CV_Assert( maxDistance > std::numeric_limits<float>::epsilon() );

    checkMasks( masks, queryDescriptors.size().height );

    train();
    radiusMatchImpl( queryDescriptors, matches, maxDistance, masks, compactResult );
}
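The practical consequence is that you normally never call train() yourself. Here is a minimal sketch of my own (not from the OpenCV docs; the file names query.png and train.png are placeholders) using ORB and BFMatcher, where train() runs implicitly inside match():

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    // Placeholder file names, just for illustration
    cv::Mat img1 = cv::imread("query.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("train.png", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    cv::BFMatcher matcher(cv::NORM_HAMMING);
    matcher.add(std::vector<cv::Mat>{desc2});   // store the train descriptors

    std::vector<cv::DMatch> matches;
    matcher.match(desc1, matches);              // train() is called inside match() before matching
    return 0;
}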
When you call match(), it will in fact call knnMatch() with knn = 1:
void DescriptorMatcher::match( InputArray queryDescriptors, std::vector<DMatch>& matches, InputArrayOfArrays masks )
{
    std::vector<std::vector<DMatch> > knnMatches;
    knnMatch( queryDescriptors, knnMatches, 1, masks, true /*compactResult*/ );
    convertMatches( knnMatches, matches );
}
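If you want more than the single best match per descriptor, you can call knnMatch() directly. A small sketch of my own (the helper name ratioMatch and the 0.75 threshold are just illustrative) using knn = 2 and Lowe's ratio test:

#include <opencv2/features2d.hpp>
#include <vector>

// Illustrative helper: knnMatch() with k = 2 plus a ratio test,
// instead of the knn = 1 shortcut that match() uses.
std::vector<cv::DMatch> ratioMatch(const cv::Mat& queryDesc,
                                   const cv::Mat& trainDesc,
                                   float ratio = 0.75f)
{
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch> > knnMatches;
    matcher.knnMatch(queryDesc, trainDesc, knnMatches, 2);   // two nearest neighbours per query descriptor

    std::vector<cv::DMatch> good;
    for (const std::vector<cv::DMatch>& pair : knnMatches)
        if (pair.size() == 2 && pair[0].distance < ratio * pair[1].distance)
            good.push_back(pair[0]);
    return good;
}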
The base implementation of train() does nothing:
void DescriptorMatcher::train()
{}
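So for BFMatcher, which does not override train(), calling it explicitly is harmless but changes nothing. A minimal sketch of my own to make that concrete:

#include <opencv2/features2d.hpp>
#include <vector>

// BFMatcher inherits the empty base train(), so the explicit call below is a no-op;
// the result is identical with or without it.
void bruteForceExample(const cv::Mat& queryDesc, const cv::Mat& trainDesc)
{
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    matcher.add(std::vector<cv::Mat>{trainDesc});
    matcher.train();                        // does nothing for BFMatcher

    std::vector<cv::DMatch> matches;
    matcher.match(queryDesc, matches);      // all the work happens here
}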
Only FlannBasedMatcher overrides train():
void FlannBasedMatcher::train()
{
    if( !flannIndex || mergedDescriptors.size() < addedDescCount )
    {
        // FIXIT: Workaround for 'utrainDescCollection' issue (PR #2142)
        if (!utrainDescCollection.empty())
        {
            CV_Assert(trainDescCollection.size() == 0);
            for (size_t i = 0; i < utrainDescCollection.size(); ++i)
                trainDescCollection.push_back(utrainDescCollection[i].getMat(ACCESS_READ));
        }
        mergedDescriptors.set( trainDescCollection );
        flannIndex = makePtr<flann::Index>( mergedDescriptors.getDescriptors(), *indexParams );
    }
}
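So with FlannBasedMatcher, an explicit train() call simply builds the flann::Index up front. A minimal sketch of my own, assuming the descriptors are CV_32F (e.g. SIFT/SURF), which is what the default KD-tree index parameters expect:

#include <opencv2/features2d.hpp>
#include <vector>

void flannExample(const cv::Mat& queryDesc, const std::vector<cv::Mat>& trainDescs)
{
    cv::FlannBasedMatcher matcher;
    matcher.add(trainDescs);   // collect the train descriptor collection
    matcher.train();           // builds the flann::Index now rather than on the first match()

    std::vector<cv::DMatch> matches;
    matcher.match(queryDesc, matches);   // the internal train() call sees an up-to-date index and does nothing
}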
For an example of how to use FlannBasedMatcher, you can refer to the OpenCV documentation example.
You can refer to this answer to see what is done in the training phase. In short, you're building the index for the matcher. You can find the source code here
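To illustrate what "building the index" means, here is a rough sketch of my own (not from the linked answer) that uses cv::flann::Index directly, which is roughly what FlannBasedMatcher::train() sets up internally for CV_32F descriptors:

#include <opencv2/core.hpp>
#include <opencv2/flann.hpp>

void flannIndexSketch(const cv::Mat& trainDesc, const cv::Mat& queryDesc)
{
    // The "training" step: build a KD-tree index over the train descriptors
    cv::flann::Index index(trainDesc, cv::flann::KDTreeIndexParams(4));

    // Matching then boils down to nearest-neighbour queries against that index
    cv::Mat indices, dists;
    index.knnSearch(queryDesc, indices, dists, 1, cv::flann::SearchParams());
}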