image processing - FernDescriptorMatch - how to use it? How does it work?


How do I use the Fern descriptor matcher in OpenCV? Does it take as input keypoints extracted by some other algorithm (SIFT/SURF?), or does it calculate them itself?

Edit:
I'm trying to apply it to a database of images:

fernMatcher->add(all_images, all_keypoints);
fernMatcher->train();

There are 20 images, less than 8 MB in total, and I extract keypoints using SURF. Memory usage jumps to 2.6 GB and training takes who knows how long...
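For context, here is a minimal sketch of that add/train/match flow against a small database, assuming the OpenCV 2.x C++ API (SurfFeatureDetector, FernDescriptorMatcher and the GenericDescriptorMatcher interface); the file names and the per-image keypoint cap are placeholders, not from the original post. Fern trains a classifier from many randomly warped views of every keypoint patch, so memory and training time grow quickly with the number of keypoints; capping keypoints per image is the simplest way to keep that bounded.

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/legacy/legacy.hpp>   // FernDescriptorMatcher lives here in later 2.x releases
#include <cstdio>
#include <vector>

using namespace cv;

int main()
{
    std::vector<Mat> all_images;
    std::vector<std::vector<KeyPoint> > all_keypoints;

    SurfFeatureDetector detector(400.0);   // SURF with a moderate Hessian threshold

    char name[32];
    for (int i = 0; i < 20; ++i)
    {
        sprintf(name, "train_%02d.png", i);                   // placeholder file names
        Mat img = imread(name, CV_LOAD_IMAGE_GRAYSCALE);
        if (img.empty()) continue;

        std::vector<KeyPoint> kp;
        detector.detect(img, kp);

        // Keep only the strongest keypoints; Fern has to learn a patch classifier
        // for every keypoint, so this directly limits memory and training time.
        KeyPointsFilter::retainBest(kp, 200);

        all_images.push_back(img);
        all_keypoints.push_back(kp);
    }

    FernDescriptorMatcher matcher;         // default parameters
    matcher.add(all_images, all_keypoints);
    matcher.train();                       // this is the expensive step

    // Query a new image against the trained database.
    Mat query = imread("query.png", CV_LOAD_IMAGE_GRAYSCALE); // placeholder name
    std::vector<KeyPoint> query_kp;
    detector.detect(query, query_kp);

    std::vector<DMatch> matches;
    matcher.match(query, query_kp, matches);  // imgIdx of each DMatch tells which training image it hit

    return 0;
}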

Fern is no different from the rest of the matchers. Here is sample code using the Fern keypoint descriptor matcher.

int octaves = 3;
int octaveLayers = 2;
bool upright = false;
double hessianThreshold = 0;

std::vector<KeyPoint> keypoints_1, keypoints_2;
SurfFeatureDetector detector1(hessianThreshold, octaves, octaveLayers, upright);
detector1.detect(image1, keypoints_1);
detector1.detect(image2, keypoints_2);

std::vector<DMatch> matches;
FernDescriptorMatcher matcher;
matcher.match(image1, keypoints_1, image2, keypoints_2, matches);

// Draw the matches between the two input images.
Mat img_matches;
drawMatches(image1, keypoints_1, image2, keypoints_2, matches, img_matches,
            Scalar::all(-1), Scalar::all(-1), std::vector<char>(),
            DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
imshow("Fern matches", img_matches);
waitKey(0);

But my suggestion is to use FAST, which is faster compared to Fern. Also, Fern can be used to train on a set of images with their keypoints, and the trained Fern can then be used as a classifier for the others.
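As a sketch of that suggestion, FAST can replace SURF for detection, again assuming the OpenCV 2.x C++ API; the threshold value here is arbitrary, and the matcher (Fern or otherwise) would consume the resulting keypoints as before.

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

using namespace cv;

// Detect keypoints with FAST; detection is much cheaper than SURF,
// while Fern (or any other matcher) can still be trained on the resulting keypoints.
std::vector<KeyPoint> detectFast(const Mat& image)
{
    std::vector<KeyPoint> keypoints;
    FastFeatureDetector detector(30 /* intensity threshold, arbitrary */, true /* non-max suppression */);
    detector.detect(image, keypoints);
    return keypoints;
}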

