I am currently working on an object classification problem. My objective is to use SURF descriptors to train an MLP-based artificial neural network in OpenCV and generate a model for object classification. So far, I have achieved the following:
I compute SURF keypoints using the following code:
// SURF lives in the nonfree module: #include <opencv2/nonfree/features2d.hpp>
vector<KeyPoint> computeSURFKeypoints(const Mat& image) {
    // hessianThreshold = 400, nOctaves = 4, nOctaveLayers = 2,
    // extended = true, upright = false
    SurfFeatureDetector surfdetector(400, 4, 2, true, false);
    vector<KeyPoint> keypoints;
    surfdetector.detect(image, keypoints);
    return keypoints;
}
I compute the SURF descriptors over these keypoints using the following code:
Mat computeSURFDescriptors(const Mat& image, vector<KeyPoint> keypoints) {
    // Note: a default-constructed extractor uses SURF's own default
    // parameters (64-element descriptors), not the detector settings above
    SurfDescriptorExtractor extractor;
    Mat descriptors;
    extractor.compute(image, keypoints, descriptors); // may remove keypoints it cannot describe
    return descriptors;
}
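For context, the two functions are chained like this (a minimal usage sketch; object.png is just a placeholder, and imread is declared in <opencv2/highgui/highgui.hpp>):
Mat image = imread("object.png", CV_LOAD_IMAGE_GRAYSCALE);
vector<KeyPoint> keypoints = computeSURFKeypoints(image);
Mat descriptors = computeSURFDescriptors(image, keypoints);
// descriptors has one 64-element row per keypoint, so its row count varies per image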
The problem I am facing is that the size of the descriptor matrix varies from image to image: it contains one 64-element row for each feature point, so the number of rows depends on how many keypoints are detected. For the purpose of training the neural network, I want the descriptor to have a fixed size. To that end, I am using PCA to reduce the descriptor size as follows:
Mat computePCAProjection(const Mat& descriptors) {
    Mat projection_result;
    // CV_PCA_DATA_AS_COL treats each of the 64 columns as a sample of length
    // N (the keypoint count), so the projection is at most 64 x 64 for any N
    PCA pca(descriptors, Mat(), CV_PCA_DATA_AS_COL, 64);
    pca.project(descriptors, projection_result);
    return projection_result;
}
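For reference, a projection of this fixed shape could then be flattened into one training row per image, along these lines (a sketch; trainingData is a hypothetical accumulator, and it assumes at least 64 keypoints per image so the projection is a full 64 x 64):
Mat trainingData; // accumulates one row per training image
Mat sample = computePCAProjection(descriptors).reshape(1, 1); // 1 x (64*64) row vector
sample.convertTo(sample, CV_32F); // CvANN_MLP trains on CV_32F samples
trainingData.push_back(sample);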
With this PCA step I am able to reduce the dimensions of the descriptor, but the selected feature points are not representative of the image, and they produce poor matching results. How can I reduce the dimension of the descriptor while retaining good feature points? Any help will be appreciated.
I was searching for something else entirely, so I am no expert, but I happen to know that Matlab has a method 'points.selectStrongest(x)', where x is the number of points you want. It picks the points with the strongest metric.
The metric is a property assigned to SURFPoints by the Matlab function 'detectSURFFeatures', which computes it internally via the OpenCV-based function 'vision.internal.buildable.fastHessianDetectorBuildable.fastHessianDetector_uint8'.
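In OpenCV, a similar effect can be sketched with KeyPointsFilter::retainBest from the features2d module, which keeps the keypoints with the largest KeyPoint::response values; applied to the question's helper functions (object.png is again a placeholder):
Mat image = imread("object.png", CV_LOAD_IMAGE_GRAYSCALE);
vector<KeyPoint> keypoints = computeSURFKeypoints(image);
KeyPointsFilter::retainBest(keypoints, 64); // keep the 64 strongest detections
Mat descriptors = computeSURFDescriptors(image, keypoints);
// descriptors is now at most 64 x 64, built from the most distinctive
// keypoints instead of an arbitrary PCA mixture
Because the number of retained points is capped, the descriptor matrix has a predictable size while the strongest detections are kept.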