Clarification regarding the Viola-Jones face detection algorithm


As a side project during a break, I'm trying to implement the algorithm from scratch(ish) in Python.

I have read the original paper and implemented my functions for computing and extracting the Haar features, as well as AdaBoost for feature selection. But I'm not certain my understanding of the algorithm is 100% correct so far; if anyone can guide me, I would be very thankful.

My overall understanding of the preprocessing and feature selection process is as follows:

Preprocessing

  • an equal number of positive and negative images is acquired, with the faces annotated
  • the bounding boxes of the annotations are used to crop the face(s) out of the images
  • images are converted to grayscale
  • images are variance normalized
  • images are resized to 24x24 pixels (a sketch of these steps follows this list)
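This is roughly what I mean by the preprocessing steps above, as a minimal sketch. It assumes OpenCV (`cv2`) and NumPy, that the bounding box is given as `(x, y, width, height)`, and the function name is my own:

```python
import cv2
import numpy as np

def preprocess(image, bbox=None, size=24):
    """Crop to the annotated face (if given), convert to grayscale,
    variance-normalize, and resize to size x size."""
    if bbox is not None:
        x, y, w, h = bbox                      # assumed (x, y, width, height)
        image = image[y:y + h, x:x + w]
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gray -= gray.mean()                        # zero mean
    std = gray.std()
    if std > 0:
        gray /= std                            # unit variance
    return cv2.resize(gray, (size, size), interpolation=cv2.INTER_AREA)
```

Whether to normalize before or after resizing is a choice I made here; I follow the order listed above.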

Haar features

  • Haar features are computed at every position and scale within the 24x24 preprocessed window, using the integral image (see the sketch after this list)
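For reference, here is a minimal sketch of how I understand the integral image and the rectangle sums behind a feature; the helper names and the padded summed-area table are my own choices:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended, so rectangle
    sums need no boundary checks."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y), width w, height h."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_horizontal(ii, x, y, w, h):
    """Example two-rectangle feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```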

Feature selection

  • AdaBoost is run for T iterations using weak classifiers; each weak classifier is trained on one single feature (a sketch of one round follows this list)
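This is how I currently picture one boosting round, as a minimal sketch rather than a definitive implementation. It assumes `features` is an n_samples x n_features matrix of precomputed Haar feature values, `labels` are 0/1, and `weights` sum to 1; the weak classifier is the stump form from the paper (predict face when polarity * value <= polarity * threshold) and the weight update uses beta = error / (1 - error):

```python
import numpy as np

def best_stump(values, labels, weights):
    """For ONE feature's values over all samples, find the threshold and
    polarity with the lowest weighted error (a decision stump)."""
    order = np.argsort(values)
    v, y, w = values[order], labels[order], weights[order]

    total_pos, total_neg = w[y == 1].sum(), w[y == 0].sum()
    pos_below = np.cumsum(w * (y == 1))   # weight of positives at or below each value
    neg_below = np.cumsum(w * (y == 0))

    # error if samples below the threshold are called positive vs. negative
    err_pos = neg_below + (total_pos - pos_below)
    err_neg = pos_below + (total_neg - neg_below)

    errors = np.minimum(err_pos, err_neg)
    i = int(np.argmin(errors))
    polarity = 1 if err_pos[i] <= err_neg[i] else -1
    return v[i], polarity, errors[i]

def adaboost_round(features, labels, weights):
    """One boosting iteration: choose the feature whose best stump has the
    lowest weighted error, then reweight the samples."""
    candidates = (best_stump(features[:, j], labels, weights) + (j,)
                  for j in range(features.shape[1]))
    threshold, polarity, error, j = min(candidates, key=lambda c: c[2])

    preds = (polarity * features[:, j] <= polarity * threshold).astype(int)
    error = max(error, 1e-10)                    # guard against a zero-error stump
    beta = error / (1.0 - error)
    weights = weights * beta ** (1 - np.abs(preds - labels))  # shrink correctly classified
    return j, threshold, polarity, error, weights / weights.sum()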

I tried cropping the images according to the bounding boxes and resized all the positives and negatives to 24x24, but a lot of the features are very distinguishable: I end up with many weak classifiers with 0 error, and upon inspecting the features, the margin between the positives and negatives is very large, which makes me suspect my preprocessing is implemented incorrectly.
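One way I thought of to check that suspicion is to look at the statistics of the preprocessed images directly; a minimal sketch (the function name is mine, and it assumes the preprocessed images are NumPy arrays):

```python
import numpy as np

def check_normalization(images):
    """Sanity check: after variance normalization, every 24x24 training image
    (positive or negative) should have mean ~0 and standard deviation ~1."""
    means = np.array([img.mean() for img in images])
    stds = np.array([img.std() for img in images])
    print(f"means: {means.min():.3f}..{means.max():.3f}, "
          f"stds: {stds.min():.3f}..{stds.max():.3f}")
```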
