Relation between HAAR training image size and w/h parameters


Please help me understand the relation between the sizes of HAAR training positive/negative images and the training width/height parameters.

My training images are about 100x90 pixels (both positives and negatives). But if I try to train the HAAR classifier with -w 100 -h 90, that does not work, because for the "opencv_traincascade" command the w/h values usually lie somewhere around 20-25.
As a result, I create my sample vector with w = h = 20 and then run traincascade with w = h = 20 (even though my images are 100x90). Is this approach correct?

I could downscale my images from 100x90 to 20x20, but I am not sure how the resulting detector will perform on 100x90 or bigger images. I want to do real-time processing of a live camera feed, where the frames are about 1000x800.
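For what it's worth, detection does not require the frames to match the training size: the cascade slides its fixed 20x20 window over an image pyramid, so objects larger than 20x20 in a 1000x800 frame are still found at coarser pyramid levels. A rough sketch of how many scales get searched (the 1.1 scale factor is the common detectMultiScale default, assumed here):

```python
import math

# Training window (what -w/-h define) and a typical live frame size.
win_w, win_h = 20, 20
frame_w, frame_h = 1000, 800
scale_factor = 1.1  # typical detectMultiScale default step

# Largest factor by which an object can exceed the training window
# while still fitting inside the frame.
max_scale = min(frame_w / win_w, frame_h / win_h)

# Number of pyramid levels the detector effectively searches.
num_scales = int(math.log(max_scale) / math.log(scale_factor)) + 1
print(num_scales)
```

So a 20x20 model still covers objects up to the full frame height; what it cannot do is detect objects *smaller* than 20x20 pixels.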

Just for reference, this is my training script:

find PosResize -name "pos_*" > pos_100x50.dat
find NegResize -name "neg_*" > neg_100x50.dat
perl createtrainsamples.pl pos_100x50.dat neg_100x50.dat samples_20x20 500  "opencv_createsamples  -bgcolor 0 -bgthresh 0 -maxxangle 0.1 -maxyangle 0.1 -maxzangle 0.1 -maxidev 40 -w 20 -h 20"
python  ./../../Tools/mergevec-master/mergevec.py -v samples_20x20 -o samples_20x20.vec
opencv_traincascade -data data_20x20 -vec samples_20x20.vec -w 20 -h 20 -bg neg_100x50.dat -numPos 499 -numNeg 586 -numStages 10 -featureType HAAR -mode ALL -precalcValBufSize 512 -precalcIdxBufSize 512  -minHitRate 0.999 -maxFalseAlarmRate 0.1 -maxWeakCount 1000
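As a side note, a rule of thumb often quoted on the OpenCV forums (an approximation, not an official formula) is that numPos should sit a bit below the number of samples in the .vec file, because each stage may discard up to (1 - minHitRate) * numPos positives. A quick sketch with the numbers from the script above:

```python
# Rough safety check for numPos vs. the .vec sample count
# (a community rule of thumb, not an official OpenCV formula).
vec_count = 500        # samples packed into samples_20x20.vec
num_stages = 10
min_hit_rate = 0.999

# Each later stage can consume up to (1 - minHitRate) * numPos
# extra positives, so leave headroom for (num_stages - 1) stages.
safe_num_pos = int(vec_count / (1 + (num_stages - 1) * (1 - min_hit_rate)))
print(safe_num_pos)
```

With 500 samples in the .vec this suggests numPos around 495 rather than 499, which may help avoid "insufficient count" errors in later stages.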

Thanks
