I'm trying to run the OpenVINO Inference Engine sample validation_app. Following the OpenVINO IE sample Validation_app documentation, I've prepared my dataset like this:
<path>/dataset
    /0/image0.bmp
    /1/image1.bmp
When I run validation_app with:
./validation_app -i /home/chan/Desktop/predict_inceptionV3/dataset -m '/home/chan/Desktop/pbModel/IRmodel/PredictModel.xml'
the following warning comes out:
[ INFO ] InferenceEngine:
API version ............ 1.4
Build .................. 19154
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin
API version ............ 1.5
Build .................. lnx_20181004
Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Device: CPU
[ INFO ] Collecting labels
[ INFO ] Starting inference
[ INFO ] Inference report:
Network load time: 57.1487ms
Model: /home/chan/Desktop/pbModel/IRmodel/PredictModel.xml
Model Precision: FP32
Batch size: 1
Validation dataset: /home/chan/Desktop/predict_inceptionV3/dataset
Validation approach: Classification network
[ WARNING ] No images processed
Considering that the .bmp images are 700x460 and the input shape of the .xml file is 1x3x299x299, I tried:
./validation_app -i /home/chan/Desktop/predict_inceptionV3/dataset -m '/home/chan/Desktop/pbModel/IRmodel/PredictModel.xml' --ppType ResizeCrop --ppWidth 299 --ppHeight 299
But the WARNING is still the same.
Does anyone know how to fix it?
Instead of using the folder structure, put your images into a single folder and create a text file with this content:
image0.bmp 0
image1.bmp 1
Use the path to this file as the -i argument for validation_app.
If you create the file inside the folder with the images, use just the image names; if you create it outside that folder, use full paths.
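For example, here is a minimal sketch of how the flat image folder and annotation file could be built from the per-class layout in the question. The names flat_images and annotation.txt are hypothetical, chosen just for illustration:

mkdir -p /home/chan/Desktop/predict_inceptionV3/flat_images
cd /home/chan/Desktop/predict_inceptionV3/dataset
# Copy every .bmp into the flat folder and append "<image name> <label>" lines,
# taking the label from the class subfolder name (0, 1, ...).
for label in */; do
    for img in "$label"*.bmp; do
        cp "$img" ../flat_images/
        echo "$(basename "$img") ${label%/}" >> ../flat_images/annotation.txt
    done
done

Since the annotation file sits in the same folder as the images here, plain image names are enough. The validation_app call would then point -i at the annotation file instead of the dataset folder, reusing the preprocessing flags from the question:

./validation_app -i /home/chan/Desktop/predict_inceptionV3/flat_images/annotation.txt -m '/home/chan/Desktop/pbModel/IRmodel/PredictModel.xml' --ppType ResizeCrop --ppWidth 299 --ppHeight 299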