I'm working on breaking a 540x360 image into 60x60 tiles, each referred to by cv::Mat imga
in my code, then normalizing each tile into cv::Mat imga_normalized,
and finally pushing each normalized 60x60 tile into a std::vector<cv::Mat> arrayimganorm,
which ends up with 54 elements (540 / 60 = 9; 360 / 60 = 6; 9 x 6 = 54).
for (int r = 0; r < src.rows; r += 60) // src.rows = 540
{
    for (int c = 0; c < src.cols; c += 60) // src.cols = 360
    {
        cv::Mat imga = src(cv::Range(r, r + 60), cv::Range(c, c + 60)).clone();
        cv::Mat imga_normalized;
        imga.convertTo(imga_normalized, CV_32F, 1.0 / 255, 0); // scale to [0, 1]
        arrayimganorm.push_back(imga_normalized);
    }
}
After this I need to copy the data from arrayimganorm into an input tensor, but I don't know how.
I already tried these two ways, and neither works:
- Without memcpy:
std::vector<cv::Mat>* data = &arrayimganorm;
for (int w = 0; w < 54; w++)
{
    interpreter->typed_input_tensor<float>(0)[w] = *(data++);
}
// This gave me "no suitable conversion function from std::vector<cv::Mat>
// to float". What I had misunderstood is that imga_normalized is CV_32F
// and the first input dimension is 54.
- With memcpy:
for (int w = 0; w < 54; w++)
{
    memcpy(interpreter->typed_input_tensor<float>(0)[w],
           arrayimganorm[w].data,
           arrayimganorm[w].total() * arrayimganorm[w].elemSize());
}
// This gives me "type argument is incompatible with type parameter".
model = tflite::FlatBufferModel::BuildFromBuffer(ptr, sizeof(ptr));
// Note: if ptr is a raw pointer rather than an array, sizeof(ptr) is the size
// of the pointer itself, not the model's byte length.
if (model == nullptr)
{
fprintf(stderr, "Failed to load model\n");
exit(-1);
}
tflite::ops::builtin::BuiltinOpResolver resolver;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
if (interpreter == nullptr)
{
fprintf(stderr, "Failed to initialize the interpreter\n");
exit(-1);
}
interpreter->ResizeInputTensor(0, { 54, 60, 60, 1 });
// (AllocateTensors() has to be called again after resizing, before the
// tensor data can be used.)
// I did this because my original input tensor has 1 x 60 x 60 x 1
// dimensions (I converted a .pb to .tflite, which is another confusing
// issue, maybe for another day... I'll just say that in the .pb I got
// ? x 60 x 60 x 1 dimensions and the conversion gave me 1 x 60 x 60 x 1).
// I need faster results, and invoking the interpreter one input at a time
// was consuming a lot of time, so I changed the batch dimension to 54,
// because I have 54 60x60 matrices.
...
auto valor = interpreter->tensor(0)->dims->data[0]; //prints 54
auto height = interpreter->tensor(0)->dims->data[1]; //prints 60
auto width = interpreter->tensor(0)->dims->data[2]; //prints 60
auto channels = interpreter->tensor(0)->dims->data[3]; //prints 1
Any idea how I can pass this array of arrays to the input tensor?