We have created 5 deep learning models to deploy to a Texas Instruments CC1352P7-1 LaunchPad microcontroller. Using TensorFlow Lite we converted them to .tflite, and then, using Code Composer Studio for TI, we compiled and flashed the code to the MCU. However, when running inference with manual inputs we end up in faultISR(), the interrupt handler that is entered when the program runs into memory issues. We have narrowed the problem down to the following lines:
const int tensor_arena_size = 4 * 1024;
uint8_t tensor_arena[tensor_arena_size];
tflite::MicroInterpreter interpreter(model, op_resolver, tensor_arena, tensor_arena_size);
interpreter.AllocateTensors();
Specifically, interpreter.AllocateTensors() is where the error occurs, and it is probably caused by the tensor_arena and/or tensor_arena_size we provide not being what the interpreter expects.
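For completeness, this is roughly how the full setup looks on our side, with an explicit status check added so a failed allocation is caught instead of surfacing later as a fault; the model header name and the registered ops are placeholders, since I am not sure they match what our generated code actually uses:

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "standing_not_quantized_tflite.h"  // placeholder name for the generated model header

// We have been assuming the size argument is a byte count, so 4 * 1024 is 4096 bytes.
constexpr int tensor_arena_size = 4 * 1024;
alignas(16) static uint8_t tensor_arena[tensor_arena_size];

void setup_model() {
  const tflite::Model* model = tflite::GetModel(standing_not_quantized_tflite);

  // Register the ops the model uses; these three are placeholders.
  static tflite::MicroMutableOpResolver<3> op_resolver;
  op_resolver.AddFullyConnected();
  op_resolver.AddRelu();
  op_resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, op_resolver, tensor_arena,
                                              tensor_arena_size);

  // Check the return value instead of letting the failure surface as faultISR().
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    while (true) {}  // allocation failed: arena too small or model data invalid
  }
}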
The header file with the array of values for the model weights reports a length of 19228:
unsigned int standing_not_quantized_tflite_len = 19228;
We tried changing the tensor_arena_size to different values:
4 * 32, 4 * 64, 4 * 128, 4 * 256, 4 * 512, 4 * 20000
I am not able to understand why the memory allocation is failing. Is it because of a mismatch in the datatypes being specified, or a mismatch in sizes? Is the allocation counted in bits, bytes, or kilobytes? Am I doing my calculations in kilobytes while it expects sizes in bytes?
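For reference, this is the arithmetic I have been working from, on the assumption that both numbers are plain byte counts (that is an assumption on my part, not something I have confirmed):

#include <cstdint>

// Assumption: the last MicroInterpreter argument is a byte count.
constexpr int tensor_arena_size = 4 * 1024;   // = 4096 bytes, i.e. a 4 KiB buffer
static uint8_t tensor_arena[tensor_arena_size];
static_assert(sizeof(tensor_arena) == 4096, "4 * 1024 uint8_t elements occupy 4096 bytes");

// For comparison, the serialized model data in the generated header:
// standing_not_quantized_tflite_len = 19228 bytes (about 19 KiB)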
I don't have experience with C++ programming, and this is our first project deploying models to MCUs, so any help will be appreciated.
Thanks.