I'm trying to use the TensorFlow C API in a C++ plugin environment, but the segmentation results differ from those of the Python graph. I was told it may have something to do with casting correctly between float and uint8, because the resulting image looks a bit like a 3x3 grid of the correct image, but as a newbie to C/C++ I don't see where exactly the error is. It works for simple classification tasks such as MNIST and for segmentation with grayscale inputs, but it doesn't work for segmentation of RGB images.
We use our own environment for image representations, but it is equivalent to an OpenCV Mat. I transform the image to a tensor like this:
void* tensor_data = image->Buffer().Ptr();
int64_t dims4[]={1,512,512,3};
int ndims = 4;
std::shared_ptr<TF_Tensor> tensor(
    TF_NewTensor(TF_FLOAT, dims4, ndims, tensor_data, 3*512*512*sizeof(float), noDealloc, nullptr),
    TF_DeleteTensor); // noDealloc leaves the image buffer alone; TF_DeleteTensor releases the TF_Tensor handle
So the error could be here, e.g. if the RGB data is read in the wrong order. But I also tried to segment an image whose three channels are identical, i.e. a grayscale image replicated to three channels, and it still didn't work.
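(To make the layout question concrete: if the source buffer were 8-bit interleaved RGB instead of float, an explicit conversion into a separate float buffer would look roughly like the sketch below. The names src_u8 and input_floats are placeholders, and the division by 255 is only an assumption about the Python preprocessing.)

std::vector<float> input_floats(1 * 512 * 512 * 3);                          // NHWC, batch of 1
const uint8_t* src_u8 = static_cast<const uint8_t*>(image->Buffer().Ptr());  // assumes interleaved HWC uint8
for (std::size_t i = 0; i < input_floats.size(); ++i) {
    input_floats[i] = static_cast<float>(src_u8[i]) / 255.0f;                // assumed scaling
}
// input_floats.data() would then be passed to TF_NewTensor instead of the raw buffer.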
Then I run the model. This part should be correct, since it works for other tasks, unless there is an error in TensorFlow itself.
//********* Read model
TF_Graph* Graph = TF_NewGraph();
TF_Status* Status = TF_NewStatus();
TF_SessionOptions* SessionOpts = TF_NewSessionOptions();
TF_Buffer* RunOpts = NULL;
const char* saved_model_dir = m_path.c_str(); // Path of the model
const char* tags = "serve"; // default model serving tag; can change in future
int ntags = 1;
TF_Session* Session = TF_LoadSessionFromSavedModel(SessionOpts, RunOpts, saved_model_dir, &tags, ntags, Graph, NULL, Status);
tf_utils::throw_status(Status);
//****** Get input tensor operation
int NumInputs = 1;
TF_Output* Input = (TF_Output*)malloc(sizeof(TF_Output) * NumInputs);
const std::string in_param_name = "input_op:" + std::to_string(0);
const std::string in_op_name = m_params.GetString(in_param_name.c_str(), "").c_str();
TF_Output t0 = {TF_GraphOperationByName(Graph, in_op_name.c_str()), 0};
if(t0.oper == NULL){
printf("ERROR: Failed TF_GraphOperationByName Input\n");
}
Input[0] = t0;
//********* Get Output tensor operation
int NumOutputs = 1;
TF_Output* Output = (TF_Output*)malloc(sizeof(TF_Output) * NumOutputs);
const std::string out_param_name = "output_op:" + std::to_string(0);
const std::string out_op_name = m_params.GetString(out_param_name.c_str(), "").c_str();
TF_Output t2 = {TF_GraphOperationByName(Graph, out_op_name.c_str()), 0};
if(t2.oper == NULL){
printf("ERROR: Failed TF_GraphOperationByName Output\n");
}
Output[0] = t2;
//********* Allocate data for inputs & outputs
TF_Tensor** InputValues = (TF_Tensor**)malloc(sizeof(TF_Tensor*)*NumInputs);
TF_Tensor** OutputValues = (TF_Tensor**)malloc(sizeof(TF_Tensor*)*NumOutputs);
InputValues[0] = tensor.get();
//********* Run the Session
TF_SessionRun(Session, NULL, Input, InputValues, NumInputs, Output, OutputValues, NumOutputs, NULL, 0,NULL , Status);
tf_utils::throw_status(Status);
//********* Free memory
TF_DeleteGraph(Graph);
TF_DeleteSession(Session, Status);
TF_DeleteSessionOptions(SessionOpts);
TF_DeleteStatus(Status);
std::shared_ptr<TF_Tensor> out_tensor(OutputValues[0], TF_DeleteTensor);
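(For reference, to rule out a wrong op name, something like the following could be run right after loading the session, before the graph is deleted; it just prints every operation in the loaded graph and is only a sketch, not code from the plugin:)

size_t pos = 0;
TF_Operation* oper = nullptr;
while ((oper = TF_GraphNextOperation(Graph, &pos)) != nullptr) {
    printf("op: %s (%s)\n", TF_OperationName(oper), TF_OperationOpType(oper));
}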
Then I convert it back to an image; this is where I think the error may be:
const TF_DataType tensor_type = TF_TensorType(out_tensor.get());
itwm_type = &ITWM::IMAGE_GREY_F; //Float image
// Create the image and copy the buffer.
const float* data = reinterpret_cast<float*>(TF_TensorData(out_tensor.get()));
const std::size_t byte_size = TF_TensorByteSize(out_tensor.get());
const std::size_t size = byte_size/sizeof(float);
ITWM::CImage* image = new ITWM::CImage(*itwm_type, ITWM::CSize(size));
memcpy(image->Buffer().Ptr(), data, byte_size);
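(For reference, the output tensor's type and shape could be checked before the copy with something like this sketch; the prints are placeholders:)

const int ndims_out = TF_NumDims(out_tensor.get());
printf("dtype: %d, rank: %d\n", static_cast<int>(tensor_type), ndims_out);
for (int i = 0; i < ndims_out; ++i) {
    printf("dim %d: %lld\n", i, static_cast<long long>(TF_Dim(out_tensor.get(), i)));
}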
I tried casting it to different formats, but the error stays the same or the results are NaN. I also tried changing the input to three grayscale images stacked together, but it still didn't work. I would be very thankful if you could help me find the error!
PS: Sorry that you can't run it and that it's a bit messy; I copied it together from three different plugins.
From comments,