Possible SDL_Net issue and Using openCV to display a YUV camera frame from memory


I am having issues getting an image from the camera on a Raspberry Pi, sending it over a network to a PandaBoard (running Ubuntu 12.04), and displaying it correctly there. The data I get from the camera is raw YUV at 1280x720 resolution.

I think my SDL calls are fine, but here is the send code. Feel free to point out anything that looks clearly wrong.

void Client::SendData(const void* buffer, int bufflen)
{
    /*
        Some code to check if connected to server and if socket is not null
    */

    if(SDLNet_TCP_Send(clientSocket, buffer, bufflen) < bufflen)
    {
        std::cerr << "SDLNet_TCP_Send: " << SDLNet_GetError() << std::endl;
        return;
    }
}

Now the receive code:

void Server::ReceiveDataFromClient()
{
    /*
        code to check if data is being sent
    */
    //1382400 is the size of the image in bytes, before it is sent. This data
    //is in bufflen in the send func and, to my knowledge, is correct.
    if(SDLNet_TCP_Recv(clientSocket, buffer, 1382400) <= 0)
    {
        std::cout << "Client disconnected" << std::endl;
        /*Code to shut down socket and socketset.*/
    }
    else //client is sending data
    {
        //buffer is an int* at the moment, I have tried it as a uint8_t* and a char*
        setUpOpenCVToDisplayChunk(buffer);
    }
}

So, I take the buffer directly from Recv, which, as far as I know, should only return once it has received all the data from a single send. I therefore think that code is fine, but it's here in case anyone can spot any issues, as I am struggling with this at the moment.
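In case it is relevant, here is a rough sketch (not my actual code) of what I think a "read until the frame is complete" loop would look like, if it turns out SDLNet_TCP_Recv can return with fewer bytes than I asked for; the frame size and socket parameters below just mirror the code above:

#include <SDL/SDL_net.h> //SDL 1.2 header path on my machines; may differ
#include <iostream>

//Sketch only: read exactly frameSize bytes, looping because a single
//SDLNet_TCP_Recv call may return less than requested.
bool ReceiveWholeFrame(TCPsocket socket, Uint8* buffer, int frameSize)
{
    int received = 0;
    while(received < frameSize)
    {
        int result = SDLNet_TCP_Recv(socket, buffer + received, frameSize - received);
        if(result <= 0) //0 or negative means the connection closed or an error occurred
        {
            std::cerr << "SDLNet_TCP_Recv: " << SDLNet_GetError() << std::endl;
            return false;
        }
        received += result;
    }
    return true;
}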

Lastly, my OpenCV display code:

void Server::setUpOpenCVToDisplayChunk(int* data)
{
    //I have tried different bit depths also
    IplImage* yImageHeader = cvCreateImageHeader(cvSize(1280, 720), IPL_DEPTH_8U, 1);

    //code to check yImage header is created correctly
    cvSetData(yImageHeader, data, yImageHeader->widthStep);
    cvNamedWindow("win1", CV_WINDOW_AUTOSIZE);
    cvShowImage("win1", yImageHeader);
}
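For reference, a self-contained version of the display path as I understand the old C API (cvWaitKey so that HighGUI actually draws the window, and cvReleaseImageHeader afterwards since the pixel data is owned elsewhere) would look roughly like the sketch below. It only shows the Y plane as greyscale and is not my exact code:

#include <opencv/cv.h>
#include <opencv/highgui.h>

//Sketch only: display one 8-bit greyscale (Y plane) 1280x720 frame whose
//pixel data lives in externally owned memory.
void ShowYPlane(unsigned char* data)
{
    IplImage* yImageHeader = cvCreateImageHeader(cvSize(1280, 720), IPL_DEPTH_8U, 1);
    cvSetData(yImageHeader, data, 1280); //widthStep == width for tightly packed 8-bit rows

    cvNamedWindow("win1", CV_WINDOW_AUTOSIZE);
    cvShowImage("win1", yImageHeader);
    cvWaitKey(1); //gives HighGUI a chance to process events and draw the frame

    cvReleaseImageHeader(&yImageHeader); //releases only the header, not the pixel data
}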

Sorry for all the "code here to do this" parts; I am typing the code out by hand.

So, can anyone say what the issue could be in either of these parts? There is no error; I just get muddled-up images, which I can tell are images, just put together wrongly or incomplete.

If anyone needs more info or more code, just ask and I will put it up. Cheers.


There is 1 answer below.


Try converting the frames from YUV to RGB. http://en.wikipedia.org/wiki/YUV describes how YUV-formatted data is converted to RGB, and you may also find readily available code to do it. Check the format of the YUV data output from your camera and use the correct transformation.
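For example, assuming the camera is delivering planar YUV420 (I420), which matches the 1280 x 720 x 1.5 = 1382400 bytes mentioned in the question, and assuming an OpenCV build recent enough to have the YUV420 conversion codes, the conversion could look roughly like the sketch below. The conversion constant would need to change (for example to CV_YUV2BGR_NV12 or CV_YUV2BGR_NV21) if the camera actually interleaves the chroma planes:

#include <opencv2/opencv.hpp>

//Sketch only: wrap the received buffer as a single-channel Mat with height*3/2
//rows, convert YUV420 planar (I420) to BGR, and show the result.
void DisplayYUVFrame(unsigned char* data, int width, int height)
{
    cv::Mat yuv(height + height / 2, width, CV_8UC1, data); //wraps 'data', no copy
    cv::Mat bgr;
    cv::cvtColor(yuv, bgr, CV_YUV2BGR_I420); //adjust the code for NV12/NV21 etc.

    cv::imshow("win1", bgr);
    cv::waitKey(1);
}

//e.g. DisplayYUVFrame(buffer, 1280, 720);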