Context

I have been building a system in which a Raspberry Pi sends images to a remote client in real time. The Raspberry Pi captures the images with a Raspberry Pi camera. A captured image is available as a 3-dimensional array of all the pixels (rows, columns and RGB). By sending and displaying the images fast enough, they appear as a video to the user.

My goal is to send these images in real time at the highest possible resolution. An acceptable frame rate is around 30 fps. I chose UDP over TCP because UDP transfers data faster due to its lower overhead, and retransmission of individual packets is unnecessary: losing some pixels is acceptable in my case. The Raspberry Pi and the client are on the same network, so few packets should be dropped anyway.

Taking into account that the maximum transmission unit (MTU) on the Ethernet layer is 1500 bytes, and that the UDP packets should not be fragmented or dropped, I selected a maximum payload length of 1450 bytes, of which 1447 bytes are data and 3 bytes are application-layer overhead. The remaining 50 bytes are reserved for the headers that the IP and transport layers add automatically.

I mentioned that captured images are available as an array. Assuming this array is, for example, 1,036,800 bytes (width 720 × height 480 × 3 colors), then 717 (⌈1,036,800 / 1447⌉) UDP packets are needed to send the entire array. The C++ application on the Raspberry Pi does this by fragmenting the array into chunks of 1447 bytes and adding a fragment index number, between 1 and 717, as overhead to each packet. We also add an image number, to distinguish it from a previously sent image/array. The packet looks like this: (image: UDP packet layout)

Problem

On the client side, I developed a C# application that receives all the packets and reassembles the array using the included index numbers. Using the Emgu CV library, the received array is converted to an image and drawn in a GUI. However, some of the received images are drawn with black lines/chunks. While debugging, I discovered that this problem is not caused by drawing the image: the black chunks are actually array fragments that never arrived. Because byte values in an array are initialized to 0 by default, the missing fragments show up as black chunks.

Debugging

Using Wireshark on the client's side, I searched for the index of such a missing fragment and was surprised to find it, intact. This means the data was received correctly at the transport layer (and observed by Wireshark), but never read at the application layer.

This image shows that a chunk of a received array is missing at index 174,000. Because there are 1447 data bytes per packet, the index of this missing data corresponds to the UDP packet with fragment index 121 (174,000 / 1447, rounded up). The hexadecimal equivalent of 121 is 0x79. The following image shows the corresponding UDP packet in Wireshark, proving the data was still intact at the transport layer. (image)

What I have tried so far

  1. When I lower the frame rate, there are fewer black chunks, and they are often smaller. At a frame rate of 3 fps there is no black at all. However, this frame rate is not acceptable. It corresponds to only about (3 fps × 720 × 480 × 3 bytes) 3,110,400 bytes per second (~25 Mbit/s). A normal computer should be capable of reading far more than this. And as I explained, the packets DID arrive in Wireshark; they are just not read at the application layer.

  2. I have also tried changing the UDP payload length from 1447 to 500 bytes. This only made it worse (see image).

  3. I implemented multi-threading so that data is read and processed on different threads.

  4. I tried a TCP implementation. The images arrived intact, but it was not fast enough to transfer the images in real time.

It is notable that a 'black chunk' does not represent a single missing fragment of 1447 bytes, but many consecutive fragments. So at some point while reading data, a number of packets is not read. Also, not every image has this problem; some arrive intact.

I am wondering what is wrong with my implementation that results in this unwanted effect, so I am posting some of my code below. Please note that the exception 'SocketException' is never actually thrown, and the Console.WriteLine for 'invalid overhead' is never printed either. _client.Receive always receives 1450 bytes, except for the last fragment of an array, which is smaller.

Also

Besides solving this bug, if anyone has suggestions for transmitting these arrays more efficiently (requiring less bandwidth but without quality loss), I would gladly hear them, as long as the solution has the array as input/output on both endpoints.

Most importantly: NOTE that the missing packets were never returned by the UdpClient.Receive() method. I did not post the code for the C++ application running on the Raspberry Pi, because the data did arrive (in Wireshark) as shown above. So the transmission is working fine, but the receiving is not.

private const int ClientPort = 50000;
private UdpClient _client;
private Thread _receiveThread;
private Thread _processThread;
private volatile bool _started;
private ConcurrentQueue<byte[]> _receivedPackets = new ConcurrentQueue<byte[]>();
private IPEndPoint _remoteEP = new IPEndPoint(IPAddress.Parse("192.168.4.1"), 2371);

public void Start()
{
    if (_started)
    {
         throw new InvalidOperationException("Already started");
    }
    _started = true;
    _client = new UdpClient(ClientPort);
    _receiveThread = new Thread(new ThreadStart(ReceiveThread));
    _processThread = new Thread(new ThreadStart(ProcessThread));
    _receiveThread.Start();
    _processThread.Start();
}

public void Stop()
{
    if (!_started)
    {
        return;
    }
    _started = false;
    _receiveThread.Join();
    _receiveThread = null;
    _processThread.Join();
    _processThread = null;
    _client.Close();
}

public void ReceiveThread()
{
    _client.Client.ReceiveTimeout = 100;
    while (_started)
    {
        try
        {
            byte[] data = _client.Receive(ref _remoteEP);
            _receivedPackets.Enqueue(data);
        }
        catch(SocketException ex)
        {
            Console.WriteLine(ex.Message);
            continue;
        }
    }
}

private void ProcessThread()
{
    while (_started)
    {
        byte[] data;
        bool dequeued = _receivedPackets.TryDequeue(out data);
        if (!dequeued)
        {
            continue;
        }
        int imgNr = data[0];
        int fragmentIndex = (data[1] << 8) | data[2];
        if (imgNr <= 0 || imgNr > 255 || fragmentIndex <= 0)
        {
            Console.WriteLine("Received data with invalid overhead");
            continue; // skip this packet; a return here would silently end the thread
        }
        // I omitted the code for this method because it does not interfere with
        // the socket and is therefore not really relevant to the issue described
        ProccessReceivedData(imgNr, fragmentIndex, data);
    }
}