I am sending a protobuf from C++ to Java over a raw socket; the C++ program is the client and the Java program is the server. The C++ program generates a packet roughly every 1 ms and sends it to the Java program.
If I run the programs normally, only about half the packets are received.
If I set a breakpoint in the C++ program and then run the client and the server, all the packets are received.
How do I ensure that all packets are received without setting a breakpoint? Can I introduce a delay?
Each packet is at most 15 bytes in size.
By default, TCP sockets use Nagle's algorithm, which delays transmission of the next "unfilled" segment in order to reduce congestion. Your packets are small enough, and the interval between them short enough, that Nagle's algorithm coalesces several of them into a single TCP segment. The data is not lost; the receiver simply sees fewer, larger reads. The standard fix is to disable Nagle's algorithm on the sending socket with the TCP_NODELAY option.
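Here is a minimal sketch of disabling Nagle's algorithm on the C++ client, assuming a POSIX socket API; the address, port, and payload are placeholders for illustration, not taken from your code:

```cpp
#include <cstdio>
#include <arpa/inet.h>
#include <netinet/tcp.h>   // TCP_NODELAY
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    // Disable Nagle's algorithm: small writes are transmitted
    // immediately instead of being coalesced into larger segments.
    int flag = 1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) < 0) {
        perror("setsockopt(TCP_NODELAY)");
        return 1;
    }

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);                    // placeholder server port
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); // placeholder server address

    if (connect(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    const char msg[] = "serialized protobuf bytes";  // placeholder payload
    send(sock, msg, sizeof(msg) - 1, 0);

    close(sock);
    return 0;
}
```

Note that even with TCP_NODELAY set, TCP remains a byte stream: the Java server should not assume each read returns exactly one protobuf. Framing the messages, for example with a length prefix before each serialized protobuf, lets the receiver split the stream back into individual messages reliably.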