I'm working on a school project involving UDP streaming in Python, where a client (`udp_stream.py`) sends messages to a server (`udp_stream_server.py`). The client is designed to send a fixed number of messages (800) at a fixed rate (40/s), so a run should take about 20 seconds, but I'm experiencing unexpected delays and the actual execution time is consistently longer than that.
I've checked my code for potential bottlenecks and adjusted the sleep time, but the issue persists. The server script simply prints each incoming message along with the client's address.
Any insights into why the UDP streaming might be slower than anticipated would be greatly appreciated. I'm specifically looking for guidance on making the Python UDP client's timing consistent.
Thank you!
`udp_stream.py`:

```python
from socket import *
import time


def returnTime():
    t = time.localtime()
    current_time = time.strftime("%H:%M:%S", t)
    milliseconds = int((time.time() % 1) * 1000)  # Extract milliseconds
    return f"{current_time}.{milliseconds:03d}"


def log():
    return f"[CLIENT:] {returnTime()}:"


def udp_stream(target_ip, target_port, message_total, message_rate):
    client_socket = socket(AF_INET, SOCK_DGRAM)
    message_number = 10001
    message_size = 1470
    sent_messages = 0
    first_sent = log()
    for x in range(message_total):
        sent_messages += 1
        if sent_messages < message_total:
            message = f"{message_number};{'A' * (message_size - len(str(message_number)) - 1)}"
            client_socket.sendto(message.encode(), (target_ip, target_port))
            time.sleep(1 / message_rate)
            message_number = message_number + 1
        elif sent_messages == message_total:
            # The final message is marked with a "####" trailer
            message = f"{message_number};{'A' * (message_size - len(str(message_number)) - 5)}####"
            client_socket.sendto(message.encode(), (target_ip, target_port))
            message_number = message_number + 1
            time.sleep(1 / message_rate)
    last_sent = log()
    print(first_sent, last_sent)


udp_stream('localhost', 8080, 800, 40)
```
`udp_stream_server.py`:

```python
from socket import *
import time


def returnTime():
    t = time.localtime()
    current_time = time.strftime("%H:%M:%S", t)
    milliseconds = int((time.time() % 1) * 1000)  # Extract milliseconds
    return f"{current_time}.{milliseconds:03d}"


serverPort = 8080
serverSocket = socket(AF_INET, SOCK_DGRAM)
serverSocket.bind(('', serverPort))
print(f'[SERVER: {returnTime()}]: UDP Server has started listening on port: {serverPort}')
while True:
    # Read the client's message and remember the client's address (IP and port).
    # The buffer must be at least as large as the 1470-byte datagrams the
    # client sends; 1024 would truncate them (or raise an error on Windows).
    message, clientAddress = serverSocket.recvfrom(2048)
    # Print the message and the client's address
    print(f"[SERVER: {returnTime()}]: Message from client: ", message.decode())
    print(f"[SERVER: {returnTime()}]: Client-IP: ", clientAddress)
```
I checked the code for potential bottlenecks, adjusted the sleep time to try to hold the desired message rate, and examined the server script (`udp_stream_server.py`), which simply prints incoming messages and client addresses.
As mentioned in the comments, compute the absolute time at which each message should be sent and sleep until that time. Sleeping a fixed 1/rate after every send ignores the time spent in `sendto()` and in OS thread scheduling, so those errors accumulate over 800 messages; sleeping until an absolute deadline instead lets the variations average out.
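A minimal sketch of that pacing loop (the function and variable names are illustrative, not from the original code):

```python
import time

def paced_send(send_one, message_total, message_rate):
    """Call send_one() message_total times, paced against absolute deadlines."""
    interval = 1 / message_rate
    start = time.perf_counter()
    for i in range(message_total):
        send_one(i)
        # The deadline for the next message is measured from the start of
        # the run, so per-message jitter corrects itself instead of
        # accumulating.
        sleep_time = start + (i + 1) * interval - time.perf_counter()
        if sleep_time > 0:
            time.sleep(sleep_time)
        # If sleep_time < 0 we are behind schedule and send immediately.
```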
Below is an implementation of a server that first receives a message with the expected test parameters, then receives each test message and records its arrival time for later computation of statistics. Note that because a client may abort early, and because UDP is unreliable and can drop or reorder packets, a sync packet was implemented so the server is ready for the next test without having to be restarted.
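A minimal sketch of such a server, assuming a `SYNC`/`ACK` handshake and a `rate;count` parameter packet (both illustrative protocol choices, not the only way to do it):

```python
from socket import socket, AF_INET, SOCK_DGRAM
import time

PORT = 8080

sock = socket(AF_INET, SOCK_DGRAM)
sock.bind(('', PORT))
print(f'[SERVER] listening on port {PORT}')

state = 'wait_sync'     # wait_sync -> wait_params -> receiving
rate = count = 0
arrivals = []

while True:
    data, addr = sock.recvfrom(2048)
    now = time.perf_counter()
    if data == b'SYNC':
        # A SYNC always resets the state machine, so an aborted test or
        # lost packets never force a server restart.
        sock.sendto(b'ACK', addr)
        state, arrivals = 'wait_params', []
    elif state == 'wait_params':
        rate, count = map(int, data.decode().split(';'))  # "rate;count"
        state = 'receiving'
    elif state == 'receiving':
        arrivals.append(now)
        if len(arrivals) >= count:
            intervals = [b - a for a, b in zip(arrivals, arrivals[1:])]
            if intervals:
                print(f'rate={rate} count={count} '
                      f'min={min(intervals) * 1000:.3f} ms '
                      f'max={max(intervals) * 1000:.3f} ms '
                      f'avg={sum(intervals) / len(intervals) * 1000:.3f} ms')
            state = 'wait_sync'
    # Anything that arrives in wait_sync and isn't SYNC is a leftover
    # datagram from an aborted test and is silently dropped.
```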
Below is a client for that server: it sends the test parameters, then sends each message at a specific absolute time derived from the message rate.
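Again as a sketch matching the assumed protocol above (the 0.5 s retry timeout is an arbitrary choice):

```python
from socket import socket, AF_INET, SOCK_DGRAM
import time

def udp_stream(target_ip, target_port, message_total, message_rate):
    sock = socket(AF_INET, SOCK_DGRAM)
    addr = (target_ip, target_port)

    # Resend SYNC until the server acknowledges, so the server is in a
    # known state even if a previous test was aborted.
    sock.settimeout(0.5)
    while True:
        sock.sendto(b'SYNC', addr)
        try:
            if sock.recvfrom(2048)[0] == b'ACK':
                break
        except OSError:        # timeout: ACK lost or server busy, retry
            pass
    sock.settimeout(None)

    # Announce the test parameters, then send the paced stream.
    sock.sendto(f'{message_rate};{message_total}'.encode(), addr)
    payload = b'A' * 1470
    interval = 1 / message_rate
    start = time.perf_counter()
    late = 0
    for i in range(message_total):
        sock.sendto(payload, addr)
        # Sleep until the absolute deadline of the next message.
        sleep_time = start + (i + 1) * interval - time.perf_counter()
        if sleep_time > 0:
            time.sleep(sleep_time)
        else:
            late += 1          # behind schedule: send immediately
    elapsed = time.perf_counter() - start
    print(f'rate={message_rate} count={message_total} '
          f'elapsed={elapsed:.3f} s late={late}')

udp_stream('localhost', 8080, 800, 40)
```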
Example run with rates 1, 10, 40, 1000, and 3000: on my system only one packet (at rate 1000) was sent after its expected absolute time (`sleep_time < 0`), but the min/max intervals had a larger spread. At rate 3000 the system couldn't maintain the rate: it took ~10.7 seconds to send 3000 messages instead of the expected 10 seconds.