RFC 7230, which defines the HTTP/1.1 protocol, has an interesting passage in Section 6.6, "Tear-down" (under "Connection Management"):
To avoid the TCP reset problem, servers typically close a connection in stages. First, the server performs a half-close by closing only the write side of the read/write connection. The server then continues to read from the connection until it receives a corresponding close by the client, or until the server is reasonably certain that its own TCP stack has received the client's acknowledgement of the packet(s) containing the server's last response. Finally, the server fully closes the connection.
Basically, it boils down to the following:
char throwaway_buffer[4096];
shutdown(s, SD_SEND);             /* half-close: we are done sending */
while (recv(s, throwaway_buffer, sizeof(throwaway_buffer), 0) > 0)
    ;                             /* drain until the client closes its side */
closesocket(s);
which is the standard way of performing a graceful shutdown. However, the RFC also acknowledges that a misbehaving client may exist (one that keeps sending requests even after receiving a response with a Connection: close header), and that the server has to cope with it by resetting the connection once it is sure the client has received the last response.
However, the socket interface doesn't seem to provide any way to learn whether all the data passed to send() has actually been transmitted and ACK'd by the remote host. Is it actually there? Without it, all I can think of is to set up a timer of sorts and call recv() until it either signals that the remote host has closed the connection or the time runs out, whichever comes first. But what would be an appropriate timeout? Is 60 seconds okay?
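For concreteness, here is roughly what I have in mind; a sketch only, Winsock-flavored to match the snippet above, with the 4096-byte buffer and the 60-second figure being arbitrary choices on my part (note that SO_RCVTIMEO bounds each individual recv() call, not the whole drain):

#include <winsock2.h>

/* Drain-with-timeout sketch: give the client up to 60 seconds per recv()
 * to finish and close its side before we give up. */
DWORD timeout_ms = 60 * 1000;
setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, (const char *)&timeout_ms, sizeof(timeout_ms));

shutdown(s, SD_SEND);                      /* half-close: we are done sending */

char buf[4096];
int r;
while ((r = recv(s, buf, sizeof(buf), 0)) > 0)
    ;                                      /* discard whatever the client still sends */
/* r == 0: client closed its side; r == SOCKET_ERROR with WSAETIMEDOUT: we gave up */
closesocket(s);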
The Sockets interface does provide this facility, via the little-used and less-understood SO_LINGER option. It allows you, inter alia, to define a timeout during which close() and possibly shutdown() will block while pending data is being sent. It is of little practical use and, as I've stated, it is rarely used ... at least, rarely used correctly.
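A minimal sketch of setting it, assuming a blocking BSD-style socket s and an arbitrary 30-second cap (on Winsock the last call would be closesocket()):

#include <sys/socket.h>      /* struct linger, setsockopt, SO_LINGER */
#include <unistd.h>          /* close */

struct linger lin;
lin.l_onoff  = 1;            /* enable lingering on close */
lin.l_linger = 30;           /* block close() for up to 30 seconds */
setsockopt(s, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin));
close(s);                    /* returns once the queued data is ACK'd or the timeout expires */

Note also that the same option with l_linger set to 0 has the opposite effect: close() then aborts the connection and, on the stacks I know of, sends an RST, which is the "reset" the RFC passage alludes to.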