NGINX Reverse Proxy Server sending RST and FIN every 5 Minutes of inactivity


My current nginx setup always kills the TCP connection after 5 minutes of inactivity, i.e. no transactions.

I have a setup that requires a TLS 1.2 connection from my internal network [client application] to a server on the public network. It uses raw TCP only (not HTTP/HTTPS). The client application does not support TLS 1.2, hence the introduction of an nginx proxy/reverse proxy for TLS wrapping. See the diagram below:

             Internal Network                   | INTERNET/Public

[Client Application] <------> [NGINX Reverse Proxy] <----|----> [Public Server]

           <Non TLS TCP Traffic>             <TLS 1.2>
  • using stream module
  • no error shown in nginx error log
  • access log shows TCP status 200, but the session lasts only 300 s every time (recorded in the access_log)
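To confirm the pattern, the `$session_time` field can be pulled out of each line of the `basic` stream log format defined in the config below. This is a minimal sketch; the sample log line is illustrative only, not taken from the real server:

```python
import re

# Hypothetical line in the `basic` stream log_format (illustrative values).
line = ('10.0.0.5 [01/Jan/2024:12:00:00 +0800] TCP 200 4321 1234 300.123 '
        '203.0.113.7:35012 "1234" "4321" "0.045"')

# Drop the bracketed $time_local first (it contains a space), then split.
rest = re.sub(r'\[[^\]]*\]\s*', '', line, count=1)
fields = rest.split()
# fields: remote_addr, protocol, status, bytes_sent, bytes_received, session_time, ...
session_time = float(fields[5])
print(session_time)  # 300.123
```

Running this over the real stream.access.log should show every idle session clustering at roughly 300 s.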

Below is my nginx configuration:

# more nginx.conf

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 2048;
}

stream {
    resolver 127.0.0.1;
    include /etc/nginx/conf.d/*.conf;

    log_format basic '$remote_addr [$time_local] '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time $upstream_addr '
                     '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';

    access_log /var/log/nginx/stream.access.log basic;
    error_log /var/log/nginx/error_log;

    server {
        listen 35012;
        proxy_pass X.X.X.X:35012;
        proxy_timeout 86400s;
        proxy_connect_timeout 1200s;
        proxy_socket_keepalive on;
        ssl_session_cache shared:SSL:5m;
        ssl_session_timeout 30m;

        # For securing TCP traffic with upstream servers.
        proxy_ssl on;
        proxy_ssl_certificate /etc/ssl/certs/backend.crt;
        proxy_ssl_certificate_key /etc/ssl/certs/backend.key;
        proxy_ssl_protocols TLSv1.2;
        proxy_ssl_ciphers HIGH:!aNULL:!MD5;

        # proxy_ssl_trusted_certificate /etc/ssl/certs/trusted_ca_cert.crt;
        # proxy_ssl_verify on;
        proxy_ssl_verify_depth 2;

        # Have nginx reuse previously negotiated TLS session parameters
        # (abbreviated handshake) - faster.
        proxy_ssl_session_reuse on;
    }
}


After capturing the TCP packets and inspecting them in Wireshark, I found that nginx sends an RST to the public server and then a FIN/ACK to the client application (refer to the attached pcap screenshot).

I have tried enabling the keepalive-related parameters shown in the nginx config above and have also checked the OS's TCP tunables, but I could not find any setting that would make nginx kill the TCP connection.
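One thing worth noting about that attempt: `proxy_socket_keepalive on;` only enables SO_KEEPALIVE on the upstream socket; the probe timing still comes from the kernel, and on Linux the default `net.ipv4.tcp_keepalive_time` is 7200 s, so no keepalive probe would ever be sent inside a 300 s idle window unless the tunable is lowered. A rough sketch of what that option amounts to at the socket level (Linux option names; the 60 s idle value is an assumption chosen to stay under a ~300 s idle cutoff):

```python
import socket

# Roughly what `proxy_socket_keepalive on;` does: enable SO_KEEPALIVE on the
# upstream-facing socket. Probe timing is still governed by kernel defaults
# (7200 s idle on Linux) unless overridden per-socket as below.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-specific per-socket overrides
    # Start probing after 60 s idle instead of 7200 s, then probe every 10 s,
    # giving up after 3 failed probes.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)

print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # 1 when enabled
s.close()
```

So even with `proxy_socket_keepalive on;`, checking `sysctl net.ipv4.tcp_keepalive_time` (or an intermediate firewall's idle timeout) may matter more than the nginx directives themselves.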

Every time there is no transaction (inactivity) for more than 5 minutes, the session is terminated, and the access log records a session time of roughly 300 s, every single time.

Has anyone encountered the same issue? I am stuck for now and would appreciate any insights.

