How to decide optimal settings for setMaxTotal and setDefaultMaxPerRoute?


I have a RestService running on 45 different machines across three datacenters (15 in each datacenter). I have a client library that uses RestTemplate to call these machines depending on where the call is coming from. If the call is coming from DC1, then my library will call my RestService running in DC1, and similarly for the other datacenters.

My client library is running on different machines (not on same 45 machines) in three datacenters.

I am using RestTemplate with HttpComponentsClientHttpRequestFactory as shown below:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.http.client.config.RequestConfig;
import org.apache.http.config.SocketConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.springframework.http.client.ClientHttpRequestFactory;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

public class DataProcess {

    private RestTemplate restTemplate = new RestTemplate();
    private ExecutorService service = Executors.newFixedThreadPool(15);

    // singleton class so only one instance
    public DataProcess() {
        restTemplate.setRequestFactory(clientHttpRequestFactory());
    }

    public DataResponse getData(DataKey key) {
        // do some stuff here which will internally call our RestService
        // by using DataKey object and using RestTemplate which I am making below
    }   

    private ClientHttpRequestFactory clientHttpRequestFactory() {
        HttpComponentsClientHttpRequestFactory requestFactory = new HttpComponentsClientHttpRequestFactory();
        RequestConfig requestConfig = RequestConfig.custom().setConnectionRequestTimeout(1000).setConnectTimeout(1000)
                .setSocketTimeout(1000).setStaleConnectionCheckEnabled(false).build();
        SocketConfig socketConfig = SocketConfig.custom().setSoKeepAlive(true).setTcpNoDelay(true).build();

        PoolingHttpClientConnectionManager poolingHttpClientConnectionManager = new PoolingHttpClientConnectionManager();
        poolingHttpClientConnectionManager.setMaxTotal(800);
        poolingHttpClientConnectionManager.setDefaultMaxPerRoute(700);

        CloseableHttpClient httpClient = HttpClientBuilder.create()
                .setConnectionManager(poolingHttpClientConnectionManager).setDefaultRequestConfig(requestConfig)
                .setDefaultSocketConfig(socketConfig).build();

        requestFactory.setHttpClient(httpClient);
        return requestFactory;
    }

}

And this is how people will call our library, passing a DataKey object:

DataResponse response = DataClientFactory.getInstance().getData(dataKey);

Now my question is:

How should I decide what values to choose for setMaxTotal and setDefaultMaxPerRoute on the PoolingHttpClientConnectionManager object? As of now I am going with 800 for setMaxTotal and 700 for setDefaultMaxPerRoute. Are these reasonable numbers, or should I go with something else?

My client library will be used under very heavy load in a multithreaded project.

There are 3 answers below.

Best answer (8 votes):

There is no formula or recipe that applies to all scenarios. Generally, with blocking I/O one should have approximately the same max-per-route setting as the number of worker threads contending for connections.

So, having 15 worker threads and a limit of 700 connections makes little sense to me.
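As a sketch of that rule of thumb (assuming the 15-thread ExecutorService from the question and Apache HttpClient 4.x on the classpath; the variable names here are illustrative, not from the original code), the pool could be sized to match the worker threads rather than using 700/800:

```java
// Sketch only: sizing the pool to the 15 worker threads from the question.
// Assumes Apache HttpClient 4.x; one route per datacenter endpoint.
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();

int workerThreads = 15; // size of the ExecutorService contending for connections
int routes = 3;         // one route per datacenter

cm.setDefaultMaxPerRoute(workerThreads);  // ~one connection per contending thread
cm.setMaxTotal(workerThreads * routes);   // upper bound across all routes
```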

Answer (1 vote):

Let us try to come up with a formula for computing the pool size.

R: average response time of an HTTP call, in milliseconds
Q: required throughput, in requests per second

In order to achieve Q, you will need approximately t = Q*R/1000 threads to process your requests. For these threads not to contend for HTTP connections, you should have at least t connections in the pool at any point in time.

Example: I have a web server which fetches the result and return it as a response.

Q = 700 rps
R = 50 ms
t = 700 * 50 / 1000 = 35

So you would need at least 35 connections per HTTP route, and your total connection count would be 35 * the number of routes (3) = 105.
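The arithmetic above can be sketched in plain Java (the class and method names here are illustrative, not part of any library):

```java
// Sketch of the sizing formula t = Q * R / 1000, using the example
// figures from this answer (Q = 700 rps, R = 50 ms, 3 routes).
public class PoolSizeEstimate {

    // Connections needed per route to sustain qRps requests/sec
    // when each call takes rMillis milliseconds on average.
    static int connectionsPerRoute(int qRps, int rMillis) {
        return (int) Math.ceil(qRps * rMillis / 1000.0);
    }

    public static void main(String[] args) {
        int perRoute = connectionsPerRoute(700, 50); // 700 * 50 / 1000 = 35
        int routes = 3;                              // one per datacenter

        System.out.println("maxPerRoute = " + perRoute);          // 35
        System.out.println("maxTotal    = " + perRoute * routes); // 105
    }
}
```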

PS: This is a very simple formula; the actual relation between pool size, throughput, and response time is not straightforward. One particular case that comes to mind is that beyond a certain value, response time starts increasing as you increase the pool size.

Answer (0 votes):

Apparently there is no definitive formula for this situation. The relation between pool size, throughput, and response time is not simple. One particular case that comes to mind is that beyond a certain value, response time starts increasing as you increase the pool size.

Generally, with blocking I/O one should have approximately the same max-per-route setting as the number of worker threads contending for connections.