Strange CLOSE_WAIT connection behavior using WildFly connection pool with Sybase DB

(Using WildFly 23, JPA, and Sybase Anywhere RDBMS.) I am seeing strange behavior in one of our production environments and am kind of lost, so I would appreciate suggestions on how to tackle this: at a specific point in time, which we cannot reproduce, connections in the WildFly datasource move to CLOSE_WAIT status. The CLOSE_WAIT numbers accumulate exponentially until the pool is exhausted and the application freezes and needs to be killed.
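For reference, when this happens the pool usage can be read through the WildFly CLI. This is only a sketch; 'MyDS' stands in for our real datasource name:

# 'MyDS' is a placeholder for the actual (xa-)datasource name.
# Reads the JCA pool runtime statistics (ActiveCount, InUseCount, AvailableCount, ...).
$JBOSS_HOME/bin/jboss-cli.sh --connect \
  --command="/subsystem=datasources/data-source=MyDS/statistics=pool:read-resource(include-runtime=true)"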

An interesting thing to note: we have a background script running each minute that counts the number of CLOSE_WAIT connections using 'netstat -anl | grep 50000 | grep CLOSE_WAIT | wc -l' (where 50000 is the database port). Normally, the output of this netstat count is 0. Suddenly, in a way we cannot correlate with application uptime, the use cases that are running, etc., the count jumps, in a matter of minutes, to hundreds of CLOSE_WAIT connections (our pool limit is large - about 500). Below is a log snippet of the CLOSE_WAIT count just to show that within minutes the count starts to grow exponentially.

...
08:54 AM - 0[CLOSE_WAIT] connections were found.
08:55 AM - 0[CLOSE_WAIT] connections were found.
08:56 AM - 45[CLOSE_WAIT] connections were found.
08:58 AM - 136[CLOSE_WAIT] connections were found.
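
The check itself is essentially the following loop. This is a simplified sketch of the background script described above; the file name 'close_wait.log' is just illustrative:

#!/bin/bash
# Count sockets to the DB port (50000) sitting in CLOSE_WAIT
# and log the number once a minute, matching the log format above.
while true; do
  count=$(netstat -anl | grep 50000 | grep CLOSE_WAIT | wc -l)
  echo "$(date +'%I:%M %p') - ${count}[CLOSE_WAIT] connections were found." >> close_wait.log
  sleep 60
done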

Reading some posts on similar issues, I understand that for CLOSE_WAIT to occur one of the sides needs to terminate the connection. How do I determine which side? And what reasons could 'make' that side want to close the connection? This is happening suddenly, in the middle of normal application work... Your help is appreciated.
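For what it's worth, the only additional diagnostics I could think of so far are a per-process view of the CLOSE_WAIT sockets and a packet capture around the DB port. A sketch, to be run on the WildFly host (50000 again being the database port):

# Show which local process owns the sockets currently in CLOSE_WAIT towards the DB port.
ss -tnp state close-wait '( dport = :50000 )'

# Capture FIN/RST segments on the DB port to see which endpoint initiates the close.
tcpdump -i any -nn 'port 50000 and (tcp[tcpflags] & (tcp-fin|tcp-rst) != 0)'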
