We use haproxy-ingress v0.13.6 (HA-Proxy version 2.3.17-d1c9119 2022/01/11) on k8s 1.19; previous haproxy-ingress versions behaved the same. From time to time, when the haproxy pod restarts (failed liveness probe, node event, or other reasons), it starts without the servers section. In that case the lines listed below are missing from the /etc/haproxy/haproxy.cfg file:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# #
# # TCP SERVICES (legacy, via configmap)
# #
#
listen _tcp_namespace_project-haproxy-stats_111
bind :111
mode tcp
log-format "[%t] %ci:%cp %si:%sp %bi:%bp %fi:%fp %ft %b/%s %Tw/%Tc/%Tt %B %ts %ac/%fc/%bc/%sc/%rc %sq/%bq"
option log-health-checks
option tcp-check
tcp-check send "AAA"
tcp-check expect string "BBB"
# Recognized options podMaxConn: 2
default-server check port 111 inter 2s maxconn 2 observe layer4 error-limit 1 on-error mark-down
server-template project-worker.namespace- 3 project-worker.namespace.svc.cluster.local:111 resolvers kubernetes resolve-prefer ipv4 init-addr none
I'm able to reproduce the problem by manually issuing the 'reboot' command via k8s exec, though it takes several attempts.
k8s rollout restart deployment resolves the problem. Patching the tcp-services ConfigMap also helps: if I modify, append, or delete some lines in the ConfigMap, haproxy reloads the config and starts working according to the ConfigMap content. Reloading manually with /haproxy-reload.sh does not help.
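For reference, the listen section above would come from a tcp-services ConfigMap entry of roughly this shape (ConfigMap name and controller namespace are hypothetical; the value follows the `<namespace>/<service>:<port>` form that haproxy-ingress expects in the ConfigMap passed via `--tcp-services-configmap`):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services            # hypothetical name, passed via --tcp-services-configmap
  namespace: ingress-controller # hypothetical controller namespace
data:
  # exposed port -> namespace/service:service-port
  "111": "namespace/project-haproxy-stats:111"
```

Editing any `data` entry (e.g. with kubectl edit or kubectl patch) is what triggers the reload described above.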
Another difference between broken and correctly working instances. This is from the broken one:
/ # ps axjf | more
PID USER TIME COMMAND
...
34 root 0:00 haproxy -f /etc/haproxy -p /var/run/haproxy/haproxy.pid -D -sf
...
and this is from the healthy one (note the old PID after -sf and the trailing -x):
/ # ps axjf | more
PID USER TIME COMMAND
...
43 root 0:00 haproxy -f /etc/haproxy -p /var/run/haproxy/haproxy.pid -D -sf 31 -x
...
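A quick way to compare the two instances is to dump each process's full command line straight from /proc, since ps can truncate long lines. A minimal sketch; `cmdline` is a hypothetical helper name, not part of the image:

```shell
# Hypothetical helper: print a process's full command line, with the
# NUL separators in /proc/<pid>/cmdline replaced by spaces (the same
# information as the ps listings above).
cmdline() {
  tr '\0' ' ' < "/proc/$1/cmdline"
  echo
}

# On the ingress pod you would pass the haproxy PID (34 or 43 above);
# here the current shell is used as a stand-in example.
cmdline $$
```

On the broken instance the command line ends with a bare `-sf` (no old PID and no `-x <stats socket>`), while the healthy one shows `-sf 31 -x ...`, i.e. a reload that was told which old process to replace and which socket to fetch the listening sockets from.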