Intermittent connection from Nginx in Minikube


I'm experiencing this error from Nginx, running in minikube:

    2024/03/21 07:07:05 [error] 22#22: *9 connect() failed (111: Connection refused) while connecting to upstream, client: 10.244.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://10.101.14.185:80/", host: "192.168.49.2:30590"
    10.244.0.1 - - [21/Mar/2024:07:07:05 +0000] "GET / HTTP/1.1" 502 497 "-" "curl/8.3.0" "-"
    10.244.0.1 - - [21/Mar/2024:07:07:06 +0000] "GET / HTTP/1.1" 307 0 "-" "curl/8.3.0" "-"
    10.244.0.1 - - [21/Mar/2024:07:07:07 +0000] "GET / HTTP/1.1" 307 0 "-" "curl/8.3.0" "-"
    2024/03/21 07:07:08 [error] 22#22: *15 connect() failed (111: Connection refused) while connecting to upstream, client: 10.244.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://10.101.14.185:80/", host: "192.168.49.2:30590"
    10.244.0.1 - - [21/Mar/2024:07:07:08 +0000] "GET / HTTP/1.1" 502 497 "-" "curl/8.3.0" "-"

My minikube setup

NAME                                    READY   STATUS    RESTARTS   AGE
pod/abc-deployment-7bf886b77d-fqnn4     1/1     Running   0          15m
pod/abc-deployment-7bf886b77d-xg7n4     1/1     Running   0          15m
pod/frontend-9b5d57d49-2gmlc            1/1     Running   0          6m33s
pod/mongodb-deployment-0                1/1     Running   0          15m

NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/abc-service        ClusterIP      10.101.14.185   <none>        80/TCP         15m
service/frontend           LoadBalancer   10.99.135.28    <pending>     80:30590/TCP   15m
service/kubernetes         ClusterIP      10.96.0.1       <none>        443/TCP        18m
service/mongodb-service2   ClusterIP      10.96.89.248    <none>        27018/TCP      15m

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/abc-deployment     2/2     2            2           15m
deployment.apps/frontend           1/1     1            1           6m33s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/abc-deployment-7bf886b77d     2         2         2       15m
replicaset.apps/frontend-9b5d57d49            1         1         1       6m33s

NAME                                  READY   AGE
statefulset.apps/mongodb-deployment   1/1     15m

My nginx setup: default.conf

server {
    listen       80;
    listen  [::]:80;
    server_name  127.0.0.1;

    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        proxy_pass http://abc-service;
        proxy_set_header Connection keep-alive;
        #root   /usr/share/nginx/html;
        #index  index.html index.htm;
    }
}
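
As an aside, I'm not sure `proxy_set_header Connection keep-alive;` does what I want here: from the nginx docs, upstream keepalive normally requires an `upstream` block with the `keepalive` directive, plus `proxy_http_version 1.1` and an empty `Connection` header (by default nginx proxies with HTTP/1.0 and `Connection: close`). A sketch of that variant, assuming the same `abc-service` name:

```nginx
upstream abc_backend {
    server abc-service:80;   # resolved once via kube-dns when nginx starts
    keepalive 16;            # number of idle keepalive connections to cache
}

server {
    listen       80;
    listen  [::]:80;
    server_name  127.0.0.1;

    location / {
        proxy_pass http://abc_backend;
        proxy_http_version 1.1;          # keepalive needs HTTP/1.1 upstream
        proxy_set_header Connection "";  # clear the Connection header
    }
}
```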

The service URL:

| NAMESPACE |   NAME   | TARGET PORT |            URL            |
|-----------|----------|-------------|---------------------------|
| default   | frontend |          80 | http://192.168.49.2:30590 |
|-----------|----------|-------------|---------------------------|
  Opening service default/frontend in default browser...
  http://192.168.49.2:30590

The frontend pod's /etc/hosts file:

# Kubernetes-managed hosts file.
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.0.9      frontend-9b5d57d49-2gmlc

The abc-service setup:

Name:              abc-service
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=abc
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.101.14.185
IPs:               10.101.14.185
Port:              <unset>  80/TCP
TargetPort:        1740/TCP
Endpoints:         10.244.0.4:1740,10.244.0.5:1740,10.244.0.9:1740
Session Affinity:  None
Events:            <none>

The container image listens on port 1740, but the service exposes it on port 80.
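
To double-check the port mapping and which pods actually back the service, these diagnostic commands can be run against the cluster (assuming the default namespace and the `app=abc` selector shown above):

```shell
# Confirm the service's port -> targetPort mapping and its current endpoints
kubectl describe service abc-service | grep -E 'Port|Endpoints'

# List each pod matching the selector together with its declared containerPort
kubectl get pods -l app=abc \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].containerPort}{"\n"}{end}'
```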

1st: Nginx intermittently loses its connection to abc-service.

2nd: Even though the service load-balances, requests are not spread across both pods; they only reach one pod. Did I misunderstand its setup?

3rd: I only set up 2 replicas, but I got 3 endpoints (10.244.0.4:1740, 10.244.0.5:1740, 10.244.0.9:1740). What is the last one for?
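
Notably, 10.244.0.9 is the frontend pod's own IP (per the hosts file above), so the third endpoint suggests the frontend pod also matches the `app=abc` selector. If so, that could also explain the intermittent 502s: requests the service routes to the frontend pod on port 1740 would be refused. A way to check, assuming the default namespace:

```shell
# Which pod owns the unexpected endpoint IP 10.244.0.9?
kubectl get pods -o wide | grep 10.244.0.9

# Show every pod's labels; if the frontend pod carries app=abc,
# the Service will route to it as well
kubectl get pods --show-labels
```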
