Session affinity cookie not working anymore (Kubernetes with Nginx ingress)


An upgrade of our Azure AKS Kubernetes environment to version 1.19.3 forced me to also upgrade the Nginx Helm chart (helm.sh/chart) to nginx-ingress-0.7.1. As a result I had to change the Ingress API version to networking.k8s.io/v1, since my DevOps pipeline promoted the deprecated-API warning to an error. However, now my session affinity annotation is ignored and no session cookie is set in the response.
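(Editorial note on the likely cause, an assumption based on the chart name `nginx-ingress-0.7.1` and the `nginx/nginx-ingress` image shown below: the `nginx.ingress.kubernetes.io/*` annotations are understood only by the community **ingress-nginx** controller; NGINX Inc's controller ignores them. For the community controller, cookie affinity is configured roughly like this, where the cookie name `route` is just an illustrative choice:)

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"      # hypothetical cookie name
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"  # seconds
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
```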

I am desperately renaming things and trying suggestions from unrelated blog posts to somehow fix the issue.

Any help would be really appreciated.

My current Nginx deployment YAML (I have removed the status and managedFields sections for readability):

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-ingress-infra-nginx-ingress
  namespace: ingress-infra 
  labels:
    app.kubernetes.io/instance: nginx-ingress-infra
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nginx-ingress-infra-nginx-ingress
    helm.sh/chart: nginx-ingress-0.7.1
  annotations:
    deployment.kubernetes.io/revision: '1'
    meta.helm.sh/release-name: nginx-ingress-infra
    meta.helm.sh/release-namespace: ingress-infra
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-ingress-infra-nginx-ingress
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-ingress-infra-nginx-ingress
      annotations:
        prometheus.io/port: '9113'
        prometheus.io/scrape: 'true'
    spec:
      containers:
        - name: nginx-ingress-infra-nginx-ingress
          image: 'nginx/nginx-ingress:1.9.1'
          args:
            - '-nginx-plus=false'
            - '-nginx-reload-timeout=0'
            - '-enable-app-protect=false'
            - >-
              -nginx-configmaps=$(POD_NAMESPACE)/nginx-ingress-infra-nginx-ingress
            - >-
              -default-server-tls-secret=$(POD_NAMESPACE)/nginx-ingress-infra-nginx-ingress-default-server-secret
            - '-ingress-class=infra'
            - '-health-status=false'
            - '-health-status-uri=/nginx-health'
            - '-nginx-debug=false'
            - '-v=1'
            - '-nginx-status=true'
            - '-nginx-status-port=8080'
            - '-nginx-status-allow-cidrs=127.0.0.1'
            - '-report-ingress-status'
            - '-external-service=nginx-ingress-infra-nginx-ingress'
            - '-enable-leader-election=true'
            - >-
              -leader-election-lock-name=nginx-ingress-infra-nginx-ingress-leader-election
            - '-enable-prometheus-metrics=true'
            - '-prometheus-metrics-listen-port=9113'
            - '-enable-custom-resources=true'
            - '-enable-tls-passthrough=false'
            - '-enable-snippets=false'
            - '-ready-status=true'
            - '-ready-status-port=8081'
            - '-enable-latency-metrics=false'
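(Editorial note: the `nginx/nginx-ingress` image above is NGINX Inc's controller, which, as far as I know, implements cookie-based session persistence only with NGINX Plus, via its own `nginx.com/sticky-cookie-services` annotation. A sketch of that annotation, using the `account` service name from the question:)

```yaml
metadata:
  annotations:
    # NGINX Plus only; ignored when the controller runs with -nginx-plus=false,
    # as in the args above
    nginx.com/sticky-cookie-services: "serviceName=account srv_id expires=1h path=/"
```

With `-nginx-plus=false`, cookie affinity is not available from this controller at all, which would explain why no cookie is ever set.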

My Ingress configuration for the service named "account":

kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
  name: account
  namespace: infra
  resourceVersion: '194790'
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    kubernetes.io/ingress.class: infra
    meta.helm.sh/release-name: infra
    meta.helm.sh/release-namespace: infra
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/proxy-buffer-size: 128k
    nginx.ingress.kubernetes.io/proxy-buffering: 'on'
    nginx.ingress.kubernetes.io/proxy-buffers-number: '4'
spec:
  tls:
    - hosts:
        - account.infra.mydomain.com
      secretName: my-default-cert  # self-signed certificate with CN=account.infra.mydomain.com
  rules:
    - host: account.infra.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: account
              servicePort: 80
status:
  loadBalancer:
    ingress:
      - ip: 123.123.123.123  # redacted
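(Editorial note: the manifest above still declares `networking.k8s.io/v1beta1` and uses the v1beta1 `serviceName`/`servicePort` backend form, even though the question says the API version was moved to v1. Under `networking.k8s.io/v1` the backend moves under a `service:` key; a sketch of the equivalent rule, using the names from the manifest above:)

```yaml
# networking.k8s.io/v1 backend shape
paths:
  - path: /
    pathType: Prefix
    backend:
      service:
        name: account
        port:
          number: 80
```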

My account service YAML:

kind: Service
apiVersion: v1
metadata:
  name: account
  namespace: infra
  labels:
    app.kubernetes.io/instance: infra
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: account
    app.kubernetes.io/version: latest
    helm.sh/chart: account-0.1.0
  annotations:
    meta.helm.sh/release-name: infra
    meta.helm.sh/release-namespace: infra
spec:
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app.kubernetes.io/instance: infra
    app.kubernetes.io/name: account
  clusterIP: 10.0.242.212
  type: ClusterIP
  sessionAffinity: ClientIP  # just tried adding this, but it does not help either; behind an ingress, kube-proxy sees the controller pod's IP rather than the real client IP, so ClientIP affinity cannot provide per-client stickiness here
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
status:
  loadBalancer: {}
Best answer (by Drain):

OK, the issue was not related to any of the configuration shown above. The debug logs of the nginx pods were full of error messages regarding the kube-control namespaces. I removed the Nginx Helm chart completely and used the repository suggested by Microsoft:

https://learn.microsoft.com/en-us/azure/aks/ingress-own-tls

# Create a namespace for your ingress resources
kubectl create namespace ingress-basic

# Add the ingress-nginx repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# Use Helm to deploy an NGINX ingress controller
helm install nginx-ingress ingress-nginx/ingress-nginx \
    --namespace ingress-basic \
    --set controller.replicaCount=2 \
    --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
    --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux
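(Editorial note: with the community controller installed as above, a minimal v1 Ingress for the `account` service with cookie affinity might look like the sketch below; the host and secret names are taken from the question, the cookie name `route` is assumed, and `nginx` is the community chart's default ingress class:)

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: account
  namespace: infra
  annotations:
    kubernetes.io/ingress.class: nginx            # default class of the community chart
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-name: route   # assumed cookie name
spec:
  tls:
    - hosts:
        - account.infra.mydomain.com
      secretName: my-default-cert
  rules:
    - host: account.infra.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: account
                port:
                  number: 80
```

Note also that the `beta.kubernetes.io/os` node label used in the Microsoft snippet is deprecated; on newer clusters the supported label is `kubernetes.io/os`.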