503 error on HTTP to HTTPS redirect with nginx-ingress when enabling Istio sidecar injection


I am switching from the Kubernetes community module kubernetes/ingress-nginx to the official NGINX module nginxinc/kubernetes-ingress so I can use the CRDs that NGINX Ingress defines (they are not available in the community module).
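
For example, one of the CRDs I want to use is VirtualServer. A minimal sketch of what I am after (the backend Service my-service and its port are hypothetical; the ingress class matches the one I install below):

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: my-service
  namespace: test
spec:
  ingressClassName: nginx-ingress-official-test-internal
  host: <my-public-uri>
  upstreams:
  - name: my-service
    service: my-service   # hypothetical backend Service
    port: 80
  routes:
  - path: /my-service-path/
    action:
      pass: my-service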

I am also using Istio in my existing setup. I have followed the instructions as per the NGINX documentation.

I run this on AWS EKS, and I want to use an NLB as Load Balancer for the ingress controller. This seems to work fine so far.

Now, I am trying to configure HTTP to HTTPS redirection in the Ingress Controller (TLS is terminated at the NLB). Here is my custom-values.yaml file for the NGINX Ingress controller:

controller:
  # https://docs.nginx.com/nginx-ingress-controller/installation/installing-nic/installation-with-helm/#configuration

  replicaCount: 2

  nodeSelector:
    kubernetes.io/os: linux
    usage: ingress-nginx-nodes
    environment: test
    subnet_type: private

  watchNamespace: test

  # Not sure if these should be here or on the pod. When adding them to the pod I was getting errors from patching the
  # admission controllers. Keep an eye out for any issues.
  annotations:
    traffic.sidecar.istio.io/includeInboundPorts: ""
    traffic.sidecar.istio.io/excludeInboundPorts: "80,443"
    traffic.sidecar.istio.io/excludeOutboundIPRanges: "<my-cluster-ip>/16"

  setAsDefaultIngress: false
  ingressClass:
    create: true

  config:
    entries:
      client-max-body-size: "4m"
      proxy-max-body-size: "4m"
#      use-forwarded-headers: "true"
#      redirect-to-https: "true"
#      proxy-protocol: "True"
#      real-ip-header: "proxy_protocol"
#      set-real-ip-from: "0.0.0.0/0"
#      ssl-redirect: "true"       # we use `special` port to control ssl redirection
      ssl-redirect: "false"       # we use `special` port to control ssl redirection
      server-snippet: |
        listen 2443;
        return 308 https://$host$request_uri;

  customPorts:
  # Extra listener used only to issue the 308 redirect from plain HTTP to HTTPS (see server-snippet above)
  - name: "http-special"
    containerPort: 2443
    protocol: TCP

  service:
    externalTrafficPolicy: Local
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      # We are using both annotations for internal; not sure why (one is deprecated and EKS gets confused?), but we have to.
      service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxxxxx
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:eu-west-1:xxxxx:certificate/xx-yy-zz-aa-bb"


    httpPort:
      # Plain HTTP from the NLB (port 80) is sent to the redirect-only listener on 2443
      targetPort: 2443
#      targetPort: 80
    httpsPort:
      # TLS is terminated at the NLB (port 443), so HTTPS traffic hits the controller's plain HTTP port
      targetPort: 80
#      targetPort: 2443

  pod:
    extraLabels:
      # This will inject Istio's Envoy proxy sidecar. It must be a LABEL on the pod, NOT an ANNOTATION!
      sidecar.istio.io/inject: "true"

    annotations:
      # To allow prometheus to scrape
      prometheus.io/scrape: "true"
      prometheus.io/port: "10254"
#      traffic.sidecar.istio.io/includeInboundPorts: ""
#      traffic.sidecar.istio.io/excludeInboundPorts: "80,443"
#      traffic.sidecar.istio.io/excludeOutboundIPRanges: "10.100.0.1/16"


prometheus:
  create: true
  port: 10254
  scheme: http

So basically I have followed a few examples found online: secure HTTPS traffic arriving at the NLB's HTTPS port (443, where TLS is terminated) is forwarded to the controller's HTTP port (80), while plain HTTP traffic arriving at the NLB's HTTP port (80) is forwarded to a special port (2443) on the controller, which just returns a redirect to HTTPS (that redirected request then follows the HTTPS route: NLB port 443 to the controller's port 80).
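
When the redirection works (see below), the flow should look roughly like this (illustrative, with the same placeholder host and path):

curl -sI http://<my-public-uri>/my-service-path/
# NLB :80 -> controller :2443 -> 308 issued by the server-snippet
# HTTP/1.1 308 Permanent Redirect
# Location: https://<my-public-uri>/my-service-path/

curl -sI https://<my-public-uri>/my-service-path/
# NLB :443 (TLS terminated with the ACM cert) -> controller :80 -> backend
# HTTP/1.1 200 OK (assuming the backend serves this path)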

I am installing the nginx-ingress controller with Helm from the chart sources (the pulled repo is stored in raw/nginx-ingress), so that I can manage the CRDs myself as per the NGINX Ingress docs, and then:

helm upgrade --install nginx-ingress-official-test-internal \
  raw/nginx-ingress/. \
  --namespace nginx-ingress-official-test-internal \
  --create-namespace \
  --set controller.ingressClass.name=nginx-ingress-official-test-internal \
  -f custom-values.yaml
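
For completeness, the CRD step I run before the upgrade looks roughly like this (assuming the pulled chart in raw/nginx-ingress ships a crds/ directory, as the NGINX Ingress docs describe):

kubectl apply -f raw/nginx-ingress/crds/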

When I have NOT injected the Istio sidecar into the NGINX Ingress deployment, the redirection works great: a call to http://<my-public-uri>/my-service-path/ is redirected to https://<my-public-uri>/my-service-path/.

But when I enable the sidecar injection, I get this 503 error when I try to load http://<my-public-uri>/my-service-path/:

upstream connect error or disconnect/reset before headers. reset reason: remote connection failure, transport failure reason: delayed connect error: 111

When I load https://<my-public-uri>/my-service-path/, everything loads fine.

I cannot find any trace in the logs of the nginx-ingress pods, nor in the Envoy proxy logs, nor in the target deployment's pods. I have tried various things in the nginx-ingress configuration (some are commented out in the .yaml file above), mainly manipulating those headers and settings, but without success. What is most painful is that I am unable to trace anything in any of the logs.
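
For reference, this is roughly how I am looking for traces (pod names are placeholders; the container names nginx-ingress and istio-proxy are what I assume the chart and Istio use):

# NGINX Ingress controller logs
kubectl -n nginx-ingress-official-test-internal logs <nginx-ingress-pod> -c nginx-ingress

# Envoy sidecar logs on the same pod
kubectl -n nginx-ingress-official-test-internal logs <nginx-ingress-pod> -c istio-proxy

# Target deployment's pods
kubectl -n test logs <my-service-pod>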

I have come across various links about this error on Istio (I am convinced that Istio is the culprit here, as everything seems to work OK without it), and a few others, but none of those seem close to what I need here.

I've been fighting with this for a few days now. I guess in the end I could just disable the HTTP endpoints, but ideally I'd like to sort this out properly, as I'd have to use the same setup (or rather another instance of the same module) in a public-facing app, where I'd like HTTP to HTTPS redirection to work for better UX.

Does someone have an idea or clue why this happens?
