Kubernetes Ingress for external service being modified to include namespace in the URL


I'm trying to create an Ingress for an external service in an old Kubernetes installation (v1.16.15), but for some reason, when I apply the manifest, the host is modified to include the namespace of the Ingress.

My manifest:

apiVersion: v1
kind: Endpoints
metadata:
  name: mytest-service
  namespace: test-ns
subsets:
- addresses:
  - ip: X.X.X.X #External service IP
  ports:
  - name: port443
    port: 9443
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: mytest-service
  namespace: test-ns
spec:
  ports:
  - name: port443
    port: 443
    protocol: TCP
    targetPort: 443
  type: ClusterIP
  clusterIP: None
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  name: mytest-ingress
  namespace: test-ns
spec:
  rules:
  - host: mytest.internalurl.env-N.intranet
    http:
      paths:
      - backend:
          serviceName: mytest-service
          servicePort: 443
        path: /

Those three objects are created without issues, but when I check the Ingress, the host includes the namespace:

kubectl get ingress mytest-ingress -n test-ns
NAME            HOSTS                                      ADDRESS   PORTS   AGE
mytest-ingress  mytest.test-ns.internalurl.env-N.intranet  IP1,IP2   80      10m
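
One way to narrow this down (a diagnostic sketch, not from the original question) is to compare the host actually stored in the Ingress spec with the one you applied, and to check whether a mutating admission webhook is registered that could rewrite Ingress objects at creation time:

# Show the host(s) stored in the object's spec
kubectl get ingress mytest-ingress -n test-ns -o jsonpath='{.spec.rules[*].host}'

# List mutating admission webhooks that could have altered the object when it was applied
kubectl get mutatingwebhookconfigurations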

Edit: there are multiple Ingresses with the host "internalurl.env-N.intranet", so this could be some sort of conflict avoidance:

kubectl get ingress -A | grep ' internalurl.env-N.intranet'
kube-system  kube-system.manager  internalurl.env-N.intranet  IP1,IP2  80  539d
otherNS      usermanagement       internalurl.env-N.intranet  IP1,IP2  80  539d
otherNS      component1           internalurl.env-N.intranet  IP1,IP2  80  539d
otherNS      componentN           internalurl.env-N.intranet  IP1,IP2  80  539d

But what I don't understand is that there are other Ingresses with subdomains that do not include the namespace. So, in the end, my question is: what concept am I missing here? Does NGINX have some sort of logic for resolving potential host conflicts?
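
If the host is not being changed at admission time, the rewrite may come from the Ingress controller itself. For the stock ingress-nginx controller, one way to see how hosts are mapped to server blocks is to grep the rendered configuration inside the controller pod (a sketch; the namespace and label used here are assumptions and vary by installation):

# Find the controller pod (namespace/label are typical defaults, adjust to your install)
kubectl get pods -n kube-system -l app.kubernetes.io/name=ingress-nginx

# Dump the server_name entries from the generated NGINX configuration
kubectl exec -n kube-system <controller-pod> -- grep server_name /etc/nginx/nginx.conf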


1 Answer

Answered by iamwillbin:

You may try upgrading your Kubernetes cluster to version 1.19 or later.

Newer versions introduced the stable networking.k8s.io/v1 Ingress API, which replaces the deprecated extensions/v1beta1 API used in your manifest. Note that there is no separate "ExternalIngress" resource in Kubernetes; pointing an Ingress at an external address is still done through a Service (selector-less with Endpoints, as in your manifest, or type ExternalName), and the host you declare in spec.rules is normally published as-is by the stock NGINX controller.
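
For reference, the same rule expressed with the networking.k8s.io/v1 schema would look roughly like this (a sketch assuming the same Service name, namespace, and host as in the question; the annotations are specific to ingress-nginx, and the secure-backends annotation was replaced by backend-protocol in newer controller releases):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" # successor of the removed secure-backends annotation
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  name: mytest-ingress
  namespace: test-ns
spec:
  rules:
  - host: mytest.internalurl.env-N.intranet
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mytest-service
            port:
              number: 443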