Istio ingress gateway: domain name and port forwarding


I have set up an Istio service mesh. It works as I want so far, but from outside I can only access it with the port number, like http://www.mytest.com:41333. What do I have to do to forward 80 to 41333 so that I can access it with http://www.mytest.com?

Here is my Gateway:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mytest-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "www.mytest.com"

Not sure what to do...

I assume your Istio ingress gateway service type is NodePort. If the istio-ingressgateway service is a NodePort, then you have to use http://www.mytest.com:41333.

If you want to use http://www.mytest.com, then you would have to change the service type to LoadBalancer.

You can check whether your Istio ingress gateway is a NodePort with

kubectl get svc -n istio-system 

And check the TYPE column for the istio-ingressgateway service.
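
For example (illustrative output only; the cluster IP and the other ports below are made up, while 41333 comes from your URL), a NodePort gateway would look something like this:

NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                                      AGE
istio-ingressgateway   NodePort   10.96.132.10   <none>        15021:31622/TCP,80:41333/TCP,443:31943/TCP   5d

The 80:41333/TCP mapping is exactly why the site is only reachable as http://www.mytest.com:41333.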

NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting NodeIP:NodePort.

LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
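
If you decide to switch to LoadBalancer, one way is to patch the existing service (just a sketch, assuming the default istio-ingressgateway service in the istio-system namespace):

kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec": {"type": "LoadBalancer"}}'

Provided your environment can actually provision a load balancer, the service should then get an EXTERNAL-IP, and you can point www.mytest.com at it and use plain port 80.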

As mentioned in the Istio documentation:

If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service's node port.
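
For example, the Istio docs look up the host and node port like this (the http2 port name is what the default install uses; adjust it if your install differs):

export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
curl -s -H "Host: www.mytest.com" "http://$INGRESS_HOST:$INGRESS_PORT/"

The curl line is just a quick way to test the gateway without DNS; it sends the Host header your Gateway expects.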


If you use a cloud provider like AWS, you can configure the Istio ingress gateway with an AWS load balancer by adding the appropriate service annotations.

On cloud providers which support external load balancers, setting the type field to LoadBalancer provisions a load balancer for your Service. The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer is published in the Service's .status.loadBalancer field.
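
As a sketch (not an authoritative Istio install snippet), an annotation like the one below on the ingress gateway Service tells the in-tree AWS provider to create an NLB instead of a classic ELB; if you use the AWS Load Balancer Controller the annotations differ, and in practice you would usually set this through your Istio install (e.g. an IstioOperator overlay) rather than editing the Service by hand:

apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  # only the relevant fields are shown; ports and selector omitted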


If it's on premises, like minikube, then you could take a look at MetalLB.

MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols.

Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters. The implementations of Network LB that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.

Bare metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services. Both of these options have significant downsides for production use, which makes bare metal clusters second class citizens in the Kubernetes ecosystem.

MetalLB aims to redress this imbalance by offering a Network LB implementation that integrates with standard network equipment, so that external services on bare metal clusters also “just work” as much as possible.
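
A minimal sketch of a MetalLB Layer 2 configuration (this is the older ConfigMap format; newer MetalLB releases use IPAddressPool and L2Advertisement resources instead, and the address range is just a placeholder for your own network):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # a free range on your LAN

With this in place, the istio-ingressgateway LoadBalancer service gets an EXTERNAL-IP from the pool, and you can point www.mytest.com at that address.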

You can read more about it in the MetalLB documentation: https://metallb.universe.tf/