How to expose a NATS server externally


I have deployed NATS (https://nats.io/) into my Kubernetes cluster which is running on AWS and I am trying to expose this service externally.

These are the current details of my nats service.

NAME   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)
nats   ClusterIP   None         None          4222/TCP,6222/TCP,8222/TCP,7777/TCP,7422/TCP,7522/TCP

Currently, the nats service is a ClusterIP service. When I try to patch it to become a LoadBalancer service with this command:

kubectl patch svc nats -p '{"spec": {"type": "LoadBalancer"}}'

it leads to this error:

The Service "nats" is invalid: spec.clusterIP: Invalid value: "None": may not be set to 'None' for LoadBalancer services.

Hence, how can I actually expose this NATS service externally? Any guidance provided will be greatly appreciated.


There are 2 best solutions below


You probably noticed that your .spec.clusterIP is set to None. Setting it to None makes the service headless, and that is exactly why you are unable to patch it.

Headless Services are used for service discovery, and your nats service exists for exactly that purpose. With a headless service, instead of returning a single DNS A record, the DNS server returns multiple A records for the service, each pointing to the IP of an individual pod backing it. So a simple DNS A record lookup gives you the IPs of all of the pods that are part of the service.
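If you want to see this in action, one quick check (assuming the chart was installed into the default namespace) is to run a throwaway pod and query the cluster DNS:

    # run a temporary pod and resolve the headless service;
    # one A record per nats pod should come back
    kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- \
      nslookup nats.default.svc.cluster.local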

In essence, the nats service is used for cluster advertising:

        # this is a snippet from your StatefulSet manifest (you can generate it using 'helm template')
        - name: CLUSTER_ADVERTISE
          value: $(POD_NAME).RELEASE-NAME.$(POD_NAMESPACE).svc

If there exists a headless service in the same namespace as the pod and with the same name as the subdomain, the cluster's DNS Server also returns an A or AAAA record for the Pod's fully qualified hostname. For example, given a Pod with the hostname set to "busybox-1" and the subdomain set to "default-subdomain", and a headless Service named "default-subdomain" in the same namespace, the pod will see its own FQDN as "busybox-1.default-subdomain.my-namespace.svc.cluster-domain.example". DNS serves an A or AAAA record at that name, pointing to the Pod's IP. Both pods "busybox1" and "busybox2" can have their distinct A or AAAA records.

Taken from services-networking/dns-pod-service.
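Mapping that onto this deployment (names assumed here: release nats, namespace default), each pod of the StatefulSet gets its own resolvable record, for example:

    nats-0.nats.default.svc.cluster.local
    nats-1.nats.default.svc.cluster.local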

Finally, instead of patching this service, you should create a new Service of type LoadBalancer.
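A minimal sketch of such a Service, assuming the NATS pods carry the label app: nats (verify with kubectl get pods --show-labels) and that only the client port needs to be exposed:

    apiVersion: v1
    kind: Service
    metadata:
      name: nats-external   # hypothetical name, kept separate from the headless 'nats' service
    spec:
      type: LoadBalancer
      selector:
        app: nats           # assumption: adjust to whatever labels your nats pods actually carry
      ports:
        - name: client
          port: 4222
          targetPort: 4222

After kubectl apply -f, the AWS cloud controller should provision an ELB, and its hostname will show up in the EXTERNAL-IP column of kubectl get svc nats-external.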




When I started working with NATS I had a similar issue. For me, the best and easiest solution was to do port-forwarding:

kubectl port-forward service/nats 4222:4222

After doing this, you should be able to run:

nats server ping -s nats://localhost:4222
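
With the port-forward still running, you can also do a quick publish/subscribe round trip through the tunnel (the subject name greetings is just an example):

    # subscribe in one terminal
    nats sub -s nats://localhost:4222 greetings

    # publish from another terminal
    nats pub -s nats://localhost:4222 greetings "hello"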