How to access a Kubernetes service externally on a bare-metal install


I'm trying to build a bare-metal k8s cluster to provide some services, and I need to expose them on TCP port 80 and UDP port 69 (accessible from outside the k8s cluster). I've set the cluster up using kubeadm and it's running Ubuntu 16.04. How do I access the services externally? I've been trying to use load balancers and Ingress, but I'm having no luck since I'm not using an external load balancer (local rather than AWS etc.)

An example of what I'm trying to do can be found here but it's using GCE.

Thanks


There are 3 best solutions below

If I were going to do this on my home network, I'd do it like this:
Configure two port forwarding rules on my router to redirect traffic to an nginx box acting as an L4 load balancer.

So if my router's IP was 1.2.3.4 and my custom L4 nginx LB was 192.168.1.200,
then I'd tell my router to port forward:
1.2.3.4:80 --> 192.168.1.200:80
1.2.3.4:443 --> 192.168.1.200:443

I'd follow https://kubernetes.github.io/ingress-nginx/deploy/
and deploy most of what's in the generic cloud ingress controller manifests. That should create an ingress controller pod, plus an L7 nginx LB deployment and service in the cluster, and expose them on NodePorts, so you'd have, say, NodePort 32080 and 32443 (they would actually be random, but this is easier to follow).
(Since you're working on bare metal, I don't believe it would be able to auto-provision and configure the L4 load balancer for you.)


I'd then manually configure the L4 load balancer to balance traffic coming in on:
port 80 ---> NodePort 32080
port 443 ---> NodePort 32443
So between that big picture of what to do, the following link, and the nginx sketch below it, you should be good.
https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
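
For illustration, here's a minimal sketch of that L4 piece on the 192.168.1.200 box using nginx's stream module (the node IPs 192.168.1.10/192.168.1.11 are made up, and the NodePorts are just the example values from above):

# In nginx.conf on the L4 LB box, alongside the usual events/http blocks
stream {
    # Worker nodes exposing the ingress controller's NodePorts
    upstream ingress_http {
        server 192.168.1.10:32080;
        server 192.168.1.11:32080;
    }
    upstream ingress_https {
        server 192.168.1.10:32443;
        server 192.168.1.11:32443;
    }
    server {
        listen 80;
        proxy_pass ingress_http;    # plain TCP pass-through to the in-cluster L7 nginx
    }
    server {
        listen 443;
        proxy_pass ingress_https;   # TLS still terminates at the ingress controller, not here
    }
}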

(Btw this will let you continue to configure your ingress with the ingress controller)

Note: I plan to set up a bare-metal cluster in my home closet in a few months, so let me know how it goes!

If you have just one node, deploy the ingress controller as a DaemonSet with host port 80. Do not deploy a Service for it.
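
A skeletal sketch of that, assuming the stock nginx-ingress-controller image (the tag is just an example); the ServiceAccount, RBAC, ConfigMaps, and controller args are omitted here and should be taken from the upstream ingress-nginx manifests:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
          hostPort: 80     # the node's port 80 goes straight to the controller pod
        - name: https
          containerPort: 443
          hostPort: 443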

If you have multiple nodes: with cloud providers, a load balancer is a construct outside the cluster that's basically an HA proxy to each node running pods of your service on some port(s). You can do this kind of thing manually: for any service you want to expose, set its type to NodePort with some port in the allowed range (somewhere in the 30000s), and spin up another VM with a TCP balancer (such as nginx) pointing at all your nodes on that port. You'll be limited to running as many pods of that service as you have nodes.
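
As a concrete example of the NodePort half of that, assuming a hypothetical Deployment named my-app serving on container port 8080:

kubectl expose deployment my-app --type=NodePort --port=80 --target-port=8080
kubectl get svc my-app    # note the allocated node port, e.g. 80:31234/TCP

The external TCP balancer would then point at <each-node-ip>:31234.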

Service with NodePort

Create a Service with type NodePort. Such a Service listens on a TCP/UDP port in the range 30000-32767 on every node. By default, you cannot simply choose to expose a Service on port 80 on your nodes.

kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: {SERVICE_PORT}
    targetPort: {POD_PORT}
    nodePort: 31000
  - protocol: UDP
    port: {SERVICE_PORT}
    targetPort: {POD_PORT}
    nodePort: 32000
  type: NodePort
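
With that in place, the Service is reachable on every node's IP at the nodePort, for example (192.168.1.10 being a hypothetical node address):

curl http://192.168.1.10:31000/     # TCP nodePort from the manifest above
nc -u 192.168.1.10 32000            # UDP nodePort (e.g. a TFTP-style service)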

The container image gcr.io/google_containers/proxy-to-service:v2 is a very small container that will do port-forwarding for you. You can use it to forward a pod port or a host port to a service. Pods can choose any port or host port, and are not limited in the same way Services are.

apiVersion: v1
kind: Pod
metadata:
  name: dns-proxy
spec:
  containers:
  - name: proxy-udp
    image: gcr.io/google_containers/proxy-to-service:v2
    args: [ "udp", "53", "kube-dns.default", "1" ]
    ports:
    - name: udp
      protocol: UDP
      containerPort: 53
      hostPort: 53
  - name: proxy-tcp
    image: gcr.io/google_containers/proxy-to-service:v2
    args: [ "tcp", "53", "kube-dns.default" ]
    ports:
    - name: tcp
      protocol: TCP
      containerPort: 53
      hostPort: 53
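
Applied to the question's UDP port 69, the same pattern might look like this (tftp-service is a hypothetical Service name):

apiVersion: v1
kind: Pod
metadata:
  name: tftp-proxy
spec:
  containers:
  - name: proxy-udp
    image: gcr.io/google_containers/proxy-to-service:v2
    args: [ "udp", "69", "tftp-service.default", "1" ]
    ports:
    - name: udp
      protocol: UDP
      containerPort: 69
      hostPort: 69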

Ingress

If there are multiple services sharing the same TCP port with different hosts/paths, deploy the NGINX Ingress Controller, which listens on HTTP 80 and HTTPS 443.

Create an Ingress that forwards the traffic to the specified services.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
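
If the ingress controller is exposed the way the first answer describes (HTTP on a NodePort such as 32080, which will really be a random port), the rule above can be tested with something like this, using a hypothetical node IP:

curl http://192.168.1.10:32080/testpath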