Probot-based GitHub app deployed as a Docker container on AKS (Azure Kubernetes Service) cannot receive webhooks


I wrote a GitHub app using Probot and successfully tested it locally using smee.io. I then tried to take the app to production by deploying it as a Docker container to a Kubernetes cluster on AKS (Azure Kubernetes Service) and set up an ingress to route the webhooks from GitHub to the container, but I keep getting a 404 error.

I used this Dockerfile as an example and was able to build the image and push it to ACR (Azure Container Registry) successfully. Below is my Dockerfile.

FROM my-image-registry/node:18-alpine3.16 as build

# set working directory 
WORKDIR /app/probot/

ENV PATH /app/node_modules/.bin:$PATH

# install and cache app dependencies

COPY ./.npmrc ./
COPY ./package.json ./
# COPY ./tsconfig.json ./

RUN npm cache clean --force && npm install --force --loglevel verbose

FROM my-image-registry/node:18-alpine3.16 as app

WORKDIR /app/probot/

COPY --from=build /app/probot/node_modules ./node_modules
COPY . ./

RUN npm run build

EXPOSE 3005

COPY .env ./

#CMD npm run start
CMD [ "npm", "start" ]

As the next step, I deployed the container to Kubernetes using kubectl, and the pod comes up and runs successfully. Below is the configuration I applied with kubectl.

---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: probot-service
  name: my-app-pod
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: probot-service
  template:
    metadata:
      labels:
        app: probot-service
    spec:
      containers:
        - name: probot-service
          image: myacrurl/my-app-docker-image:d1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 3005
      nodeSelector:
        agentpool: apppool

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: probot-service
  name: probot-service
  namespace: my-namespace
spec:
  ports:
    - name: http
      port: 80
      targetPort: 3005
    - name: https
      protocol: TCP
      port: 443
      targetPort: 3005
  selector:
    app: probot-service
  type: ClusterIP

When I run kubectl logs <podname> -n <namespace>, I see the message:

INFO (server): Running Probot v13.0.2 (Node.js: v18.8.0)
INFO (server): Listening on http://localhost:3005

I have set up an ingress to route the webhooks to the container. Below is my Ingress configuration:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: probot-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
  - host: probot.mydomain.com
    http:
      paths:
      - pathType: Prefix
        path: /probot(/|$)(.*)
        backend:
          service:
            name: probot-service
            port:
              number: 80

I have tried hardcoding my app ID, webhook secret, installation ID, and the other required parameters for my Probot app. I have tried passing them as environment variables, and I have tried copying the .env file into the Docker image at build time, but nothing seems to work.
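
Roughly, the environment variable attempt looked like this in the Deployment spec (a sketch with placeholder values; the secret name my-probot-secrets is just an example):

# Excerpt from the Deployment's container spec (placeholder values).
# APP_ID, WEBHOOK_SECRET and PRIVATE_KEY are the standard Probot settings.
containers:
  - name: probot-service
    image: myacrurl/my-app-docker-image:d1.0
    env:
      - name: APP_ID
        value: "123456"
      - name: WEBHOOK_SECRET
        valueFrom:
          secretKeyRef:
            name: my-probot-secrets
            key: webhook-secret
      - name: PRIVATE_KEY
        valueFrom:
          secretKeyRef:
            name: my-probot-secrets
            key: private-key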

What could I be doing wrong? Is it even possible to deploy Probot apps on AKS?

1 Answer

Answer by Joey Chen:

One key part of the issue is the annotation kubernetes.io/ingress.class: nginx.
This annotation is deprecated; use spec.ingressClassName: nginx instead. Check the official documentation for a demo Ingress YAML.

Note: ingress-nginx only picks up an Ingress that references its ingress class, so this is one of the reasons it is not working.
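
Applied to the Ingress from the question, the change looks roughly like this (a sketch; keep your own host, path and rewrite annotations):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: probot-ingress
  annotations:
    # deprecated kubernetes.io/ingress.class annotation removed
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  ingressClassName: nginx   # replaces the deprecated annotation
  rules:
  - host: probot.mydomain.com
    http:
      paths:
      - pathType: Prefix
        path: /probot(/|$)(.*)
        backend:
          service:
            name: probot-service
            port:
              number: 80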


The second issue is this log line: INFO (server): Listening on http://localhost:3005.
Put simply: the app should listen on 0.0.0.0 instead.

An example that demonstrates why, using nginx:

2c2d4a84d354:/# kubectl get po -o wide
NAME                      READY STATUS   RESTARTS AGE   IP           NODE                            NOMINATED NODE READINESS GATES
my-nginx-79b55879bb-7kzgz 1/1   Running  0        3m10s 198.18.4.15  aks-userpool-123456-vmss000006  <none>         <none>
my-nginx-79b55879bb-mqv7d 1/1   Running  0        116s  198.18.1.244 aks-userpool-123456-vmss000000  <none>         <none>

Go inside the Pod my-nginx-79b55879bb-7kzgz and configure the nginx conf like below:

listen       80;
listen  [::]:80;
listen       localhost:8000;
server_name  localhost;

Check the listen status inside the Pod:

root@my-nginx-79b55879bb-7kzgz:/# lsof -i:80 && lsof -i:8000
COMMAND PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
nginx     1 root    6u  IPv4 4425979      0t0  TCP *:80 (LISTEN)
nginx     1 root    7u  IPv6 4425980      0t0  TCP *:80 (LISTEN)
COMMAND PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
nginx     1 root   13u  IPv6 4462662      0t0  TCP localhost:8000 (LISTEN)
nginx     1 root   14u  IPv4 4462663      0t0  TCP localhost:8000 (LISTEN)

Let's go inside the other Pod and curl this Pod:

root@my-nginx-79b55879bb-mqv7d:/# curl 198.18.4.15:8000 --head
curl: (7) Failed to connect to 198.18.4.15 port 8000 after 1 ms: Couldn't connect to server
root@my-nginx-79b55879bb-mqv7d:/# curl 198.18.4.15:80 --head
HTTP/1.1 200 OK
Server: nginx/1.25.4
Date: Fri, 01 Mar 2024 09:01:26 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 14 Feb 2024 16:03:00 GMT
Connection: keep-alive
ETag: "65cce434-267"
Accept-Ranges: bytes

In conclusion, binding the listener to localhost makes your backend unreachable from outside the Pod.
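
For the Probot app in the question, a minimal sketch of the fix on the Deployment side, assuming the app is started via the probot CLI (npm start), which reads HOST and PORT from the environment (check the Probot docs for your version):

# Excerpt from the Deployment's container spec
containers:
  - name: probot-service
    image: myacrurl/my-app-docker-image:d1.0
    imagePullPolicy: Always
    ports:
      - containerPort: 3005
    env:
      - name: HOST
        value: "0.0.0.0"   # bind to all interfaces instead of the default localhost
      - name: PORT
        value: "3005"

After redeploying, the startup log should no longer report Listening on http://localhost:3005, and the Service and Ingress should be able to reach the container on port 3005.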