In Azure Kubernetes Service, I am trying to set up a Locust load test; at the moment it consists of 1 master and 2 slave pods. With the default dnsPolicy provided by Kubernetes, the slave pods are able to establish a connection with the master pod, which I confirmed on the Locust web page. But to run the Locust test successfully, the slave pods need to connect to other services, so I had to use a custom dnsPolicy in the slave pods.

Once I apply the custom dnsPolicy to the slave pods, they are no longer able to connect to the master pod. I tried applying the same dnsPolicy the slaves use in the master deployment file as well, but the slave pods are still unable to establish a connection with the master pod.

I am not sure what I am missing here. How can I establish a connection between slave pods that use a custom dnsPolicy and a master pod that uses the default DNS policy provided by Azure Kubernetes Service?

slave deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: slave
  name: slave
spec:
  replicas: 2
  selector:
    matchLabels:
      io.kompose.service: slave
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.service: slave
    spec:
      imagePullSecrets: 
      - name: secret
      containers:
        - args:
            - -f
            - /usr/app/locustfile.py
            - --worker
            - --master-host
            - master
          image: xxxxxxxx/locust-xxxxxx:locust-slave-1.0.2
          name: slave
          resources: {}
          securityContext:
            privileged: true
            capabilities:
              add:
                - NET_ADMIN
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - xx.xx.xx.xx 
          - xx.xx.xx.xx
        searches:
          - xxxxxx
          - locust-xxxx.svc.cluster.local 
          - svc.cluster.local 
          - cluster.local
          - xxxxxxxxxxxxxx.jx.internal.cloudapp.net
        options:
          - name: ndots
            value: "0"
      restartPolicy: Always
status: {}
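
With `dnsPolicy: None`, the pod resolves names only through the nameservers and search domains listed in `dnsConfig`, so resolving the bare name `master` depends entirely on the search list. One way to rule out search-list problems (a sketch, assuming the master Service lives in the `locust-xxxx` namespace shown in the searches above) is to point the worker at the fully qualified Service name instead:

```
# Hypothetical variant of the worker args: a fully qualified Service
# name does not rely on search-domain expansion to resolve.
args:
  - -f
  - /usr/app/locustfile.py
  - --worker
  - --master-host
  - master.locust-xxxx.svc.cluster.local  # assumes a Service named "master" in namespace "locust-xxxx"
```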

master deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: master
  name: master
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: master
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.service: master
    spec:
      imagePullSecrets: 
      - name: secret
      containers:
        - image: xxxxxxxxxxxxx/locust-xxxxxxxxxxx:locust-master-1.0.1
          name: master
          resources: {}
      restartPolicy: Always
status: {}
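
For the name `master` to resolve at all, a Service selecting the master pod has to exist; the Deployment alone does not create a DNS record. A minimal sketch of such a Service (the name and selector are assumptions matching the Deployment above; 5557 is Locust's default master/worker port and 8089 its default web UI port):

```
apiVersion: v1
kind: Service
metadata:
  name: master
spec:
  selector:
    io.kompose.service: master
  ports:
    - name: communication  # master <-> worker traffic
      port: 5557
      targetPort: 5557
    - name: web-ui         # Locust web interface
      port: 8089
      targetPort: 8089
```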

I am new to the networking side of things.

Answer
It was not an issue with Kubernetes. I was able to establish the connection between master and slave with the help of this link: www.github.com/locustio/locust/issues/294. What was missing were environment variables, so I added these to the slave deployment.yaml file:

env:
- name: LOCUST_MODE 
  value: slave 
- name: LOCUST_MASTER 
  value: master
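
In context, the `env` block sits under the worker container spec, alongside the existing `args` and `image` fields:

```
containers:
  - name: slave
    image: xxxxxxxx/locust-xxxxxx:locust-slave-1.0.2
    env:
      - name: LOCUST_MODE
        value: slave
      - name: LOCUST_MASTER
        value: master
```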