EKS with VPC CNI: intermittent connection timeouts after applying NetworkPolicy


In my EKS cluster, I have the following deployed:

RabbitMQ Cluster - namespace: queue
Application - namespace: myapp

  • backend-server
  • backend-worker
  • nginx-server

I'm using Celery with RabbitMQ as the broker. Out of the box this setup works well, with no errors in the logs.

backend-server exposes containerPort: 5000 and there's a Service pointing to it.

I've installed the AWS VPC CNI addon with network policy enforcement enabled so I can create NetworkPolicies. For a test I applied the below:

---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
    - Ingress
  ingress: []
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: backend-server
spec:
  podSelector:
    matchLabels:
      app: backend-server
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: nginx-server
      ports:
        - port: 5000

This should not block anything between backend-worker and the RabbitMQ cluster. RabbitMQ is still reachable, but I'm getting the logs below in backend-worker every ~5 minutes:

2023-12-21 08:02:48,487 - celery.worker.consumer.gossip - INFO - MainProcess - CORE - None - missed heartbeat from celery@worker-7b5868447c-fxv9k
2023-12-21 08:07:53,599 - celery.worker.consumer.gossip - INFO - MainProcess - CORE - None - missed heartbeat from celery@worker-7b5868447c-fxv9k

When I remove the default-deny policy, the log goes away. Am I missing something here?
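For reference, my reading of the two policies: default-deny's empty matchLabels selects every pod in the namespace, so backend-worker also loses all ingress; only backend-server gets an explicit allow. If the worker pods needed to accept traffic from their peers in the same namespace, I'd expect to need an extra allow policy along these lines (a sketch only; the policy name and the app: backend-worker label are my assumptions, adjust to your manifests):

```yaml
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: backend-worker-allow-same-ns   # name is my own choice
spec:
  podSelector:
    matchLabels:
      app: backend-worker              # assumes workers carry this label
  ingress:
    - from:
        - podSelector: {}              # any pod in the same namespace
```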
