I am running a GKE cluster version 1.17.13-gke.1400.
I have applied the following network policy in my cluster -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
This should block all communication to or from pods in the default namespace. However, it does not, as is evident from this test -
$ kubectl run p1 -it --image google/cloud-sdk
root@p1:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=114 time=1.14 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=114 time=1.21 ms
^C
root@p1:/# curl www.google.com
<!doctype html><html itemscope=" ...
From the docs, it seems like applying this should be pretty straightforward. Any help in understanding what I'm doing wrong, or tips for further troubleshooting, would be appreciated.
Thanks, Nimrod
For Network Policies to take effect, your cluster needs to run a network plugin which also enforces them. Project Calico or Cilium are plugins that do so. This is not the default when creating a cluster!
So first, you should check whether your cluster is set up accordingly, as described in the Google Cloud Network Policies docs. On GKE this is abstracted away behind the --enable-network-policy flag. If it is enabled, you should see some Calico pods in the kube-system namespace:
$ kubectl get pods --namespace=kube-system
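As a rough sketch (the cluster name, zone, and the calico-node label are assumptions - adjust them to your setup), you could check the setting and enable enforcement on an existing cluster like this:
# Check whether network policy enforcement is enabled on the cluster
$ gcloud container clusters describe my-cluster --zone us-central1-a \
    --format='value(networkPolicy.enabled)'

# Enable it: first the add-on, then node-level enforcement
# (the second command recreates the node pools, so expect some disruption)
$ gcloud container clusters update my-cluster --zone us-central1-a \
    --update-addons=NetworkPolicy=ENABLED
$ gcloud container clusters update my-cluster --zone us-central1-a \
    --enable-network-policy

# Afterwards, calico-node pods should show up in kube-system
$ kubectl get pods -n kube-system -l k8s-app=calico-node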
If there is a plugin in place that enforces network policies, you need to make sure the network policy is deployed in the desired namespace - and check that your kubectl run test is executed in that namespace, too. You might have some other namespace configured in your kube context, so your command may not hit the default namespace at all.
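For example (reusing the pod name from your test), you could verify the active namespace and rerun the check pinned to default:
# Show the namespace set in the current kube context (empty means "default")
$ kubectl config view --minify --output 'jsonpath={..namespace}'

# Rerun the test explicitly in the default namespace; with the deny-all
# policy enforced, the curl should time out instead of returning HTML
$ kubectl run p1 -it --rm --namespace=default --image=google/cloud-sdk -- bash
root@p1:/# curl -m 5 www.google.com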