What is the difference in privilege granted to a container in the following two scenarios?
- `sudo docker run -d --privileged --pid=host alpine:3.8 tail -f /dev/null`
- Using Kubernetes:
    apiVersion: v1
    kind: Pod
    metadata:
      name: nsenter-alpine
    spec:
      hostPID: true
      containers:
      - name: nsenter-alpine
        image: alpine:3.8
        resources:
          limits:
            cpu: "500m"
            memory: "200Mi"
          requests:
            cpu: "100m"
            memory: "100Mi"
        command: ["tail"]
        args: ["-f", "/dev/null"]
        securityContext:
          privileged: true
In case 1:

    / # ps -ef | wc -l
    604

In case 2:

    [root@localhost /]# ps -ef | wc -l
    266
Clearly, when a privileged container is instantiated directly with Docker it can see the host's processes, but when it is launched through Kubernetes it sees only a few of them. What is the reason behind this?
Edit:
I see you have `--pid=host` in the `docker run` command and `hostPID: true` in the Kubernetes pod spec. In that case both numbers should be similar if the containers are running on the same host. Check whether the containers are actually running on the same host; Kubernetes might have scheduled the pod to a different node.
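If it is not obvious which machine each container ended up on, a quick check along these lines settles it; a minimal sketch, assuming the pod name from the spec above and that the `docker run` command was issued on the machine you are comparing against:

    # Node the pod was scheduled to (shown in the NODE column)
    kubectl get pod nsenter-alpine -o wide

    # Hostname of the machine where the docker run command was executed
    hostname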
Prev answer:

    sudo docker run -d --privileged --pid=host alpine:3.8 tail -f /dev/null

In the above command you are using the `--pid=host` argument, which runs the container in the host PID namespace, so you are able to view all the processes on the host. You can achieve the same with the `hostPID` option in the pod spec in Kubernetes.

Running a container in privileged mode means the processes in the container are essentially equal to root on the host. By default a container is not allowed to access any devices on the host, but a "privileged" container is given access to all devices on the host.
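As a rough way to see the device-access side of `--privileged` (separate from PID visibility), you could compare what shows up under /dev in an unprivileged and a privileged container; a minimal sketch, assuming Docker and the same alpine:3.8 image are available locally:

    # Unprivileged container: only a small default set of device nodes
    docker run --rm alpine:3.8 sh -c 'ls /dev | wc -l'

    # Privileged container: the host's device nodes appear under /dev
    docker run --rm --privileged alpine:3.8 sh -c 'ls /dev | wc -l'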
The container still runs in its own PID namespace, IPC namespace, network namespace, and so on, so you will not see host processes inside the container even when running in privileged mode. You can use the `hostPID`, `hostNetwork`, and `hostIPC` fields of the pod spec in Kubernetes if you want to run in the host namespaces.
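For example, a pod spec that opts into all three host namespaces (and keeps the privileged security context) might look roughly like this; the pod and container names here are arbitrary:

    apiVersion: v1
    kind: Pod
    metadata:
      name: host-namespaces-demo   # arbitrary name for illustration
    spec:
      hostPID: true        # share the host's PID namespace
      hostNetwork: true    # share the host's network namespace
      hostIPC: true        # share the host's IPC namespace
      containers:
      - name: main
        image: alpine:3.8
        command: ["tail", "-f", "/dev/null"]
        securityContext:
          privileged: true  # host device access, like docker run --privileged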