iptables rules to keep GitHub Actions from breaking (chains default to DROP)


Assuming my iptables rules default to DROP on the INPUT and OUTPUT chains, what is the bare minimum set of rules that I must add to my chains to prevent a script running in GitHub Actions from stalling indefinitely?

I'm using (free) GitHub Actions for my open-source application's CI/CD infrastructure. When I push changes to github.com, it automatically spins up an Ubuntu 18.04 Linux server in Microsoft's cloud that checks out my repo and executes a bash script to build my application.

For security reasons, early on in my build script I install and set up some very restrictive iptables rules that default to DROP on the INPUT and OUTPUT chains. I poke a hole in the firewall for 127.0.0.1 and for RELATED/ESTABLISHED traffic on INPUT, and only permit the _apt user to send traffic through OUTPUT.

This works great when I run the build script in a docker container on my local system. But, as I just learned, when it runs in GitHub Actions it stalls indefinitely. Clearly the instance itself needs to be able to communicate out to GitHub's servers in order to finish, and I appear to have broken that.

So the question is: what -j ACCEPT rules should I add to my iptables INPUT and OUTPUT chains to permit only the bare necessities for GitHub Actions executions to proceed as usual?

For reference, here's the snippet from my build script that sets up my firewall:

##################
# SETUP IPTABLES #
##################

# We setup iptables so that only the apt user (and therefore the apt command)
# can access the internet. We don't want insecure tools like `pip` to download
# unsafe code from the internet.

${SUDO} iptables-save > /tmp/iptables-save.`date "+%Y%m%d_%H%M%S"`
${SUDO} iptables -A INPUT -i lo -j ACCEPT
${SUDO} iptables -A INPUT -s 127.0.0.1/32 -j DROP
${SUDO} iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
${SUDO} iptables -A INPUT -j DROP
${SUDO} iptables -A OUTPUT -s 127.0.0.1/32 -d 127.0.0.1/32 -j ACCEPT
${SUDO} iptables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
${SUDO} iptables -A OUTPUT -m owner --uid-owner 100 -j ACCEPT # apt uid = 100
${SUDO} iptables -A OUTPUT -j DROP

${SUDO} ip6tables-save > /tmp/ip6tables-save.`date "+%Y%m%d_%H%M%S"`
${SUDO} ip6tables -A INPUT -i lo -j ACCEPT
${SUDO} ip6tables -A INPUT -s ::1/128 -j DROP
${SUDO} ip6tables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
${SUDO} ip6tables -A INPUT -j DROP
${SUDO} ip6tables -A OUTPUT -s ::1/128 -d ::1/128 -j ACCEPT
${SUDO} ip6tables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
${SUDO} ip6tables -A OUTPUT -m owner --uid-owner 100 -j ACCEPT
${SUDO} ip6tables -A OUTPUT -j DROP

# attempt to access the internet as root. If it works, exit 1
# (--max-time keeps curl from hanging for minutes while its packets are dropped)
curl -s --max-time 10 1.1.1.1
if [ $? -eq 0 ]; then
        echo "ERROR: iptables isn't blocking internet access to unsafe tools. You may need to run this as root (and you should do it inside a VM)"
        exit 1
fi
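One fragility worth noting: the rules above hardcode the _apt UID as 100, but that UID is not guaranteed to be the same across distros or images. A minimal sketch of resolving it at runtime instead (the helper name `uid_of` is my own, not from the script above):

```shell
# Look up a user's UID at runtime instead of hardcoding it, falling back
# to a default if the user doesn't exist on this system.
uid_of() {
  # print the UID of user $1; if that user doesn't exist, print $2
  id -u "$1" 2>/dev/null || echo "$2"
}

APT_UID=$(uid_of _apt 100)
echo "${APT_UID}"

# then, in the firewall setup:
#   ${SUDO} iptables -A OUTPUT -m owner --uid-owner "${APT_UID}" -j ACCEPT
```
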


Best Answer

This can be achieved by running your build script in a docker container and applying your iptables rules inside that container, which won't affect the host runner's connectivity.

For example, if the script below is executed in a GitHub Actions job (on the Ubuntu 18.04 GitHub-hosted runner), it will run the build script (docker_script.sh) in a Debian docker container that has no internet connectivity, except for the _apt user.

#!/bin/bash
set -x

###################
# INSTALL DEPENDS #
###################

apt-get -y install docker.io

##################
# DOWNLOAD IMAGE #
##################

# At the time of writing, Docker Content Trust is 100% security theater without
# explicitly adding the root public keys to the $HOME/.docker/trust/ directory
#
#  * https://github.com/BusKill/buskill-app/issues/6#issuecomment-700050760
#  * https://security.stackexchange.com/questions/238529/how-to-list-all-of-the-known-root-keys-in-docker-docker-content-trust
#  * https://github.com/docker/cli/issues/2752

docker -D pull debian:stable-slim

#################
# CREATE SCRIPT #
#################

tmpDir=`mktemp -d`
pushd "${tmpDir}"

# quote the heredoc delimiter so ${SUDO} and the `date` backticks expand
# inside the container at runtime, not here on the host while writing the file
cat << 'EOF' > docker_script.sh
#!/bin/bash
set -x

# SETTINGS #
SUDO="" # the container runs as root, and debian:stable-slim ships no sudo

# DEPENDS #
${SUDO} apt-get update
${SUDO} apt-get install -y iptables curl

# IPTABLES #

# We setup iptables so that only the apt user (and therefore the apt command)
# can access the internet. We don't want insecure tools like `pip` to download
# unsafe code from the internet.

${SUDO} iptables-save > /tmp/iptables-save.`date "+%Y%m%d_%H%M%S"`
${SUDO} iptables -A INPUT -i lo -j ACCEPT
${SUDO} iptables -A INPUT -s 127.0.0.1/32 -j DROP
${SUDO} iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
${SUDO} iptables -A INPUT -j DROP
${SUDO} iptables -A OUTPUT -s 127.0.0.1/32 -d 127.0.0.1/32 -j ACCEPT
${SUDO} iptables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
${SUDO} iptables -A OUTPUT -m owner --uid-owner 100 -j ACCEPT # apt uid = 100
${SUDO} iptables -A OUTPUT -j DROP

${SUDO} ip6tables-save > /tmp/ip6tables-save.`date "+%Y%m%d_%H%M%S"`
${SUDO} ip6tables -A INPUT -i lo -j ACCEPT
${SUDO} ip6tables -A INPUT -s ::1/128 -j DROP
${SUDO} ip6tables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
${SUDO} ip6tables -A INPUT -j DROP
${SUDO} ip6tables -A OUTPUT -s ::1/128 -d ::1/128 -j ACCEPT
${SUDO} ip6tables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
${SUDO} ip6tables -A OUTPUT -m owner --uid-owner 100 -j ACCEPT
${SUDO} ip6tables -A OUTPUT -j DROP

# attempt to access the internet as root. If it works, exit 1
# (--max-time keeps curl from hanging for minutes while its packets are dropped)
curl -s --max-time 10 1.1.1.1
if [ $? -eq 0 ]; then
        echo "ERROR: iptables isn't blocking internet access to unsafe tools. You may need to run this as root (and you should do it inside a VM)"
        exit 1
fi

# BUILD #

# ...
# <DO BUILD HERE>
# ...

exit 0
EOF
chmod +x docker_script.sh

##############
# DOCKER RUN #
##############

docker run --rm --cap-add "NET_ADMIN" -v "${tmpDir}:/root/shared_volume" debian:stable-slim /bin/bash -c "cd /root/shared_volume && ./docker_script.sh"

# exit cleanly
exit 0

Note that:

  1. You have to execute the docker run command manually, rather than just specifying container: in the GitHub Actions YAML file, because that is the only way to add the NET_ADMIN capability. See also: How to run script in docker container with additional capabilities (docker exec ... --cap-add ...)

  2. This is a security risk unless you pin the root signing keys before calling docker pull. See also https://security.stackexchange.com/questions/238529/how-to-list-all-of-the-known-root-keys-in-docker-docker-content-trust

  3. The above script should be executed as root. For example, prepend sudo to it in the run: key of the step in your GitHub Actions workflow.
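Putting notes 1 and 3 together, the workflow might look something like the sketch below. The wrapper script name (build.sh), workflow name, and checkout action version are assumptions, not part of the original setup:

```yaml
# .github/workflows/build.yml (sketch)
name: build
on: push
jobs:
  build:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - name: Run the containerized build as root
        # build.sh is the wrapper script shown above, which does the
        # docker pull / docker run --cap-add "NET_ADMIN" itself
        run: sudo ./build.sh
```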

Another Answer

What you're doing isn't going to work reliably, and you should adopt a different solution. For this to work, you would need to know the layout of the network you're operating on and the user under which the relevant GitHub Actions process runs, and GitHub neither documents nor guarantees the consistency of that setup.

As a result, even if you did figure out a solution, GitHub might break it by running your code in a new datacenter or on a different network, by changing the user under which their processes run, or by changing some other attribute your setup relies on.

If you're worried about the code you're running downloading things you don't want, it's better to configure it not to do that, to not run that code in the first place, or to verify that the code you intended is what actually runs. For example, in a Rust program using a C library, I might verify that the binary is dynamically linked against the system library, rather than having had Cargo build its own copy, if that's a concern to me. If you only want to use system packages, you could point any per-language package managers at localhost for their mirrors, which would fail if they tried to reach the Internet, and you could even test for that.
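The dynamic-linking check suggested above can be scripted. A minimal sketch, assuming a Linux system with ldd available; the binary path and library name in the usage comment are illustrative placeholders:

```shell
# Fail the build if a binary is NOT dynamically linked against a given
# system library (e.g. to catch a build tool having compiled its own copy).
requires_system_lib() {
  binary="$1"
  libname="$2"
  if ldd "$binary" 2>/dev/null | grep -q "$libname"; then
    echo "OK: $binary links against $libname"
    return 0
  else
    echo "ERROR: $binary does not link against $libname" >&2
    return 1
  fi
}

# usage (placeholder paths):
#   requires_system_lib ./target/release/myapp libz.so || exit 1
```
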