Accessing Private GKE control plane from a self-hosted GH Actions runner


I am trying to understand how to get a self-hosted GitHub Actions runner to deploy to a private GKE cluster.

So far I got the runner to start and install the desired set of tools, including kubectl. However, whenever I try to execute any command with the CLI, I get timeouts. Apparently it has something to do with networking, but I can't find the answer.

The pipeline looks like:

  deploy:
    name: Deploy
    runs-on: arc-runner-set
    environment: production

    permissions:
      id-token: write
      contents: read
      actions: read

    steps:
      - id: auth
        uses: "google-github-actions/auth@v2"
        with:
          credentials_json: "${{ secrets.GCP_CREDENTIALS }}"

      - uses: azure/setup-kubectl@v3
        with:
          version: "latest"

      - uses: google-github-actions/get-gke-credentials@v2
        with:
          cluster_name: ${{ env.GKE_CLUSTER }}
          location: ${{ env.GKE_ZONE }}

      - id: "get-pods"
        run: "kubectl get pods"
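
One variant worth noting: because the job runs on `arc-runner-set`, the runner pods presumably live inside a Kubernetes cluster already. If that cluster is in the same VPC as the target GKE cluster, the `get-gke-credentials` action can be pointed at the private (internal) endpoint instead of the public one, which sidesteps the authorized-networks check on the public endpoint entirely. A sketch of that step, assuming same-VPC connectivity:

```yaml
      # Sketch, assuming the ARC runner pods can reach the cluster's
      # internal endpoint over the VPC. use_internal_ip is a documented
      # input of get-gke-credentials@v2.
      - uses: google-github-actions/get-gke-credentials@v2
        with:
          cluster_name: ${{ env.GKE_CLUSTER }}
          location: ${{ env.GKE_ZONE }}
          use_internal_ip: true
```

If the runner is outside the VPC, this won't help and the public endpoint plus authorized networks is the relevant path.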

I do have a private Autopilot GKE cluster with global access to the control plane enabled. I am able to run kubectl from localhost (my laptop), since I added my IP to the Authorized Networks section when creating the cluster.
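
For the same reason, the runner's egress IP would also need to be on that list for the public endpoint to accept its connections. A sketch of adding it, where the cluster name, location, and both CIDRs are placeholders (198.51.100.5/32 standing in for the laptop, 203.0.113.7/32 for the runner's NAT egress IP):

```shell
# Hypothetical values throughout; --master-authorized-networks replaces
# the whole list, so the existing entries must be repeated.
gcloud container clusters update my-cluster \
  --location us-central1 \
  --enable-master-authorized-networks \
  --master-authorized-networks 198.51.100.5/32,203.0.113.7/32
```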

Is there something to configure on the cluster or the runner to let the latter access the control plane?

The messages I get at the moment:

E0313 04:58:11.751123     108 memcache.go:265] couldn't get current server API group list: Get "https://34.28.33.63/api?timeout=32s": dial tcp 34.28.33.63:443: i/o timeout
Unable to connect to the server: dial tcp 34.28.33.63:443: i/o timeout
Error: Process completed with exit code 1.

Here 34.28.33.63 is the public endpoint of the cluster, which can be retrieved with gcloud beta container clusters describe.
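
For reference, a sketch of that lookup (cluster name and location are placeholders), printing both the public and the private endpoint so you can see which one kubeconfig ends up pointing at:

```shell
# Hypothetical cluster name/location; prints public and private endpoints.
gcloud container clusters describe my-cluster \
  --location us-central1 \
  --format 'value(endpoint, privateClusterConfig.privateEndpoint)'
```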
