Can't add Kubernetes backend pool to outbound rule in Azure Load Balancer created automatically for AKS cluster


I have created an AKS cluster (Kubernetes version 1.24.6) using the Pulumi Azure Classic KubernetesCluster provider, which resulted in a cluster with the following network profile:

{
    "networkProfile": {
        "dnsServiceIp": "10.0.0.10",
        "dockerBridgeCidr": "172.17.0.1/16",
        "ipFamilies": [
            "IPv4"
        ],
        "loadBalancerProfile": {
            "allocatedOutboundPorts": 0,
            "effectiveOutboundIPs": [{
                    "id": "/subscriptions/<subscription_id>/resourceGroups/<rg-id>/providers/Microsoft.Network/publicIPAddresses/<ip-id>",
                    "resourceGroup": "MC_rg-gw-aks-test00e1a208_aks-gw-konsolidator-test_westeurope"
                }
            ],
            "enableMultipleStandardLoadBalancers": null,
            "idleTimeoutInMinutes": 25,
            "managedOutboundIPs": {
                "count": 1,
                "countIpv6": null
            },
            "outboundIPs": null,
            "outboundIpPrefixes": null
        },
        "loadBalancerSku": "Standard",
        "natGatewayProfile": null,
        "networkMode": null,
        "networkPlugin": "azure",
        "networkPolicy": "calico",
        "outboundType": "loadBalancer",
        "podCidr": null,
        "podCidrs": null,
        "serviceCidr": "10.0.0.0/16",
        "serviceCidrs": [
            "10.0.0.0/16"
        ]
    }
}
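For context, here is a minimal sketch of roughly how the cluster is declared with the Pulumi Azure Classic provider in TypeScript; the resource names, location and node pool sizing are placeholders rather than my real values:

import * as azure from "@pulumi/azure";

// Placeholder names, location and node pool sizing; only the networkProfile
// settings mirror the profile shown above.
const rg = new azure.core.ResourceGroup("rg-aks-egress", {
    location: "West Europe",
});

const cluster = new azure.containerservice.KubernetesCluster("aks-egress", {
    resourceGroupName: rg.name,
    location: rg.location,
    dnsPrefix: "aks-egress",
    kubernetesVersion: "1.24.6",
    identity: { type: "SystemAssigned" },
    defaultNodePool: {
        name: "default",
        nodeCount: 2,
        vmSize: "Standard_D2s_v3",
    },
    networkProfile: {
        networkPlugin: "azure",
        networkPolicy: "calico",
        loadBalancerSku: "standard",
        outboundType: "loadBalancer",
        serviceCidr: "10.0.0.0/16",
        dnsServiceIp: "10.0.0.10",
        dockerBridgeCidr: "172.17.0.1/16",
        loadBalancerProfile: {
            managedOutboundIpCount: 1,  // one managed outbound public IP
            idleTimeoutInMinutes: 25,   // requested outbound idle timeout
        },
    },
});

// Outbound IP(s) actually used by the managed load balancer profile.
export const effectiveOutboundIps = cluster.networkProfile.apply(
    np => np?.loadBalancerProfile?.effectiveOutboundIps);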

My main goal was to have a static egress IP for traffic leaving the cluster and to enforce a higher-than-default idleTimeoutInMinutes. The autogenerated Azure Load Balancer has the Standard SKU, and I'm using ingress-nginx as the ingress controller. However, after deploying the cluster and doing some troubleshooting, I noticed that:

  1. The egress IP is different from what I expected (tested by running curl -s checkip.dyndns.org from a pod inside the AKS cluster): it doesn't match any of the IPs in the Frontend IP configuration, and in particular it doesn't match the one listed in networkProfile.effectiveOutboundIPs. Outbound traffic works in general and isn't blocked; only long-running TCP connections without keepalives are dropped silently (no TCP RST), which I verified with requests lasting 5 minutes. Apart from the Load Balancer, I'm not using any other firewalls or network components outside of the AKS cluster.
  2. The Azure Load Balancer has an outbound rule, "aksOutboundRule", created automatically when the AKS cluster was provisioned. It uses one of the frontend IP addresses defined in Frontend IP Configurations (that one is the expected egress IP), but the backend pool assigned to it is empty (0 instances). Moreover, if I try to add the "kubernetes" backend pool to the rule manually, it is greyed out (see screenshot).
     a. When creating a brand-new outbound rule, the "kubernetes" backend pool is likewise greyed out and cannot be selected.
     b. It is also not possible to add IP configurations of the cluster nodes to any other backend pool (new or existing); saving the backend pool fails with a bad request and the following details:
{
    "status": "Failed",
    "error": {
        "code": "MinimumApiVersionNotSpecifiedToSetTheProperty",
        "message": "Specified api-version 2020-12-01 does not meet the minimum required api-version 2022-03-01 to set this property skuOnPublicIPAddressConfiguration.",
        "details": []
    }
}

Does anyone have a clue what I'm missing? What could be the cause of the issues I'm facing?

I've looked into the MS documentation, but I haven't found any explanation of why the kubernetes backend pool may be greyed out or why the mentioned errors are thrown.


There is 1 best solution below


We had this exact same problem. What helped us was to set enable_node_public_ip to false on the node pool. Hope that works for you!
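In the Pulumi Azure Classic provider that maps to enableNodePublicIp on the node pool; here's a minimal sketch (names, location and sizes are placeholders, the relevant bit is enableNodePublicIp: false):

import * as azure from "@pulumi/azure";

// Placeholder names and sizes; the relevant change is enableNodePublicIp: false.
const cluster = new azure.containerservice.KubernetesCluster("aks-egress", {
    resourceGroupName: "my-rg",
    location: "West Europe",
    dnsPrefix: "aks-egress",
    identity: { type: "SystemAssigned" },
    defaultNodePool: {
        name: "default",
        nodeCount: 2,
        vmSize: "Standard_D2s_v3",
        // With instance-level public IPs disabled, node egress goes through
        // the load balancer's outbound rule instead of per-node public IPs.
        enableNodePublicIp: false,
    },
});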