Is it necessary to add the IRSA role to the aws-auth ConfigMap for the corresponding pod to be able to update Kubernetes objects?


I have a service running inside an EKS cluster that reads/adds/updates/patches various Kubernetes objects across multiple namespaces. For this to work, I did the following:

  • Created an IAM role => service_account_role
  • Attached the arn:aws:iam::aws:policy/AdministratorAccess policy to it
  • Created a ServiceAccount inside the cluster, annotated with eks.amazonaws.com/role-arn: ${service_account_role}
  • Created a ClusterRole with rules granting the access needed for the required operations
  • Created a ClusterRoleBinding to bind the above ClusterRole to the ServiceAccount (a rough sketch of the ServiceAccount/ClusterRole/ClusterRoleBinding setup is below)
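Roughly, the in-cluster part of the setup amounts to the following. I've expressed it with the Python kubernetes client just to keep one language in this post; all names, the namespace, the account ID, and the exact rules are illustrative placeholders, not my real manifests:

```python
from kubernetes import client, config

config.load_kube_config()  # run from outside the cluster during setup

core = client.CoreV1Api()
rbac = client.RbacAuthorizationV1Api()

# ServiceAccount annotated with the IAM role for IRSA
core.create_namespaced_service_account(
    namespace="my-namespace",
    body={
        "metadata": {
            "name": "my-service",
            "annotations": {
                "eks.amazonaws.com/role-arn": "arn:aws:iam::111122223333:role/service_account_role"
            },
        }
    },
)

# ClusterRole allowing the operations the service performs on Ingresses
rbac.create_cluster_role(
    body={
        "metadata": {"name": "ingress-editor"},
        "rules": [
            {
                "apiGroups": ["networking.k8s.io"],
                "resources": ["ingresses"],
                "verbs": ["get", "list", "watch", "update", "patch"],
            }
        ],
    }
)

# ClusterRoleBinding tying the ClusterRole to the ServiceAccount
rbac.create_cluster_role_binding(
    body={
        "metadata": {"name": "ingress-editor-binding"},
        "roleRef": {
            "apiGroup": "rbac.authorization.k8s.io",
            "kind": "ClusterRole",
            "name": "ingress-editor",
        },
        "subjects": [
            {"kind": "ServiceAccount", "name": "my-service", "namespace": "my-namespace"}
        ],
    }
)
```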

On running the application, I get the error: User "system:anonymous" cannot patch resource (403). (Here I was trying to patch an Ingress object to update an annotation using the Python kubernetes client, via the method read_namespaced_ingress.)
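For reference, the failing call boils down to something like this (the object names and the annotation key are placeholders). As far as I understand, load_incluster_config() is what makes the client authenticate with the pod's mounted ServiceAccount token; a request that carries no credentials is attributed to system:anonymous:

```python
from kubernetes import client, config

config.load_incluster_config()  # reads the ServiceAccount token mounted into the pod

networking = client.NetworkingV1Api()

# Read the Ingress, then update one of its annotations with a strategic-merge patch.
ingress = networking.read_namespaced_ingress(name="my-ingress", namespace="my-namespace")

patch = {"metadata": {"annotations": {"example.com/managed-by": "my-service"}}}
networking.patch_namespaced_ingress(
    name="my-ingress", namespace="my-namespace", body=patch
)
```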

Next, I tried adding service_account_role to the aws-auth ConfigMap with the group system:masters, and the application started running fine without any issues.
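Roughly, the aws-auth change amounts to the following (again sketched with the Python client to keep one language; the account ID and username are placeholders). The read-modify-write on a single YAML string inside the ConfigMap is exactly what makes concurrent updates painful (see question 4 below):

```python
import yaml
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

cm = core.read_namespaced_config_map(name="aws-auth", namespace="kube-system")
data = cm.data or {}
map_roles = yaml.safe_load(data.get("mapRoles", "[]")) or []

# Map the IRSA role to the system:masters group
map_roles.append({
    "rolearn": "arn:aws:iam::111122223333:role/service_account_role",  # placeholder account ID
    "username": "service-account-role",  # placeholder username
    "groups": ["system:masters"],
})

data["mapRoles"] = yaml.safe_dump(map_roles)
cm.data = data

# replace uses the ConfigMap's resourceVersion, so a concurrent writer causes a
# 409 Conflict and the whole read-modify-write has to be retried.
core.replace_namespaced_config_map(name="aws-auth", namespace="kube-system", body=cm)
```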

So my questions are:

  1. Despite having all the required rules set up as part of a ClusterRole bound to the pod's ServiceAccount, why is it still necessary to map the relevant role in the aws-auth ConfigMap?
  2. Isn't this map only used for access to AWS resources outside the cluster?
  3. What is system:anonymous? I couldn't find any good source of information about it.
  4. Is it possible to give the IRSA system:masters access without touching the aws-auth ConfigMap? Patching this map concurrently is a big pain, and that could happen since I need multiple services/IRSAs that would require such permissions.