Forbidden error using partitioned job with Spring Cloud Data Flow on Kubernetes

I want to implement a remote partitioned job using Spring Cloud Data Flow on Kubernetes. The Skipper server is not installed because I only need to run tasks and jobs.

I modified the partitioned batch job sample project to use spring-cloud-deployer-kubernetes instead of the local deployer, as suggested here.
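
Concretely, the only build change to the sample was swapping the deployer dependency, roughly like this (the version matches the jar that shows up in the stack trace below):

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-deployer-kubernetes</artifactId>
        <version>2.5.0</version>
    </dependency>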

When the master job tries to launch a worker, I get the "Forbidden" error below in the pod logs:

io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://10.43.0.1/api/v1/namespaces/svi-scdf-poc/pods/partitionedbatchjobtask-39gvq3p8ok. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "partitionedbatchjobtask-39gvq3p8ok" is forbidden: User "system:serviceaccount:svi-scdf-poc:default" cannot get resource "pods" in API group "" in the namespace "svi-scdf-poc". 
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:589) ~[kubernetes-client-4.10.3.jar:na] 
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:526) ~[kubernetes-client-4.10.3.jar:na] 
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:492) ~[kubernetes-client-4.10.3.jar:na] 
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:451) ~[kubernetes-client-4.10.3.jar:na] 
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:416) ~[kubernetes-client-4.10.3.jar:na] 
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:397) ~[kubernetes-client-4.10.3.jar:na] 
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:890) ~[kubernetes-client-4.10.3.jar:na] 
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:233) ~[kubernetes-client-4.10.3.jar:na] 
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:187) ~[kubernetes-client-4.10.3.jar:na] 
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:79) ~[kubernetes-client-4.10.3.jar:na] 
    at org.springframework.cloud.deployer.spi.kubernetes.KubernetesTaskLauncher.getPodByName(KubernetesTaskLauncher.java:411) ~[spring-cloud-deployer-kubernetes-2.5.0.jar:2.5.0] 
    at org.springframework.cloud.deployer.spi.kubernetes.KubernetesTaskLauncher.buildPodStatus(KubernetesTaskLauncher.java:350) ~[spring-cloud-deployer-kubernetes-2.5.0.jar:2.5.0] 
    at org.springframework.cloud.deployer.spi.kubernetes.KubernetesTaskLauncher.buildTaskStatus(KubernetesTaskLauncher.java:345) ~[spring-cloud-deployer-kubernetes-2.5.0.jar:2.5.0] 

As far as I understand, it is expected that the master job pod tries to deploy the worker pods, so it seems to be just a permission problem. Or is the Skipper server required?

If my assumptions are correct, should I just configure SCDF to assign a specific service account to the master pod?
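
Since the failing call is just a GET on the worker pod, I assume the missing piece is an RBAC grant along these lines (the namespace and the default service account are taken from the log above; the Role name and the exact verb list are only my guess at what the task launcher needs):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: task-launcher              # hypothetical name
      namespace: svi-scdf-poc
    rules:
      - apiGroups: [""]                # core API group, matches the failing GET on /api/v1/.../pods
        resources: ["pods"]
        verbs: ["get", "list", "watch", "create", "delete"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: task-launcher
      namespace: svi-scdf-poc
    subjects:
      - kind: ServiceAccount
        name: default                  # the account the master pod currently runs as
        namespace: svi-scdf-poc
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: task-launcher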

1 Answer

I ran into the same issue with partitioned-batch-job, but found the options described in the official documentation for specifying the service account at the app level and at the server level. I tried the app-level one (via the SCDF dashboard task launch properties) and it worked: I simply specified the service account created by the SCDF Helm deployment. It made me wonder why this is not used by default and has to be specified again when launching the app (i.e., shouldn't the server-level service account default to it?). The pod logs showed that the Kubernetes 'default' service account was being used for the launch.
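
For reference, the launch property I set was roughly the following (the task app name comes from the pod name in the question; the service account name depends on your Helm release, so treat it as a placeholder):

    deployer.partitionedbatchjobtask.kubernetes.deploymentServiceAccountName=<scdf-service-account>

The server-level option should be the same deployment-service-account-name deployer property set as a default on the Kubernetes task platform in the server configuration, but I only tested the app-level one.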