I have a Kubernetes deployment that consists of a CronJob (runs hourly), a Service (runs the HTTP service), and a StorageClass (a PVC to store data, using gp2).
The issue I am seeing is that gp2 only supports the ReadWriteOnce access mode.
I notice that when the CronJob creates a Job and its Pod lands on the same node as the Service's Pod, it can mount the volume fine.
Is there something I can do in the Service, Deployment, or CronJob YAML to ensure the CronJob and Service always land on the same node? It can be any node, as long as the CronJob goes to the same node as the Service.
This isn't an issue in my lower environment, where we have very few nodes, but it is an issue in our production environments, where we have more nodes.
In short, I want the CronJob, which creates a Job and then a Pod, to run that Pod on the same node as my Service's Pod.
I know this isn't best practice, but our web service reads data from the PVC and serves it. The CronJob pulls new data in from other sources and leaves it for the web server.
Happy for other ideas / ways.
Thanks
Focusing only on the part:

> Is there something I can do in the service, deployment or cron job yaml to ensure the cron job and service always land on the same node?

You can schedule the Pods spawned by your CronJob/Job onto a specific node with either nodeSelector or nodeAffinity.
nodeSelector
Example: assuming that your node has a specific label (for instance schedule=here) that is referenced in .spec.jobTemplate.spec.template.spec.nodeSelector, the Pods created by the CronJob will only be scheduled on a node carrying that label. You can verify the placement with:

$ kubectl get pods -o wide
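A minimal sketch of such a CronJob manifest, assuming a node already labeled schedule=here (the label, resource names, and container image are all illustrative; use batch/v1beta1 on clusters older than 1.21):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: data-puller            # illustrative name
spec:
  schedule: "0 * * * *"        # hourly
  jobTemplate:
    spec:
      template:
        spec:
          # Pods will only be scheduled on a node carrying this label
          nodeSelector:
            schedule: here
          containers:
          - name: puller
            image: busybox     # placeholder image
            command: ["sh", "-c", "echo pulling new data"]
          restartPolicy: OnFailure
```

You would label the target node first, e.g. `kubectl label node <node-name> schedule=here`.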
nodeAffinity
Example: assuming the same node label, the equivalent constraint is expressed under .spec.jobTemplate.spec.template.spec.affinity.nodeAffinity. You can verify the placement with:

$ kubectl get pods -o wide
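A sketch of the equivalent CronJob using nodeAffinity instead of nodeSelector (same assumed schedule=here label; names and image are again illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: data-puller            # illustrative name
spec:
  schedule: "0 * * * *"        # hourly
  jobTemplate:
    spec:
      template:
        spec:
          affinity:
            nodeAffinity:
              # Hard requirement: only nodes matching the expression are eligible
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: schedule
                    operator: In
                    values:
                    - here
          containers:
          - name: puller
            image: busybox     # placeholder image
            command: ["sh", "-c", "echo pulling new data"]
          restartPolicy: OnFailure
```

nodeAffinity is more expressive than nodeSelector: operators like In, NotIn, and Exists are available, and preferredDuringSchedulingIgnoredDuringExecution can make the constraint soft rather than hard.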
Additional resources:

I'd reckon you could also take a look at this StackOverflow answer: