I created a Job using the OpenShift command

```shell
oc create job <my-job> --from=cronjob/<my-cronjob>
```

but it doesn't create a pod automatically, and I could not figure out what is missing. I probably need to change something in my-cronjob's YAML, so here it is:
```yaml
kind: CronJob
apiVersion: batch/v1
metadata:
  annotations:
    meta.helm.sh/release-name: xxx
    meta.helm.sh/release-namespace: xxx
  resourceVersion: '7504720716'
  name: my-cronjob
  uid: xxxx
  creationTimestamp: '2023-02-17T14:27:25Z'
  generation: 3
  managedFields:
    - manager: helm
      operation: Update
      apiVersion: batch/v1
      time: '2023-04-19T09:06:08Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:meta.helm.sh/release-name': {}
            'f:meta.helm.sh/release-namespace': {}
          'f:labels':
            .: {}
            'f:app.kubernetes.io/managed-by': {}
        'f:spec':
          'f:concurrencyPolicy': {}
          'f:failedJobsHistoryLimit': {}
          'f:jobTemplate':
            'f:spec':
              'f:template':
                'f:spec':
                  'f:containers':
                    'k:{"name":"my-job"}':
                      .: {}
                      'f:envFrom': {}
                      'f:image': {}
                      'f:imagePullPolicy': {}
                      'f:name': {}
                      'f:resources':
                        .: {}
                        'f:limits':
                          .: {}
                          'f:cpu': {}
                          'f:ephemeral-storage': {}
                          'f:memory': {}
                        'f:requests':
                          .: {}
                          'f:cpu': {}
                          'f:ephemeral-storage': {}
                          'f:memory': {}
                      'f:terminationMessagePath': {}
                      'f:terminationMessagePolicy': {}
                  'f:dnsPolicy': {}
                  'f:restartPolicy': {}
                  'f:schedulerName': {}
                  'f:securityContext': {}
                  'f:terminationGracePeriodSeconds': {}
          'f:schedule': {}
          'f:successfulJobsHistoryLimit': {}
          'f:suspend': {}
    - manager: kube-controller-manager
      operation: Update
      apiVersion: batch/v1
      time: '2023-08-16T08:05:04Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:status':
          'f:lastScheduleTime': {}
      subresource: status
    - manager: Mozilla
      operation: Update
      apiVersion: batch/v1
      time: '2023-12-05T09:10:40Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:spec':
          'f:jobTemplate':
            'f:spec':
              'f:template':
                'f:spec':
                  'f:containers':
                    'k:{"name":"my-job"}':
                      'f:command': {}
  namespace: ifxbertsearch-staging
  labels:
    app.kubernetes.io/managed-by: Helm
spec:
  schedule: 0 1 * * *
  concurrencyPolicy: Forbid
  suspend: true
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
            - name: my-job
              image: >-
                xxx
              command:
                - python
                - main.py
                - '49'
              envFrom:
                - configMapRef:
                    name: env-variables
              resources:
                limits:
                  cpu: '4'
                  ephemeral-storage: 10Mi
                  memory: 5Gi
                requests:
                  cpu: '2'
                  ephemeral-storage: 10Mi
                  memory: 3Gi
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              imagePullPolicy: Always
          restartPolicy: Never
          terminationGracePeriodSeconds: 30
          dnsPolicy: ClusterFirst
          securityContext: {}
          schedulerName: default-scheduler
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
status: {}
```
So I need to figure out why the CronJob doesn't create the pod that the Job should run in.
Your CronJob is suspended: note the `suspend: true` line in its spec.

According to the documentation, suspending a CronJob also affects any Job created manually from it, so you need to remove that line from your CronJob resource :)

Still, you should be able to see `my-job` in the list of Jobs within the concerned namespace, with the same `suspend` attribute.
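
As a quick sketch of the fix (assuming the names from your question and the `ifxbertsearch-staging` namespace; adjust to your cluster), you can unsuspend via a patch instead of editing the YAML by hand:

```shell
# Unsuspend the CronJob (equivalent to removing "suspend: true" from its spec)
oc patch cronjob my-cronjob -n ifxbertsearch-staging -p '{"spec":{"suspend":false}}'

# Then re-create the Job from the now-unsuspended CronJob
oc create job my-job --from=cronjob/my-cronjob -n ifxbertsearch-staging

# Alternatively, unsuspend the Job you already created; the Job controller
# will then start creating its pod (Job suspension is stable since Kubernetes 1.24)
oc patch job my-job -n ifxbertsearch-staging -p '{"spec":{"suspend":false}}'
```

You can verify the result with `oc get job my-job -o yaml -n ifxbertsearch-staging` and check that `spec.suspend` is `false` and a pod appears.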