I'm running a small AKS Kubernetes cluster on version 1.24 that needs to be upgraded to 1.25 soon. We use Terraform to provision resources such as CronJobs; more precisely, CronJobs are created through the Terraform resource kubernetes_cron_job.
Because apiVersion: batch/v1beta1 is removed for CronJobs in Kubernetes 1.25, we want to switch to the resource kubernetes_cron_job_v1.
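The migration itself is essentially a rename of the Terraform resource type, since kubernetes_cron_job_v1 targets batch/v1. A minimal sketch of what the migrated resource looks like (the resource label, namespace, image and command are placeholders, not our real values):

resource "kubernetes_cron_job_v1" "example" {
  metadata {
    name      = "cronjob-name"
    namespace = "example-namespace" # placeholder
  }

  spec {
    schedule                      = "35 * * * *"
    concurrency_policy            = "Replace"
    starting_deadline_seconds     = 10
    successful_jobs_history_limit = 1
    failed_jobs_history_limit     = 1

    job_template {
      metadata {}

      spec {
        backoff_limit = 2

        template {
          metadata {}

          spec {
            restart_policy = "Never"

            container {
              name    = "tms-api"
              image   = "..." # placeholder, elided as above
              command = ["..."]
            }
          }
        }
      }
    }
  }
}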
The problem I currently have is that there seems to be a discrepancy between the resource description shown by kubectl and the one shown by the Kubernetes Dashboard.
kubectl get cronjob cronjob-name -o yaml:
apiVersion: batch/v1
kind: CronJob
metadata:
  creationTimestamp: "..."
  generation: 1
  name: ...
  namespace: ...
  resourceVersion: "..."
  uid: ...
spec:
  concurrencyPolicy: Replace
  failedJobsHistoryLimit: 1
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      backoffLimit: 2
      completions: 1
      manualSelector: false
      parallelism: 1
      template:
        metadata:
          creationTimestamp: null
        spec:
          automountServiceAccountToken: true
          containers:
          - command:
            - ...
            - ...
            image: ...
            imagePullPolicy: Always
            name: tms-api
            resources: {}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            volumeMounts:
            - mountPath: /etc/secrets
              mountPropagation: None
              name: secrets
              readOnly: true
          dnsPolicy: ClusterFirst
          enableServiceLinks: true
          restartPolicy: Never
          schedulerName: default-scheduler
          securityContext: {}
          shareProcessNamespace: false
          terminationGracePeriodSeconds: 30
          volumes:
          - name: secrets
            secret:
              defaultMode: 256
              optional: false
              secretName: ...
      ttlSecondsAfterFinished: 10
  schedule: 35 * * * *
  startingDeadlineSeconds: 10
  successfulJobsHistoryLimit: 1
  suspend: false
status:
  lastScheduleTime: "..."
  lastSuccessfulTime: "..."
Kubernetes Dashboard:
kind: CronJob
apiVersion: batch/v1beta1
metadata:
  name: ...
  namespace: ...
  uid: ...
  resourceVersion: '...'
  generation: 1
  creationTimestamp: '...'
  managedFields:
    - manager: HashiCorp
      operation: Update
      apiVersion: batch/v1
      time: '...'
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:concurrencyPolicy: {}
          f:failedJobsHistoryLimit: {}
          f:jobTemplate:
            f:spec:
              f:backoffLimit: {}
              f:completions: {}
              f:manualSelector: {}
              f:parallelism: {}
              f:template:
                f:spec:
                  f:automountServiceAccountToken: {}
                  f:containers:
                    k:{"name":"tms-api"}:
                      .: {}
                      f:command: {}
                      f:image: {}
                      f:imagePullPolicy: {}
                      f:name: {}
                      f:resources: {}
                      f:terminationMessagePath: {}
                      f:terminationMessagePolicy: {}
                      f:volumeMounts:
                        .: {}
                        k:{"mountPath":"/etc/secrets"}:
                          .: {}
                          f:mountPath: {}
                          f:mountPropagation: {}
                          f:name: {}
                          f:readOnly: {}
                  f:dnsPolicy: {}
                  f:enableServiceLinks: {}
                  f:restartPolicy: {}
                  f:schedulerName: {}
                  f:securityContext: {}
                  f:shareProcessNamespace: {}
                  f:terminationGracePeriodSeconds: {}
                  f:volumes:
                    .: {}
                    k:{"name":"secrets"}:
                      .: {}
                      f:name: {}
                      f:secret:
                        .: {}
                        f:defaultMode: {}
                        f:optional: {}
                        f:secretName: {}
              f:ttlSecondsAfterFinished: {}
          f:schedule: {}
          f:startingDeadlineSeconds: {}
          f:successfulJobsHistoryLimit: {}
          f:suspend: {}
    - manager: kube-controller-manager
      operation: Update
      apiVersion: batch/v1
      time: '2024-01-02T09:35:06Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          f:lastScheduleTime: {}
          f:lastSuccessfulTime: {}
      subresource: status
spec:
  schedule: 35 * * * *
  startingDeadlineSeconds: 10
  concurrencyPolicy: Replace
  suspend: false
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      parallelism: 1
      completions: 1
      backoffLimit: 2
      manualSelector: false
      template:
        metadata:
          creationTimestamp: null
        spec:
          volumes:
            - name: secrets
              secret:
                secretName: ...
                defaultMode: 256
                optional: false
          containers:
            - name: tms-api
              image: >-
                ...
              command:
                - ...
              resources: {}
              volumeMounts:
                - name: secrets
                  readOnly: true
                  mountPath: /etc/secrets
                  mountPropagation: None
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              imagePullPolicy: Always
          restartPolicy: Never
          terminationGracePeriodSeconds: 30
          dnsPolicy: ClusterFirst
          automountServiceAccountToken: true
          shareProcessNamespace: false
          securityContext: {}
          schedulerName: default-scheduler
          enableServiceLinks: true
      ttlSecondsAfterFinished: 10
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
status:
  lastScheduleTime: '...'
  lastSuccessfulTime: '...'
In the Kubernetes Dashboard output, the top-level apiVersion is still batch/v1beta1; in the kubectl output, the CronJob is on batch/v1.
To make it a little more confusing: when inspecting a CronJob that has not yet been migrated to kubernetes_cron_job_v1, everything already looks fine:
apiVersion: batch/v1
kind: CronJob
...
I don't want to start the Kubernetes upgrade as long as these outputs contradict each other, so I'm wondering which version is the correct one here.
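If it is relevant: my current understanding is that the API server stores a single object and can serve it under any batch version it still supports, so on 1.24 the same CronJob should be retrievable explicitly under either group-version (cronjob-name as above):
$ kubectl get cronjobs.v1.batch cronjob-name -o yaml
$ kubectl get cronjobs.v1beta1.batch cronjob-name -o yaml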
I'm using the following versions:
$ terraform --version
Terraform v1.6.6
on linux_amd64
+ provider registry.terraform.io/hashicorp/azurerm v3.85.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.24.0
Any hints or insights into this would be greatly appreciated!
Thanks and best regards,
Pascal
EDIT: Output of kubectl api-resources --api-group=batch:
NAME       SHORTNAMES   APIVERSION   NAMESPACED   KIND
cronjobs   cj           batch/v1     true         CronJob
jobs                    batch/v1     true         Job
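To rule out that batch/v1beta1 has already been turned off on this cluster, I can also list which batch group-versions the API server still serves (on 1.24 this should, if I understand the deprecation schedule correctly, still include both batch/v1 and batch/v1beta1):
$ kubectl api-versions | grep '^batch/'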