Here is my problem statement:
I start a Kubernetes Service (Svc-1), which then starts a Kubernetes Job (Job-1), and that Job creates a Pod (Pod-1). The Service (Svc-1) receives the Job's status events, such as ACTIVE, READY, FAIL and SUCCESS.
Svc-1 -> Job-1 -> Pod-1
Now, for some reason, Svc-1 goes down and Kubernetes starts a new Service instance (Svc-2). This new Service (Svc-2) is not monitoring the currently running Job and cannot receive the events generated by Job-1.
Svc-1 (DOWN) ! Job-1 -> Pod-1
I want to attach the new Service (Svc-2) to the running Job so that I can receive the events generated by the Job's Pod and record them in the database.
Svc-2 -> Job-1 -> Pod-1
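Conceptually, I think the new instance needs to look up any still-running Jobs on startup (using the labels I set when creating them) and re-register for their events. Below is a rough sketch of how I imagine this with a fabric8 informer; it is only my assumption of an approach, not working code: LABEL1, config and log are the same fields used in the job-creation code further down, recordStatus is a placeholder for my database write, and the informer classes come from io.fabric8.kubernetes.client.informers.

// Assumed to run once at startup of the new service instance (e.g. from an ApplicationRunner).
public void reattachToRunningJobs(KubernetesClient client) {
    // The informer first lists the existing Jobs (those created by the previous
    // instance arrive as onAdd) and then keeps watching them for changes.
    SharedIndexInformer<Job> informer = client.batch().v1().jobs()
            .inNamespace(config.getNamespace())
            .withLabel(LABEL1)                      // only Jobs created by this service
            .inform(new ResourceEventHandler<Job>() {
                @Override
                public void onAdd(Job job) {
                    recordStatus(job);              // existing or newly created Job picked up
                }

                @Override
                public void onUpdate(Job oldJob, Job newJob) {
                    recordStatus(newJob);           // ACTIVE -> SUCCESS / FAIL transitions
                }

                @Override
                public void onDelete(Job job, boolean deletedFinalStateUnknown) {
                    log.info("Job {} was deleted", job.getMetadata().getName());
                }
            }, 30_000L);                            // periodic resync as a safety net
    log.info("Job informer running: {}", informer.isRunning());
}

// Placeholder helper: map the JobStatus to my own state and persist it.
private void recordStatus(Job job) {
    JobStatus status = job.getStatus();
    log.info("Job {} -> active={}, succeeded={}, failed={}",
            job.getMetadata().getName(),
            status == null ? null : status.getActive(),
            status == null ? null : status.getSucceeded(),
            status == null ? null : status.getFailed());
    // write the status to the database here
}

I am not sure whether this is the right or reliable approach, which is why I am asking.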
Service File
apiVersion: v1
kind: Service
metadata:
  name: svc-dev
  labels:
    app: svc-dev
spec:
  type: ClusterIP
  selector:
    app: svc-dev
  ports:
    - name: http
      protocol: TCP
      port: 8080
Fabric8 Java Code to create Kubernetes Job
private JobDto createJob(KubernetesClient client, JobDto jobDto) throws JobExecutionException {
    try {
        MixedOperation<Job, JobList, ScalableResource<Job>> v1Job = client.batch().v1().jobs();
        Job kubernetesJob = v1Job.load(resource.getInputStream()).get();
        log.info("{} job has been loaded", jobType);

        // Set name for job
        kubernetesJob.getMetadata().setName(jobDto.getJobName());
        log.info("Set name for job {}", jobDto.getJobName());

        // Add labels to job
        String executionType = jobDto.getExecutionType().name();
        Map<String, String> labels = MapUtils.of(LABEL1, jobType, LABEL2, executionType);
        kubernetesJob.getMetadata().setLabels(labels);
        log.info("Added labels for job {}", kubernetesJob.getMetadata().getLabels());

        // Add labels to pod template
        kubernetesJob.getSpec().getTemplate().getMetadata().setLabels(labels);
        log.info("Added labels for job pod template");

        // Add node selector role
        kubernetesJob.getSpec().getTemplate().getSpec().setNodeSelector(getNodeSelectorRole(jobType));
        log.info("Added NodeSelector for job pod template");

        // Add service account to job
        kubernetesJob.getSpec().getTemplate().getSpec().setServiceAccountName(config.getServiceAccount());
        log.info("Added ServiceAccountName for job pod template");

        // Add container configuration (the volume is mounted inside the containers)
        kubernetesJob.getSpec().getTemplate().getSpec().setContainers(getContainers(jobDto));
        log.info("Added Containers for job pod template");

        // Add volumes
        kubernetesJob.getSpec().getTemplate().getSpec().setVolumes(getVolumes());
        log.info("Added Volumes for job pod template");

        // Create the Kubernetes Job
        v1Job.inNamespace(config.getNamespace()).resource(kubernetesJob).create();
        log.info("Created job with name {}", jobDto.getJobName());
        return jobDto;
    } catch (Exception exception) {
        // Do not swallow the failure; surface it to the caller
        // (adjust the constructor arguments to your JobExecutionException)
        throw new JobExecutionException("Failed to create job " + jobDto.getJobName(), exception);
    }
}
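The creation code above does not show how the status events are actually consumed; the part that delivers the ACTIVE/SUCCESS/FAIL updates is, roughly, a watch on the created Job, along the lines of the simplified sketch below. This is not my exact code: config and log are the same fields as above, and the real handler records the status in the database instead of just logging it.

// Simplified sketch: called after createJob() with jobDto.getJobName(),
// it opens a watch on that single Job. Watch, Watcher and WatcherException
// come from io.fabric8.kubernetes.client.
private Watch watchJob(KubernetesClient client, String jobName) {
    return client.batch().v1().jobs()
            .inNamespace(config.getNamespace())
            .withName(jobName)
            .watch(new Watcher<Job>() {
                @Override
                public void eventReceived(Action action, Job job) {
                    JobStatus status = job.getStatus();
                    log.info("Job {} event {} -> active={}, succeeded={}, failed={}",
                            jobName, action,
                            status == null ? null : status.getActive(),
                            status == null ? null : status.getSucceeded(),
                            status == null ? null : status.getFailed());
                    // the real handler maps this to ACTIVE / SUCCESS / FAIL and stores it
                }

                @Override
                public void onClose(WatcherException cause) {
                    log.warn("Watch for job {} closed", jobName, cause);
                }
            });
}

This watch naturally dies with the pod, which is exactly the problem: when Svc-2 comes up it no longer has it.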
Deployment File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.application.name }}
  labels:
    app: {{ .Values.application.name }}
spec:
  replicas: {{ .Values.application.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.application.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.application.name }}
    spec:
      nodeSelector:
        role: {{ .Values.application.nodeSelectorRole }}
      serviceAccountName: {{ .Values.application.serviceAccountName }}
      serviceAccount: {{ .Values.application.serviceAccountName }}
      volumes:
        - name: {{ .Values.application.persistentVolume }}
          persistentVolumeClaim:
            claimName: {{ .Values.application.persistentVolumeClaim }}
      containers:
        - name: {{ .Values.application.name }}
          image: {{ .Values.application.image }}
          imagePullPolicy: {{ .Values.application.imagePullPolicy }}
          ports:
            - name: {{ .Values.application.portname }}
              containerPort: {{ .Values.application.port }}
          volumeMounts:
            - name: {{ .Values.application.persistentVolume }}
              mountPath: {{ .Values.application.containerMountPath }}
I am using Helm, so the {{ .Values }} references above are placeholders that get replaced at deployment time.
I am using Spring Boot and the fabric8 Kubernetes client library.
Please guide me on how I can achieve this.