Custom Conditions in Azure Pipelines break ability to cancel


I have two jobs in a stage in an Azure pipeline:

Job 1: deployTerraform

Job 2: deployAnsible

I have the following condition on my deployAnsible job:

            condition: |
              and(
                or(
                  succeeded(),
                  eq(dependencies.deployTerraform.result, 'Skipped')
                ),
                eq(${{ parameters.runAnsible }}, 'true')
              )

This condition logic works fine, but when it's in place, I can't cancel the deployAnsible job - the stage only fully cancels once the job finishes. If I remove the condition and cancel the stage, the job cancels immediately.

I've read this article, and it alludes to this behavior but doesn't really explain how to handle it.


3 Answers

Medos (Best Answer)

My issue here turned out to be the conditionals I was using.

After iterating through a bunch of possibilities I simplified onto this:

dependsOn: DeployTerraform
condition: and( not(failed()), not(canceled()), eq(${{ parameters.runAnsible }}, true) )

When I switched to this conditional, the deployAnsible job now cancels successfully when the stage is cancelled.

So I suspect the answer is that and(not(failed()), not(canceled())) handles the stage being cancelled better than looking at the dependency result?
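For reference, here is roughly what the full job looks like with that condition in place (a minimal sketch; runAnsible is assumed to be a boolean pipeline parameter and the step is just a placeholder):

- job: deployAnsible
  displayName: 'Deploy Ansible'
  dependsOn: DeployTerraform
  # not(failed()) / not(canceled()) respect the stage being cancelled,
  # instead of inspecting the dependency's result directly
  condition: and( not(failed()), not(canceled()), eq(${{ parameters.runAnsible }}, true) )
  steps:
    - script: echo "Deploying Ansible"
      displayName: 'Deploy Ansible'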

Bright Ran-MSFT

This is based on the documentation section "Pipeline behavior when build is canceled":

If your condition doesn't take into account the state of the parent of your stage / job / step, then if the condition evaluates to true, your stage, job, or step will run, even if its parent is canceled. If its parent is skipped, then your stage, job, or step won't run.

Within a running stage, if a job has a condition explicitly set via the condition key, then when you cancel the stage, whether that job is canceled depends on whether it has already started running:

  • If the stage is cancelled before the job has started, the job will still run if its condition evaluates to 'true'.

  • If the stage is cancelled while the job is running, the job also gets cancelled.


Below is an example for reference.

# azure-pipelines.yml

stages:
- stage: A
  jobs:
  - job: A1
    . . .

# Stage B depends on stage A by default, so it implicitly checks whether stage A succeeded.
- stage: B
  jobs:
  - job: B1
    . . .
  
# Job B2 depends on B1 and has a condition explicitly set.
  - job: B2
    dependsOn: B1
    condition: {the condition}
    . . .

  1. When stage A is running and stage B has not started, if you cancel stage A, stage B is skipped because stage A did not succeed, and all jobs in stage B are also skipped.


  2. In stage B, when job B1 is running and job B2 has not started, if you cancel stage B, the stage and job B1 get cancelled, but job B2 will still start to run if its condition evaluates to 'true'. If the condition evaluates to 'false', job B2 gets skipped (not cancelled). A condition that handles this case is sketched after this list.


  3. In stage B, when job B2 is running, if you cancel stage B, both the stage and job B2 get cancelled.

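So, if you want job B2 to be skipped rather than started when the stage is cancelled before it runs (scenario 2), the condition itself needs to account for the cancelled state, as the quoted documentation suggests. A minimal sketch (the runAnsible parameter is borrowed from the question and assumed to be a boolean; the exact condition shape is an assumption, not the only possible one):

  - job: B2
    dependsOn: B1
    # not(canceled()) makes the job respect cancellation of the run/stage,
    # while not(failed()) still lets it run after a skipped (not failed) dependency
    condition: and(not(canceled()), not(failed()), eq(${{ parameters.runAnsible }}, true))
    . . .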


Rui Jarimba

If you need to be able to cancel one of the jobs, it would probably be easier to use separate stages.

Or, if you need to choose which job(s) to run at queue time, use boolean pipeline parameters.

Running jobs in the same stage

The trick is to add boolean pipeline parameters for each job type and select which jobs to run when queuing a new build:

(Screenshot: queueing the pipeline with the boolean parameters)

Pipeline code:

name: test-pipeline-$(date:yyyyMMdd-HHmmss)

parameters:
  - name: deployTerraform
    type: boolean
    displayName: 'Deploy Terraform?'
    default: true

  - name: deployAnsible
    type: boolean
    displayName: 'Deploy Ansible?'
    default: true

trigger: none

pool: Default

stages:
  - stage: deploy_stuff
    displayName: 'Deploy stuff'
    dependsOn: []
    jobs:
      - ${{ if parameters.deployAnsible }}:
        - job: deployAnsible
          displayName: 'Deploy Ansible'
          steps:
            - checkout: none
            - script: echo "Deploying Ansible"
              displayName: 'Deploy Ansible'
      - ${{ if parameters.deployTerraform }}:
        - job: deployTerraform
          displayName: 'Deploy Terraform'
          steps:
            - checkout: none
            - script: echo "Deploying Terraform"
              displayName: 'Deploy Terraform'

Running jobs in separate stages

This is the simplest solution - you can decide which stages to run when queuing a new build:

(Screenshots: queueing the pipeline and selecting which stages to run)

Pipeline code:

name: test-pipeline-2-$(date:yyyyMMdd-HHmmss)

trigger: none

pool: Default

stages:
  - stage: deploy_terraform
    displayName: 'Deploy Terraform'
    dependsOn: []
    jobs:
      - job: deployTerraform
        displayName: 'Deploy Terraform'
        steps:
          - checkout: none
          - script: echo "Deploying Terraform"
            displayName: 'Deploy Terraform'

  - stage: deploy_ansible
    displayName: 'Deploy Ansible'
    dependsOn: deploy_terraform
    jobs:
      - job: deployAnsible
        displayName: 'Deploy Ansible'
        steps:
          - checkout: none
          - script: echo "Deploying Ansible"
            displayName: 'Deploy Ansible'

PS: Consider setting the timeoutInMinutes and/or cancelTimeoutInMinutes properties of the job (see job timeouts).
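A minimal sketch of what that could look like (the values here are purely illustrative):

- job: deployAnsible
  displayName: 'Deploy Ansible'
  timeoutInMinutes: 30        # fail the job if it runs for more than 30 minutes
  cancelTimeoutInMinutes: 5   # give cancelled work up to 5 minutes to clean up before being terminated
  steps:
    - checkout: none
    - script: echo "Deploying Ansible"
      displayName: 'Deploy Ansible'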