I want to break down a large job, running on a Microsoft-hosted agent, into smaller jobs that run sequentially on the same agent. The large job is organized like this:
pool:
  vmImage: 'windows-latest'

jobs:
- job: large_job
  steps:
  - task: NuGetCommand@2
    inputs:
      command: 'restore'
  - task: VSBuild@1
  - task: VSTest@2
I want to break it down into two smaller jobs, like this:
pool:
  vmImage: 'windows-latest'

jobs:
- job: job_one
  steps:
  - task: NuGetCommand@2
    inputs:
      command: 'restore'

- job: job_two
  dependsOn: job_one
  steps:
  - checkout: none
  - task: VSBuild@1
  - task: VSTest@2
... so that downloading all of my binary resources will happen in job_one, and the build and test will happen in job_two.
Jobs include an optional parameter, workspace: clean, which is used with self-hosted agents to specify whether the binary resources, the build results, or everything should be erased from the agent. If the workspace: clean parameter is omitted, nothing is erased -- everything is preserved. That's what I want.
However, according to the Workspace topic in the Azure DevOps documentation:
The workspace clean options are applicable only for self-hosted agents. When using Microsoft-hosted agents, jobs are always run on a new agent.
This means that all of my binary resources are erased before job_two can run the build task. I want to do the equivalent of this:
- job: job_two
  dependsOn: job_one
  workspace:
    clean: none
  steps:
  - checkout: none
  - task: VSBuild@1
  - task: VSTest@2
I want to avoid using a self-hosted agent. How can I do this using a Microsoft-hosted agent?
You can't ever rely on the workspace being the same between jobs, period -- each job may run on any of the available agents, which use different working folders and may even be different physical machines.
Have your jobs publish artifacts, i.e. something like this (a minimal sketch; the artifact name packages and the $(Build.SourcesDirectory)/packages path are illustrative assumptions, not taken from your pipeline):
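- job: job_one
  steps:
  - task: NuGetCommand@2
    inputs:
      command: 'restore'
  # Publish the restored packages folder as a pipeline artifact so that a
  # later job, running on a fresh agent, can download it.
  - task: PublishPipelineArtifact@1
    inputs:
      targetPath: '$(Build.SourcesDirectory)/packages'
      artifact: 'packages'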
And then, in down-stream jobs, download the artifact before building (again a sketch under the same assumptions):
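- job: job_two
  dependsOn: job_one
  steps:
  - checkout: none
  # Pull the artifact published by job_one back into this job's workspace
  # before the build runs.
  - task: DownloadPipelineArtifact@2
    inputs:
      artifact: 'packages'
      path: '$(Build.SourcesDirectory)/packages'
  - task: VSBuild@1
  - task: VSTest@2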
However, what you're really looking for appears to be using a pipeline cache for your packages.
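With the Cache@2 task, the restored packages are keyed off a fingerprint of your dependency files and re-downloaded only when that key changes. A minimal sketch, assuming NuGet with lock files; the key, path, and NUGET_PACKAGES location follow the common documented pattern rather than anything in your pipeline:

variables:
  NUGET_PACKAGES: $(Pipeline.Workspace)/.nuget/packages

steps:
# Restore the NuGet package cache when the key matches; on a miss, the
# cache is saved automatically at the end of a successful job.
- task: Cache@2
  inputs:
    key: 'nuget | "$(Agent.OS)" | **/packages.lock.json,!**/bin/**'
    restoreKeys: |
      nuget | "$(Agent.OS)"
    path: '$(NUGET_PACKAGES)'
  displayName: Cache NuGet packages
- task: NuGetCommand@2
  inputs:
    command: 'restore'

The cache is saved at the end of a successful job and restored whenever a subsequent run matches the key, so repeated restores of the same package set stop paying the download cost.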