How do I make Azure DevOps's Pipeline Cache store its result even if a step fails (like tests)?


Background: Pipeline caching allows one to store a folder and re-use it on the next build, given that certain keys, branches and whatnot match. Useful for node_modules, NuGet packages and Git LFS, but also for builds.

I can't find anything in the Pipeline Caching docs for this.

I want to save the cache even if the build fails, for incremental builds or Git LFS checkouts, such as the following scenario:

  • new feature branch checked in
  • a big refactor, thus many changes
  • a few tests fail, so the cache is not stored
  • fixing them is trivial and only requires a small recompile, but the entire build needs to be re-run because the entire pipeline failed.

The docs give an example like this:

- task: Cache@2
  inputs:
    key: 'yarn | "$(Agent.OS)" | yarn.lock'
    restoreKeys: |
       yarn | "$(Agent.OS)"
       yarn
    path: $(YARN_CACHE_FOLDER)
  displayName: Cache Yarn packages

But it only ever caches if the entire pipeline succeeds.


There are 2 answers below.

BEST ANSWER

How do I store the Azure DevOps pipeline cache even if a step fails (like tests)?

After a period of investigation and discussion, I am afraid there is no such task/feature to store the Azure DevOps pipeline cache when a step fails.

According to the document Pipeline caching, this task is used to help reduce build time by allowing the outputs or downloaded dependencies from one run to be reused in later runs, thereby reducing or avoiding the cost of recreating or redownloading the same files.

It does not parse the files or the build log. So even if we cached the files produced by a failed pipeline, we could not tell on the next run whether they were generated correctly, or generated at all. It seems very difficult to extract correct output from a failed build.

Hope this helps.


I had the same problem with a code quality tool whose cache is valid to store whether it fails or succeeds.

I fixed it with 2 jobs.

The first one is your original job with the cache task, and you add steps at the end to publish your cache folder as a pipeline artifact. These additional steps should have condition: failed() so that they run only when the cache task won't store its result.
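A minimal sketch of that first job, reusing the Yarn example from the question (the job name Build and the artifact name yarn-cache-rescue are placeholders I picked, not part of the original answer):

jobs:
- job: Build
  steps:
  - task: Cache@2
    displayName: Cache Yarn packages
    inputs:
      key: 'yarn | "$(Agent.OS)" | yarn.lock'
      restoreKeys: |
         yarn | "$(Agent.OS)"
         yarn
      path: $(YARN_CACHE_FOLDER)
  # ... your build and test steps go here ...
  - task: PublishPipelineArtifact@1
    displayName: Publish cache folder on failure
    condition: failed()  # runs only if an earlier step in this job failed
    inputs:
      targetPath: $(YARN_CACHE_FOLDER)
      artifact: yarn-cache-rescue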

The second job depends on the first one with condition: failed(). It also has the cache task, and you download your artifact into the cache path so that this time the cache task stores the result.
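And a sketch of the second job, which exists purely to get the cache saved when the first job fails (again, names are placeholders, and $(YARN_CACHE_FOLDER) is assumed to be a pipeline-level variable so both jobs resolve the same path):

- job: SaveCacheOnFailure
  dependsOn: Build
  condition: failed()  # runs only if the Build job failed
  steps:
  - task: Cache@2
    displayName: Restore/declare the same cache
    inputs:
      key: 'yarn | "$(Agent.OS)" | yarn.lock'
      restoreKeys: |
         yarn | "$(Agent.OS)"
         yarn
      path: $(YARN_CACHE_FOLDER)
  - task: DownloadPipelineArtifact@2
    displayName: Download cache folder from the failed job
    inputs:
      artifact: yarn-cache-rescue
      path: $(YARN_CACHE_FOLDER)
  # This job succeeds, so the implicit post-job "Save cache" step uploads the folder.

Note that the post-job save only happens when there was no exact hit on the cache key in this job; if the key already resolved to an existing cache, nothing new is uploaded.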