Downloading an Artifact That Does Not Exist Yet

I have what I feel is a fairly simple situation: I would like to cache my Python dependencies, which live in a virtualenv in venv/. I might eventually look into actions/cache@v2 for the ~/.cache/pip directory as well, but for now all I care about is caching everything in venv/.

---
name: deploy
on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: 3.9.5
      - run: pip install virtualenv
      - uses: actions/download-artifact@v2
        with:
          name: pip
      - run: virtualenv venv/
      # 'source' in its own step won't persist to later steps; add the venv to PATH instead
      - run: echo "$PWD/venv/bin" >> "$GITHUB_PATH"
      - run: pip install -r requirements.txt
      - uses: actions/upload-artifact@v2
        with:
          name: pip
          path: venv/
      - run: ansible --version
      - run: ansible-galaxy install -f -p .ansible/galaxy-roles -r requirements.yml
      - run: ansible-playbook -u ansible -C playbooks/configure.yml
        env:
          ANSIBLE_FORCE_COLOR: "true" 

In a nutshell, I want to download the venv folder if it exists, run pip install -r requirements.txt to install or update dependencies as necessary, store the venv folder as an artifact, and then continue on to the rest of my work.

I’ll eventually split this out into its own job (maybe called prepare) that runs before everything else so it stays decoupled, but that’s not urgent right now.

When I run actions/download-artifact@v2, I get an error that the artifact does not exist yet:

Error: Unable to find any artifacts for the associated workflow

It seems rather arbitrary to have to run the workflow once with actions/download-artifact@v2 commented out, then run it again with the step restored, just to populate the cache. And since artifacts expire after 90 days, if this build does not run for that long it will presumably start failing again.

I am new to GitHub Actions: what is the design pattern for doing this? I feel like I’m trying to do something fairly simple but keep hitting a roadblock, and there’s no if-no-files-found-style input on actions/download-artifact@v2 (that option exists only on upload-artifact).
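The closest I have found is the generic per-step continue-on-error option, which would keep the job going when the download fails, but that masks every failure on the step, not just a missing artifact:

      - uses: actions/download-artifact@v2
        continue-on-error: true   # the job proceeds even if this step fails
        with:
          name: pip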

How do I try to restore an artifact and ignore errors if it does not already exist?


Technically you can cheat and put:

if: always()

on your later steps to force them to run. This way your first run might technically “fail”, but it will “prime” your fake cache for the second run.
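Roughly, with the steps from your workflow (untested sketch):

      - uses: actions/download-artifact@v2
        with:
          name: pip
      # every step after the download needs the override, otherwise
      # a failed download would skip the rest of the job
      - run: virtualenv venv/
        if: always()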

Formally, GitHub clearly wants you to use actions/cache instead.

Yes, this is the idea, and it was the solution. I was confused about how actions/cache worked: I didn’t understand how inserting it as a single step could store the cache at the end of the run, but apparently it hooks into the whole job and saves the cache when the job completes.

Solution:

---
name: deploy
on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - uses: actions/setup-python@v2
        with:
          python-version: 3.9.5

      - uses: actions/cache@v2
        env:
          cache-name: python
        with:
          key: "${{ runner.os }}-build-${{ env.cache-name }}-${{ hashFiles('requirements.txt') }}"
          restore-keys: |
            ${{ runner.os }}-build-${{ env.cache-name }}
            ${{ runner.os }}-build
            ${{ runner.os }}
          path: |
            ~/.cache/pip
            ./venv/

      - run: pip install virtualenv
      - run: virtualenv venv/
      # 'source' in its own step won't persist to later steps; add the venv to PATH instead
      - run: echo "$PWD/venv/bin" >> "$GITHUB_PATH"
      - run: pip install -r requirements.txt
      - run: ansible --version
      - run: ansible-galaxy install -f -p .ansible/galaxy-roles -r requirements.yml
      - run: ansible-playbook -u ansible --private-key .private/ssh-keys/ansible.key playbooks/configure.yml
        env:
          ANSIBLE_FORCE_COLOR: "true"
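
One refinement worth knowing about: actions/cache exposes a cache-hit output, which is true only on an exact match of the primary key, so the install step can be skipped when nothing changed. This assumes the cache step is given an id (pip-cache is just my name for it):

      - uses: actions/cache@v2
        id: pip-cache   # an id lets later steps read this step's outputs
        with:
          key: "${{ runner.os }}-build-python-${{ hashFiles('requirements.txt') }}"
          path: |
            ~/.cache/pip
            ./venv/

      - run: pip install -r requirements.txt
        if: steps.pip-cache.outputs.cache-hit != 'true'   # skip on an exact cache hit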

Formally you want the cache step early in the job. Specifically, actions/cache relies on a post hook to perform the cache write after the rest of the job has run.
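
An action can declare a post entry point in its action.yml metadata, and the runner queues it to execute after all of the job’s steps have finished. Simplified, actions/cache’s metadata looks roughly like this:

name: 'Cache'
runs:
  using: 'node12'
  main: 'dist/restore/index.js'   # runs where the step appears: restores the cache
  post: 'dist/save/index.js'      # runs after the job's steps: saves the cache
  post-if: 'success()'            # only save when the job succeeded

So placing the cache step early restores the cache before the install steps, and the post hook writes it back at the end of the job.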