How to run a job from another job?

I have a workflow containing a job which tests Java code and builds a Docker image, and then deploys the Docker image and runs integration testing.

Very often I have to re-run the workflow because something went wrong with the integration testing. This can be a deploy failure or, most often, wrong test cases for the new code which has been built.

After correcting the integration tests, I need to re-run this workflow, but it should not be necessary to rebuild the code when a Docker image already exists for this sha…

I would like to have a workflow which only builds the Docker image when it is not already present in GitHub Packages. This is my present workflow:

I would like to have a job called verify which runs build as a step if the Docker image cannot be found for the commit sha being built. When the verify job is finished (either because the Docker image already exists or because it has built one), the job called deploy_dev should run…

Is this possible? How?

@jactor-rises,

You can consider using caches in your workflow.

In the verify job, you can add a step that executes the cache action to check whether an image built for the SHA already exists. If the specified cache exists (cache-hit = true), the cache is restored. Otherwise, execute the step to build the image.


this sounds like a brilliant idea. can you give me an example of how to do this?

@jactor-rises,

Here is a simple demo for reference:

jobs:
  verify:
    name: job verify
    . . .
    steps:
      - name: checkout code
        uses: actions/checkout@v2

      . . .

      - name: cache docker image
        id: cache-image
        uses: actions/cache@v2
        with:
          path: path/to/store/cache
          key: cache-docker-image-${{ github.sha }}

      - name: build docker image
        if: steps.cache-image.outputs.cache-hit != 'true'
        <run this step to build docker image>

  deploy_dev:
    name: job deploy_dev
    needs: verify
    . . .
    steps:
      <run the steps of job deploy_dev>
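The `<run this step to build docker image>` placeholder in the verify job above could be filled in with an ordinary build-and-push step. This is only a sketch: `IMAGE` is a hypothetical environment variable for your image tag, and registry login is omitted.

```yaml
      # Hypothetical build step; IMAGE is assumed to be set as a
      # workflow-level env variable, and docker login has already run.
      - name: build docker image
        if: steps.cache-image.outputs.cache-hit != 'true'
        run: |
          docker build --tag "$IMAGE" .
          docker push "$IMAGE"
```

The `if` condition ensures the build only happens when the cache lookup missed.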

awesome. thx!!!

but why do I need path/to/store/cache instead of using GitHub Packages as a cache look-up?

@jactor-rises,

The path is required and has two uses when using the cache action in a workflow:

  1. When the specified key does not match an existing cache (cache-hit != true), the “post cache” step of the cache action creates a new cache under that key, saving the files from the specified path (provided the path exists and is not empty).
  2. When the specified key matches an existing cache (cache-hit = true), the cache action restores the cached files into the specified path so that subsequent steps can use them.

NOTE:
Generally, a workflow does not reference cache files in GitHub’s storage directly when steps need them. The cache action looks up the required cache by the specified key and restores the files from GitHub’s storage into the specified path on the runner machine; subsequent steps then use these files from local storage.

this implementation of a cached value does not work when the cache (GitHub package) is on another machine, without me implementing and maintaining a cache key in my own workspace…

for easy caching, a url should be all that is needed:
http code 401 or 403 (no access) -> fail workflow
http code 200 or 202 -> cached
other codes -> not cached
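The status-code rules above can be sketched as a small shell helper. The registry URL in the commented probe is hypothetical; adjust it to your own package path.

```shell
# Sketch: map an HTTP status code to a cache decision, mirroring the
# proposed rules (401/403 -> fail, 200/202 -> cached, otherwise not cached).
cache_status() {
  case "$1" in
    401|403) echo "fail" ;;       # no access: fail the workflow
    200|202) echo "cached" ;;     # image exists: skip the build
    *)       echo "not-cached" ;; # anything else: build the image
  esac
}

# Hypothetical probe against a registry (requires curl and a token):
# code=$(curl -s -o /dev/null -w '%{http_code}' \
#   -H "Authorization: Bearer $GITHUB_TOKEN" \
#   "https://docker.pkg.github.com/v2/OWNER/REPO/IMAGE/manifests/$GITHUB_SHA")
# cache_status "$code"
```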

@jactor-rises,

If all the jobs in your workflow run on GitHub-hosted runners, each job executes in a fresh instance of the virtual machine, so you need to use the cache action to restore the required dependencies in each job.

When building, testing, and deploying the project, the dependencies must exist in the project directory, so it is required to restore/download them to the runner machine.

If every step that needs the dependencies fetched them directly from GitHub’s storage, this could cause too many requests and overload the server, especially when multiple steps in a job generate requests within a short time.

Using the cache action to restore the dependencies to local storage on the runner machine first lets the subsequent steps access the cached dependencies more quickly and reliably.

Even if you set all the jobs to run on the same self-hosted runner machine, each job generally executes in a different working directory, and the restored dependencies may be cleaned up after the job completes.
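For example, a second job on a fresh runner can restore the files saved by the first job simply by running the cache action again with the same key. This is a sketch; the path and key are placeholders matching the earlier demo.

```yaml
  deploy_dev:
    needs: verify
    runs-on: ubuntu-latest
    steps:
      - name: restore cached files
        uses: actions/cache@v2
        with:
          path: path/to/store/cache                  # same path as in verify
          key: cache-docker-image-${{ github.sha }}  # same key -> cache-hit
      # subsequent steps can now read files from path/to/store/cache
```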

just a follow up…
my cache key will be the name and version of the docker image I will build for this workflow
from the cache GitHub page:

  - uses: actions/cache@v2
    with:
      path: | 
        path/to/dependencies
        some/other/dependencies 
      key: ${{ runner.os }}-${{ hashFiles('**/lockfiles') }}

so my key will be

${{ IMAGE }}

which is a global env variable in my workflow, and I would have to make a path/to/dependencies which will contain the Docker image being built (but it is stored in GitHub Packages). I really do not see why I need to store this Docker image on my runner in addition to having it stored in GitHub Packages…
how do I give the cache my local docker image on my self-hosted runner? I am using regular docker binaries to build my image with docker build --tag ${IMAGE} .
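One way to hand a locally built image to the cache action is to export it to a file with `docker save` and reload it with `docker load` on a cache hit. This is only a sketch: the `image-cache` directory is a hypothetical choice, and `IMAGE` is assumed to be a workflow-level env variable.

```yaml
      - name: cache docker image tarball
        id: cache-image
        uses: actions/cache@v2
        with:
          path: image-cache        # hypothetical directory for the tarball
          key: ${{ env.IMAGE }}    # name and version of the image as the key

      - name: build and export image
        if: steps.cache-image.outputs.cache-hit != 'true'
        run: |
          docker build --tag "$IMAGE" .
          mkdir -p image-cache
          docker save "$IMAGE" -o image-cache/image.tar

      - name: load cached image
        if: steps.cache-image.outputs.cache-hit == 'true'
        run: docker load -i image-cache/image.tar
```

The tarball is what gets cached; `docker load` puts the image back into the local Docker daemon on a later run.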

I just want to point out that my in-house docker action navikt/bidrag-docker/exists@v1 works on my public repository, but I have a private repository running on a self-hosted runner where this action does not work… I am now looking at how to implement caching with cache@v2

@jactor-rises,

Here are a few points you need to know:

  1. On GitHub, caches are not saved as GitHub Packages; caches are different from packages. There is separate storage for caches on GitHub (limited to 5 GB per repository).

  2. The cache action can only restore files that were saved as caches via the cache action. You can’t use this action to restore files from GitHub Packages.

  3. If you use a self-hosted runner to run Docker container actions or Docker commands, you should make sure that the required Docker environment has been set up on the runner machine, i.e. that Docker is installed.
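The third point can be checked up front in a workflow step. `require_cmd` is a hypothetical helper, not part of any action; it simply fails fast if a command is missing on the runner.

```shell
# Sketch: fail fast if a required tool is missing on a self-hosted runner.
require_cmd() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }
}

# Usage in a workflow step, before any docker build/push commands:
# require_cmd docker || exit 1
```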