Use the same container in multiple jobs

I have a setup where I have multiple jobs running with the same container image, like the example below.

jobs:
    job1:
        runs-on: ubuntu-latest
        container:
            image: user/custom-image
        steps:
        - uses: actions/checkout@v1
        - run: ./script1
          shell: bash
    job2:
        needs: job1
        runs-on: ubuntu-latest
        container:
            image: user/custom-image
        steps:
        - uses: actions/checkout@v1
        - run: ./script2
          shell: bash

In this example the image is downloaded from Docker Hub for every job that runs. Since the real implementation has many more steps and the image is quite large, this adds significant time to the pipeline.

Is there any way to cache the container image on the runner, or to just run it locally from the runner, instead of pulling it again for every job?


No - since each job runs on a different runner, each runner needs to download the image.

If you ran a single job with multiple steps, the image would only be downloaded once. But each job gets its own virtual machine, and each machine needs its own copy of the image, so each of them has to download it.
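For example, a combined job along these lines (an untested sketch that reuses the image and script names from your example; the job name is arbitrary) would only pull the image once per run:

jobs:
    build:
        runs-on: ubuntu-latest
        container:
            image: user/custom-image
        # All steps below run in the same container instance,
        # so user/custom-image is pulled only once for this job.
        steps:
        - uses: actions/checkout@v1
        - run: ./script1
          shell: bash
        - run: ./script2
          shell: bash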


Thank you for the reply!
Is there any plan to change or improve this, or to allow running multiple jobs on the same virtual machine?

This is the definition of a job - it’s isolated from other jobs, and gets a fresh virtual machine instance every time for security and repeatability.  There are no plans to change this.

It sounds like you may want different steps within the same job?

Hello,

I wanted a separate job because, after I release, I make some curl requests to the website, and they sometimes fail.

If the release and the curl checks were in the same job, we would see a false negative status, whereas I want that granularity.
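For illustration, in a combined layout like the sketch below (hypothetical script name and URL, standing in for my real release step and checks), a flaky curl step would mark the whole release job as failed:

jobs:
    release-and-verify:
        runs-on: ubuntu-latest
        container:
            image: user/custom-image
        steps:
        - uses: actions/checkout@v1
        - run: ./release-script                  # placeholder for the real release step
          shell: bash
        - run: curl --fail https://example.com/  # a flaky check here fails the whole job
          shell: bash

With separate jobs, the release shows green even when the follow-up checks are flaky, which is the granularity I'm after.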
