Tag-based action metadata

I have a custom Docker action, and within the runs.image section of the metadata file (action.yml) I'm referencing an image on Docker Hub instead of a Dockerfile.

However, the image I'm referencing is the one I build and push during the action's release process. (The reason is that I don't want the build to happen on the user's end; I'd rather the image be pulled and run directly, which is faster overall.)

Now I'm wondering if there's a way to use a dynamic version in this field, e.g. image: "docker://.../image-name:$RELEASE_VERSION". Currently it's hardcoded, so if users reference version v1.1 of my action they would still get the v1 version of the image, which is not what I want.

I'm open to alternative ideas. I also considered a workflow_dispatch-type job that modifies and commits this file and then creates a new release. However, that alternative has too many ifs: a create-release action is no longer officially supported, and I'm not sure whether I can automatically accept the Marketplace publishing ToS during an automated release process, which might force me to write a bespoke release action against the API instead.

As a general rule, if your Action is a Docker container action then the Dockerfile in the repository should be what Docker builds and runs. A direct relationship between the version of the Action (what the consumer is using) and the Docker image (what the Action is executing) is desirable.

If, for some reason, you want to pre-build the image and distribute it via a package registry (e.g. Docker Hub or GitHub Container Registry), you can continue to use the Dockerfile and extend from the applicable version of the distributed image. For example:

Dockerfile:

FROM ghcr.io/username/repository:image-digest

action.yml:

runs:
  using: 'docker'
  image: 'Dockerfile'

You’re then free to update the FROM directive as you please, without changing the Action metadata.

There are always outliers, so this isn't an absolute, but across a lot of Action development I have never encountered a reason to require dynamic action metadata, so I'm inclined to believe there's probably a better path to achieving your goals. If this answer didn't help, please tell me more about what you're trying to achieve and I can provide further insight.

Regarding releases: an Action is usable with just a repository reference. The Marketplace is a place to advertise Actions, but it is not a requirement; any reference to your repository will work. Each of these is valid, independent of whether or not it has an associated release:

uses: org/repository@v2
uses: org/repository@v2.0.0
uses: org/repository@0d888b7601af756fff1ffc9d0d0dca8fcc214f0a
uses: org/repository@main

Thanks for taking the time to write an answer.

Ultimately I’m looking for an approach that allows me to do two things:

  • avoid the container build taking place when an end user uses the action
  • automate the releases of my action without needing to touch files beforehand (except for the fixes/improvements themselves)

For the former, pushing the container to the Docker Hub registry has worked quite well so far (until I needed to make fixes to the action), as it's faster to pull a ~100 MB image than to build it from the Dockerfile.

Because of that earlier decision, the second requirement got complicated, due to the static nature of that specific section of the action.yml file. There are sections that can be dynamic (see composite actions); I had just hoped I was missing some intricate aspect of the action.yml file and that there was an undocumented way to interpolate tag information.
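For reference, this is the kind of interpolation composite actions do allow in their steps; a minimal sketch, where the version input is hypothetical and only there to show the expression syntax:

name: 'Interpolation example'
inputs:
  version:
    description: 'Hypothetical version input'
    default: 'v1.1'
runs:
  using: 'composite'
  steps:
    - run: echo "Would reference image tag ${{ inputs.version }}"
      shell: bash

Nothing equivalent is available for the runs.image field of a Docker container action.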

For the moment I'll just have to accept the additional maintenance burden and manually change the action.yml file before each release to reference the new tag.
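Worst case, I can at least script that substitution; a rough sketch, assuming the hardcoded line ends in a tag of the form :vX.Y (the NEW_TAG value and the regex are illustrative, not tested against my actual file):

# bump the hardcoded image tag in action.yml, then commit and tag the release
NEW_TAG=v1.1
sed -i -E "s|(image: 'docker://.+:)v[0-9.]+'|\1${NEW_TAG}'|" action.yml
git commit -am "Release ${NEW_TAG}"
git tag "${NEW_TAG}"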

I think you might find it helpful to think more about how Docker works: specifically, layers. From an efficiency perspective, the “perfect” Docker image is composed of layers that are sorted based on stability: frequently changing layers are included as late as possible in the image – because when you break the cache of a layer, the cache for all subsequent layers is broken.

I assume that this question is in relation to your flake8-jupyter-notebook Action, so I’ll use that as an example.

FROM fedora:32

RUN dnf install --assumeyes --quiet python3 python3-pip nodejs && dnf clean all
RUN pip3 install flake8

COPY entrypoint.sh /entrypoint.sh
COPY annotate /annotate

ENTRYPOINT ["/entrypoint.sh"]

If you build this image and then change the contents of entrypoint.sh, Docker will not rebuild the first 3 layers: they’ll be loaded from the cache. If you build this image, then change the version of fedora to 33, Docker will rebuild every layer.
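You can see this locally with a quick experiment (assuming the Dockerfile above is in the current directory; note that Docker keys the COPY cache on file contents, so the file has to actually change, and the tag name here is arbitrary):

docker build . --tag flake8-action    # first run: every layer is built
echo '# tweak' >> entrypoint.sh       # change an input of a late layer
docker build . --tag flake8-action    # the first three layers come from the cache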

Therefore, you could build a base Docker image that contains your stable dependencies and push it to a registry, e.g.:

flake8.Dockerfile:

FROM fedora:32

RUN dnf install --assumeyes --quiet python3 python3-pip nodejs && dnf clean all
RUN pip3 install flake8

Then build and push it:

docker build . --file flake8.Dockerfile --tag mhitza/flake8:1.0.0
docker push mhitza/flake8:1.0.0

(Note: in the real world you'd have a Workflow that listens for changes to flake8.Dockerfile and builds and pushes the image – you would not do it manually.)
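Something along these lines would do it; a minimal sketch, where the secret names and the hardcoded tag are assumptions for illustration:

name: Publish base image

on:
  push:
    branches: [main]
    paths:
      - flake8.Dockerfile

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build and push the base image
        env:
          # hypothetical secret names; configure them in the repository settings
          DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
          DOCKERHUB_TOKEN: ${{ secrets.DOCKERHUB_TOKEN }}
        run: |
          echo "$DOCKERHUB_TOKEN" | docker login --username "$DOCKERHUB_USERNAME" --password-stdin
          docker build . --file flake8.Dockerfile --tag mhitza/flake8:1.0.0
          docker push mhitza/flake8:1.0.0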

Then, within your Action's Dockerfile, you extend from the base image and layer your variable contents on top:

FROM mhitza/flake8:1.0.0

COPY entrypoint.sh /entrypoint.sh
COPY annotate /annotate

ENTRYPOINT ["/entrypoint.sh"]

Any time you make changes to annotate you do not need to rebuild the base image; only when new dependencies are needed would you publish a new version of the base image and update the FROM reference within your Action's Dockerfile. There would be no need for dynamic Action metadata or automated version replacement when releasing. The Runner would build the Action image at runtime, which means Docker does the following:

  1. Downloading mhitza/flake8:1.0.0 from the registry (FROM mhitza/flake8:1.0.0)
  2. Adding the files to the image (COPY entrypoint.sh, COPY annotate)
  3. Setting the Entrypoint

You’re now able to decouple the heavy (but stable) part of your Action’s Image from the light (but variable) part of your Action. An added benefit of this approach is that you’re deferring responsibility for downloading (and caching) the base Image to Docker, so an Action Runner with its own Docker cache will do even less work than if you sourced a pre-built image from a registry on each run.

Does that help? If anything is unclear, let me know!


That's true. I've been thinking of the container as a single packaged thing that bundles everything together in one go (a glorified archive), perhaps because that way I can run it unchanged as an action, or pull the image locally and run it without any tweaks. But that isn't actually one of my requirements, and I doubt anyone else has relied on that approach, so I'll break it up per your suggestion, separating the runtime from the code itself.

I won't mark your answer as the solution, as something more specific to the original question might show up in Actions in the future.

Thank you for taking the time to write in-depth replies.
