Image/Artifact Promotion Between Environments

I have read a lot of articles and blog posts, and completed the https://lab.github.com/githubtraining/github-actions:-continuous-delivery-with-aws course, but I am still confused about promoting a Docker image from a staging environment to a production environment.

From what I can tell, people generally rebuild the Docker image for each environment, and thus technically deploy different artifacts to staging and production. Normally, I would think that a Docker image should be built once, deployed to a staging environment, and then that same image promoted/deployed to the production environment. That way it is known that the images are truly the same.

Is this something that is encouraged when using GitHub Actions? Is there a way to promote/deploy an already existing image to a production environment?
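For illustration, the kind of flow I have in mind would look roughly like this (the registry name, tags, and the `workflow_dispatch` trigger are just placeholders I made up; registry login steps are omitted):

```yaml
# Staging workflow (sketch): build once, push the image tagged with the commit SHA,
# then deploy that exact tag to staging (deploy step not shown).
#   docker build -t registry.example.com/my-app:${{ github.sha }} .
#   docker push registry.example.com/my-app:${{ github.sha }}

# Separate promotion workflow (sketch): no rebuild, just re-tag the same image.
name: promote-to-production
on:
  workflow_dispatch:
    inputs:
      image_tag:
        description: 'Tag of the already-tested image (e.g. the commit SHA)'
        required: true
jobs:
  promote:
    runs-on: ubuntu-latest
    steps:
      - name: Re-tag the tested image for production
        run: |
          docker pull registry.example.com/my-app:${{ github.event.inputs.image_tag }}
          docker tag registry.example.com/my-app:${{ github.event.inputs.image_tag }} registry.example.com/my-app:production
          docker push registry.example.com/my-app:production
```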

@madiganz ,

You can try using caches to share data between different workflows.

In your case, you could cache the built image and share it between environments across your workflows.
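For example, a rough sketch of that idea (the paths and cache key below are just examples, not required names):

```yaml
# In the workflow that builds the image: save it to a tarball and cache it,
# keyed on the commit SHA so another workflow run can look it up later.
- name: Build image and save it to a tarball
  run: |
    docker build -t my-app:${{ github.sha }} .
    docker save -o image.tar my-app:${{ github.sha }}
- name: Cache the image tarball
  uses: actions/cache@v2
  with:
    path: image.tar
    key: docker-image-${{ github.sha }}

# In the workflow that deploys to the next environment: restore the same key
# and load the image instead of rebuilding it.
- name: Restore the cached image tarball
  uses: actions/cache@v2
  with:
    path: image.tar
    key: docker-image-${{ github.sha }}
- name: Load the image
  run: docker load -i image.tar
```

Note that caches are branch-scoped: a workflow run can read caches created on its own branch, its base branch, or the default branch, so the key must be reachable from the branch that needs it.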

@brightran is this the recommended practice when using something like the Gitflow workflow, or is the recommended practice to create different artifacts for each environment (branch)?

@madiganz ,

Actually, I don’t think there is a widely recommended or standard practice for how to create artifacts for different environments.
How you do it depends on your requirements.

For example, when you build your program for different environments, the dependent libraries or tools may differ (such as different OSs or different supported tool versions), so you may need to create a build artifact in each environment. That way you can test and deploy your program in each environment in the subsequent steps.

When you build your program in a common environment, with no environment-specific configuration, you may just need to create one build artifact for the subsequent testing and deployment steps.
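For example, a per-environment build could use a matrix, producing one build artifact per OS (the matrix values, build command, and artifact names below are only placeholders):

```yaml
# Sketch: one build job per environment, each uploading its own artifact.
jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v2
      - name: Build
        shell: bash
        run: ./build.sh          # placeholder for your real build command
      - name: Upload the per-environment artifact
        uses: actions/upload-artifact@v2
        with:
          name: my-app-${{ matrix.os }}
          path: dist/
```

In the common-environment case you would keep a single build job and upload one artifact instead of a matrix of them.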

Currently, we have two methods to pass/share data between different processes (jobs or workflow runs) in GitHub Actions workflows:

1. Artifacts – can be used to share data between different jobs in the same workflow run (see the sketch below this list).

2. Caches – can be used to share data between different jobs and between different workflow runs.
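For example, a minimal sketch of option 1, where an artifact built in one job is consumed by a test job in the same workflow run (the job names, scripts, and paths are placeholders):

```yaml
# Sketch: the "build" job uploads an artifact; the "test" job in the same run downloads it.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build
        run: ./build.sh                    # placeholder build step
      - name: Upload the build output
        uses: actions/upload-artifact@v2
        with:
          name: build-output
          path: dist/
  test:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Download the build output from the build job
        uses: actions/download-artifact@v2
        with:
          name: build-output
          path: dist/
      - name: Test
        run: ./test.sh                     # placeholder test step
```

Option 2 (caches) works the same way as the caching sketch earlier in this thread, except the same key can also be restored by later workflow runs.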

Caching the Docker image seems like it would generally work for my use case. However, it would only stay in the cache for 7 days, so if our testing on the image lasted for 8 days, the built image would be gone.

I have also looked into using Pull Request labels to specify the Docker image tag. This seems to work okay, but the problem is that the build runs for the head branch and not the base branch. So if I merged release into master, this job would run for release, which has access to the Docker image tag, but a different job would actually test the code in master. I believe this could result in a deployment even when the master build workflow fails.

@madiganz ,

GitHub will remove any cache entries that have not been accessed in over 7 days.

This does not mean caches are only retained for 7 days. It only means that a cache which has not been accessed in the past 7 days is removed.

As long as you (or your workflows) have accessed the cache within the last 7 days, it will be retained. Of course, the total size of all caches in your repository should not exceed 5 GB.
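If you need a cache to survive a long test cycle, one option (not an official recommendation, just a sketch) is a scheduled workflow that restores the cache periodically, since a restore counts as an access and keeps the entry from being evicted. Note that scheduled workflows run on the default branch, so this only reaches caches visible from that branch, and the key below is a hypothetical fixed example:

```yaml
# Sketch: periodically "touch" a cache entry so it is accessed within every 7-day window.
name: keep-cache-warm
on:
  schedule:
    - cron: '0 0 * * 1,4'   # e.g. twice a week, comfortably inside the 7-day window
jobs:
  touch-cache:
    runs-on: ubuntu-latest
    steps:
      - name: Restore (access) the cached image tarball so it is not evicted
        uses: actions/cache@v2
        with:
          path: image.tar
          key: docker-image-under-test   # hypothetical cache key
```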