Trigger a generated GitHub Actions workflow as part of the workflow that generates it

Short: Is it possible to run a GitHub Action that was partially generated by a parent GitHub Action, for CI purposes?

Long: I have a use case where one repository is a generator of repositories. Specifically, it uses Cookiecutter, which applies Jinja templating to create a new skeleton repository.

I want the parent (generator) to use a GitHub Action to test all the generator options, and then I would like the generated skeleton's GitHub Action, which contains the CI components for the output repository, to trigger. This ensures not only that I set up the Jinja correctly so the skeleton generates, but also that the output skeleton is actually usable.

Ideally, I would like to know if it’s possible to feed one GitHub Action into another after it has been generated. However, I understand that this would be a nightmare from an environment/OS/spawning standpoint, as you could theoretically chain them indefinitely and end up with a huge matrix of build environments/OSes.

Given the above, there are a couple of alternatives that would work well enough, which I want to float to see if the community has any thoughts or ideas on how to do this.

One option would be to execute only the specific jobs of the output workflow as part of the parent: effectively reading the partial that contains the jobs and then executing them in the same parent environment. This relies on manual work on my end, but that is a-ok since it gets the job done.

Another option would be to create a custom Action at the generator level which then executes a lower level workflow in a container. I’m not sure how viable this is or if someone has already done this before.

The last option would be emulating a GitHub Action run. I know of one tool which can do this, but I don’t think it’s quite what I need.

In theory, it would be possible to manually read the YAML file of the output workflow and parse it, since I know what the output should be, but I want to avoid manually reading YAML if possible in case the schema changes in the future (and I would have to parse all the workflow variables by hand). I’m also interested to hear if the community has other ideas for approaches.


  1. A workflow can only run in the repository where the workflow YAML file is located. A repository can’t run a workflow defined in another repository.

  2. Currently, if we want to trigger a workflow run, we need to make some event occur in the repository where the workflow YAML file is located.

  3. Currently, the only method to trigger a workflow in a repository from another repository or from outside GitHub is the repository_dispatch event.
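For reference, a repository_dispatch receiver looks roughly like this; the workflow name, event type, and payload field are placeholders of my own, not anything prescribed:

```yaml
# Hypothetical listener workflow (.github/workflows/on-dispatch.yml)
name: Receive dispatch
on:
  repository_dispatch:
    types: [skeleton-ci-result]   # placeholder event type

jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      # The sender's JSON payload arrives under github.event.client_payload
      - run: echo "Result was ${{ github.event.client_payload.result }}"
```

The event is then fired from anywhere by a `POST /repos/{owner}/{repo}/dispatches` API call carrying `{"event_type": "skeleton-ci-result"}` and authenticated with a personal access token.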

This is all contained within the same repository.

The basic steps of what I want the top level workflow (generator) to do are the following:

  1. Provision the OS, Env, and Generator conditions
  2. Generate the skeletal repository in a new directory
  3. Step into the created repository
  4. Run the workflow which was generated
  5. Report if everything worked
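The five steps above might be sketched as a single workflow like the following. This is only an illustration: step 4 is exactly the open question, and the `nektos/act` emulation shown there is one hypothetical way to fill it in, not something I have verified.

```yaml
# Hypothetical generator workflow (.github/workflows/test-generator.yml)
name: Test generator output
on: [push, pull_request]

jobs:
  generate-and-test:
    runs-on: ubuntu-latest
    steps:
      # 1. Provision the OS, env, and generator conditions
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.x"
      - run: pip install cookiecutter

      # 2. Generate the skeletal repository in a new directory
      - run: cookiecutter . --no-input --output-dir generated

      # 3-4. Step into the created repository and run its workflow.
      # There is no built-in way to execute the generated workflow here;
      # an emulator such as nektos/act would be one possible stand-in,
      # assuming it is installed on the runner.
      - run: |
          cd generated/*/
          act push

      # 5. Report if everything worked (the job fails if any step above fails)
      - run: echo "Generated skeleton CI passed"
```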

There is no actual new remote created on GitHub; it’s all self-contained to make sure that when we publish the generator, the product it creates will work for the person who uses it.

If you do not have an actual repository on GitHub to execute the workflow, please NOTE:

  1. You can’t directly put the workflow file on a local machine to run. GitHub has some built-in features/services that interpret/translate the YAML file and then tell the runner machine what commands need to be executed. So, you can’t run the workflow from a repository which is not located on GitHub, unless you can almost completely simulate the related features/services of GitHub.

  2. Simulating the related features/services of GitHub to interpret/translate the YAML file is quite complex. Translating the YAML file into executable scripts on the machine yourself is not easy either.

  3. If some secrets are needed in the workflow, there is almost no way to simulate creating and storing secrets on a GitHub repository. Similarly, it’s hard to simulate the contexts and GitHub’s default environment variables for the workflow.

  4. There are also some other technical barriers.

My suggestion:
Use the generator workflow to create a repository (skeletal in your case) with some source files, and actually push it to GitHub to test the generated workflow.

  1. When pushing the skeletal repository to GitHub, you need to use a personal access token you create for the extra required permissions, rather than the GITHUB_TOKEN, because events triggered using the GITHUB_TOKEN will not create a new workflow run.

  2. In the generator repository, you can set up a workflow that runs on the repository_dispatch event.

  3. In the skeletal repository, you need to add an additional job at the bottom of the generated workflow. This job waits for all the previous jobs in the same workflow to complete, then fetches their results and passes them back to the generator repository by creating a repository dispatch event, which triggers the workflow that runs on repository_dispatch (as mentioned in item 2) in the generator repository. Here you also need to use a personal access token you create.

  4. After having tested the workflow in the skeletal repository, you can delete the skeletal repository if you no longer need it.
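A rough sketch of the trailing job from item 3, assuming a `PAT` secret, placeholder job names (`build`, `test`), placeholder repository names, and a placeholder event type:

```yaml
# Hypothetical job appended to the generated workflow in the skeletal repo
  report-back:
    needs: [build, test]   # wait on the real CI jobs (names are placeholders)
    if: always()           # run even when a previous job failed
    runs-on: ubuntu-latest
    steps:
      # Fire a repository_dispatch event at the generator repository.
      # OWNER/GENERATOR-REPO and the PAT secret are placeholders.
      - run: |
          curl -X POST \
            -H "Authorization: token ${{ secrets.PAT }}" \
            -H "Accept: application/vnd.github+json" \
            https://api.github.com/repos/OWNER/GENERATOR-REPO/dispatches \
            -d '{"event_type": "skeleton-ci-result",
                 "client_payload": {"result": "${{ needs.test.result }}"}}'
```

The `needs.<job>.result` context is how the previous jobs’ outcomes get fetched and forwarded in the payload.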

I figured there were some pretty heavy limitations based on the way the workflows are composed in the repository, which is why I also asked about the alternative approaches that would suffice for testing purposes, even if they could not perfectly emulate everything.

The Generator itself exists as a repository, and we are trying to add GitHub Actions as a replacement for our existing CI methods. I was hoping it might be possible to read in a job from a file mid-run, or at least execute the instructions within it, but I knew that was a bit of a long shot (this would be tech related to issues I have seen where jobs are re-used across workflows without having to copy the full instructions into every workflow).

I like the idea of pushing the generated skeleton to another repository. I think it would be possible to leverage the API to effectively query its output and then pass/fail the workflow on the Generator side, although it would be a bit complicated. The one complication I have with that, though, is the fact that this is CI, which relies heavily on starting from scratch. The ideal steps there would be:

  1. Create a new empty repository through the API
  2. Push the skeleton to the new repository
  3. Query the repository for the actions to run and see if they all pass
  4. Delete the repository
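Those four steps could plausibly be scripted with the `gh` CLI inside the generator workflow. A hedged sketch, where the repository names and the `PAT` secret are placeholders and I have not verified the exact token scopes required:

```yaml
# Hypothetical create/test/delete cycle in the generator repository
  disposable-repo-test:
    runs-on: ubuntu-latest
    env:
      GH_TOKEN: ${{ secrets.PAT }}   # PAT; deleting repos needs extra scope
    steps:
      - uses: actions/checkout@v4
      - run: |
          # 1. Create a new empty repository through the API
          gh repo create OWNER/skeleton-ci-scratch --private

          # 2. Push the generated skeleton to the new repository
          cd generated-skeleton
          git init -b main && git add -A && git commit -m "skeleton"
          git remote add origin \
            "https://x-access-token:${GH_TOKEN}@github.com/OWNER/skeleton-ci-scratch.git"
          git push -u origin main

          # 3. Query the repository for the workflow runs and check results
          gh run list -R OWNER/skeleton-ci-scratch

          # 4. Delete the repository
          gh repo delete OWNER/skeleton-ci-scratch --yes
```

Step 3 would still need some polling logic to wait for the runs to finish before deciding pass/fail; `gh run list` only shows their current status.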

I’m not sure whether you can do all of that with the GitHub API alone. The alternative would be to push to existing repositories, one per Generator CI condition, and then effectively reset the repository through force pushes or something after every CI run. I imagine that could be done with the API, but I feel it sort of breaks the spirit of Git in the first place.

I do appreciate the suggestions, and it’s given me some ideas. I knew that an ideal, or even easy good-enough, solution was a bit of a long shot, but it was something I wanted to ask about in case there was some tech, or an already-built Action, I missed in my search along the way. Feel free to share any other thoughts on this matter, but I’ll mark your previous answer as the “solution” since it’s probably the closest thing this use case will get.


this would be tech related to issues I have seen where jobs are re-used across workflows without having to copy the full instructions in every workflow.

This is quite similar to the Template feature for YAML pipelines on Azure Pipelines, which includes step templates, job templates, variable templates, etc.

However, GitHub Actions currently does not support a template feature. I noticed some users have reported related feature requests before. You can also report the feature request here directly. That will allow you to interact with the appropriate engineering team directly, and make it more convenient for them to collect and categorize your suggestions.

This is quite similar to the Template feature for YAML Pipelines on Azure Pipelines.

I knew that was possible on Azure Pipelines; I’ve used it there before. However, this is different because the template it would be reading is generated at run time, in the middle of the parent workflow, whereas the Azure Pipelines tech relies on reading existing files from some source. My suspicion is that Azure reads those directives as a pre-processor, then pieces together what it needs to run before it actually tries to deploy any runners. I could be wrong about this, but I suspect this is a different enough request.