Using environment secrets with deployment trigger causes infinite deployment loop

I have workflows that use the on: deployment trigger to perform deployment tasks when a deployment is created via the API. These workflows need environment-specific secrets, which I have so far been providing in this format:

  FOO: ${{ secrets[format('{0}_FOO', env.ENVIRONMENT_UPPER)] }}
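
For context, ENVIRONMENT_UPPER is derived from the deployment event. A minimal sketch (the job and step names here are illustrative, not my exact workflow):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-20.04
    steps:
      # Uppercase the environment name so it can prefix the secret name.
      - name: Compute uppercase environment name
        run: echo "ENVIRONMENT_UPPER=$(echo '${{ github.event.deployment.environment }}' | tr '[:lower:]' '[:upper:]')" >> $GITHUB_ENV

      # Look up e.g. PRODUCTION_FOO or STAGING_FOO from repository secrets.
      - name: Deploy
        env:
          FOO: ${{ secrets[format('{0}_FOO', env.ENVIRONMENT_UPPER)] }}
        run: echo "Deploying..."
```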

One limitation of this approach is the maximum of 100 repository secrets, which must be split across all environments. The new environment-scoped secrets are very interesting to me because they raise the limit to 100 secrets per environment. They also allow me to stop storing secrets as [ENV]_FOO and instead do the following, where each environment has a different value for FOO:

on: deployment

jobs:
  deploy:
    runs-on: ubuntu-20.04
    environment:
      name: ${{ github.event.deployment.environment }}
    steps:
      - name: Deploy to ${{ github.event.deployment.environment }}
        env:
          FOO: ${{ secrets.FOO }}
        run: echo "Deploying..."

However, adding an environment to the job seems to create a new deployment, which triggers the workflow again in an infinite loop. (After it had been triggered 10 times, I was eventually able to cancel a workflow fast enough to break the loop.)

How can I use environment-scoped secrets in these deployment workflows without it creating a new deployment? Do I need to refactor my entire process to use the repository_dispatch event instead of deployment?


How do you create a deployment? Through a single workflow? Multiple workflows? An external tool?

If you have a single workflow, then you may want to remove the deployment creation and instead add a job that references the environment directly in your workflow.

If multiple workflows can create deployments, and the deployment itself is a simple use of a single action, you may add a new job to each of those workflows.

If you have more complicated steps, you may want to do it differently. I can think of 2 options:

  • A workflow based on workflow_run: no specific token is needed, but the data you can get from the original workflow is pretty slim.
  • Use of a repository_dispatch: you can pass your parameters as the payload, but you have to use a specific GitHub token (e.g. a personal access token) if you want to create the event from another workflow.
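
As a sketch of the repository_dispatch option (the event type deploy and the payload fields are names I chose, not a convention):

```yaml
# Receiving workflow: reacts to the dispatched event instead of a deployment,
# so referencing the environment here does not re-trigger anything.
on:
  repository_dispatch:
    types: [deploy]

jobs:
  deploy:
    runs-on: ubuntu-latest
    # Environment name comes from the dispatch payload.
    environment:
      name: ${{ github.event.client_payload.environment }}
    steps:
      - name: Deploy
        env:
          FOO: ${{ secrets.FOO }}  # environment-scoped secret
        run: echo "Deploying..."
```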

If you create deployments from the outside, I think transforming this into a repository_dispatch event is the simplest move.
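
From the outside, that could look like the following (a sketch; OWNER/REPO and $TOKEN are placeholders, and the token needs repo scope):

```shell
# Fire a repository_dispatch event instead of creating a deployment directly.
curl -X POST \
  -H "Authorization: token $TOKEN" \
  -H "Accept: application/vnd.github.v3+json" \
  https://api.github.com/repos/OWNER/REPO/dispatches \
  -d '{"event_type":"deploy","client_payload":{"environment":"development"}}'
```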

Anyway, in all these cases the main point is: do not manage deployments yourself; let the environment manage them for you.
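
In other words, something like this (a sketch, assuming a push-triggered deploy to a fixed environment; the URL is a placeholder):

```yaml
# No deployment is created manually: referencing the environment in the job
# makes GitHub create and track the deployment record automatically.
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: development
      url: https://dev.example.com  # optional; shown on the deployment
    steps:
      - name: Deploy
        env:
          FOO: ${{ secrets.FOO }}  # environment-scoped secret
        run: echo "Deploying..."
```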

Just hit this as well.

I couldn’t stop the loop so eventually had to commit a breaking change to the workflow file.

Note this isn’t specifically about environment secrets. Just adding the environment: metadata to a job causes it.

If environments aren't supported in deployment-triggered workflows, then they should at least not cause an infinite deployment loop.

That said, environments go hand in hand with deployments (e.g. the ability to add a protection rule for a prod deployment, or to pass env vars to a deployment), so it seems very strange not to support them for this event type.

For reference, I'm creating a deployment via curl; the example job that goes into a loop is below.

name: Deploy

on: deployment

jobs:
  deploy:
    name: Deploy to development
    if: github.event.deployment.environment == 'development'
    environment:
      name: development
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Deployment pending
        uses: deliverybot/deployment-status@v1
        with:
          state: "pending"
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Deploy
        run: |
          # deployment script here
          sleep 20;
          echo "deployment complete"

      - name: Deployment success
        if: success()
        uses: deliverybot/deployment-status@v1
        with:
          state: "success"
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Deployment failure
        if: failure()
        uses: deliverybot/deployment-status@v1
        with:
          state: "failure"
          token: ${{ secrets.GITHUB_TOKEN }}
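
For completeness, the deployment itself is created with something like this (a sketch; OWNER/REPO and $TOKEN are placeholders):

```shell
# Create a deployment via the REST API; this is what triggers the
# `deployment` event, and the workflow above then loops because its
# `environment:` key creates another deployment.
curl -X POST \
  -H "Authorization: token $TOKEN" \
  -H "Accept: application/vnd.github.v3+json" \
  https://api.github.com/repos/OWNER/REPO/deployments \
  -d '{"ref":"main","environment":"development","auto_merge":false,"required_contexts":[]}'
```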

I am facing exactly the same problem. Have you found a solution to this issue?