How to limit concurrent workflow runs

I have created an action on exactly this principle: GitHub - robyoung/throttle: a GitHub action for serializing jobs across workflow runs (and more) with Google Cloud Storage.
It automatically unlocks when the job finishes whether it was successful or not.

1 Like

Any news on this? In my particular case, I cannot have two workflows running concurrently on my self-hosted machine, since the jobs need access to the GPU, and if two jobs access the GPU at once I get a crash.

What’s worse is that I’ve noticed that if you have two workflows A and B, with jobs A-j1/A-j2 and B-j1/B-j2, you may end up with the following scenario:

  1. Job A-j1 executes
  2. Job B-j1 executes
  3. Job A-j2 executes

This completely breaks my workflows, since both workflows are working in the same $GITHUB_WORKSPACE.

2 Likes

In my case I need this because we use Terraform for our AWS infrastructure, and two builds cannot run at once; if they do, one fails with `Error acquiring the state lock`.

As previously pointed out by @sue445 this is very easy on GitLab with resource_group
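For comparison, GitLab’s `resource_group` is a one-line, job-level addition; a minimal sketch (the `deploy` job name and Terraform command are illustrative):

```yaml
# .gitlab-ci.yml — all jobs sharing the same resource_group name run
# one at a time, queuing behind each other rather than being canceled.
deploy:
  script:
    - terraform apply -auto-approve
  resource_group: terraform-state
```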

Any official update from the team?
Is this feature on the backlog?

We have recently shipped a feature that enables you to configure workflow-level concurrency for some scenarios: GitHub Actions: Limit workflow run or job concurrency - GitHub Changelog.
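For anyone landing here, a minimal sketch of the shipped syntax (the group name is illustrative):

```yaml
# Top level of a workflow file. Runs in the same concurrency group
# execute one at a time; with cancel-in-progress, a newly triggered
# run cancels the in-progress one instead of queuing behind it.
on: push
concurrency:
  group: my-deploy-group   # illustrative name
  cancel-in-progress: true
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo "only one run of this group at a time"
```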

11 Likes

Hey @chrispat

Is it possible with this new feature to apply the concurrency limit ONLY at the PR level, but not for pushes to master?

Also, is it possible to have more than one pending job in the queue? Currently, if another run is queued, the previously pending one gets canceled. This is problematic for us.

Thanks

You could limit concurrency only in PRs through some trickery with expressions. Because our expressions are truthy, something like `concurrency: ${{ github.head_ref || format('{0}-{1}', github.ref, github.run_number) }}` should give you what you want.
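In context, that expression sits at the top level of the workflow file; a sketch of the PR-only behavior described above, using `format()` (GitHub’s expression function for building strings):

```yaml
# github.head_ref is only set for pull_request events, so PR runs share
# a group per branch (and cancel each other), while pushes to master get
# a unique group per run and are therefore never queued or canceled.
concurrency:
  group: ${{ github.head_ref || format('{0}-{1}', github.ref, github.run_number) }}
  cancel-in-progress: true
```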

At the moment we can’t have more than one in the queue but could you give me an example of your scenario for that?

Hey there,

It would be nice if these examples were provided in the GitHub docs.

At the moment we can’t have more than one in the queue but could you give me an example of your scenario for that?

We have a few pipelines that send out notification alerts when a pipeline fails or gets canceled. The current cancellation behavior is problematic for us for that reason.

Also, these cancellations trigger email alerts from GitHub Actions, which is also not ideal.

1 Like

At the moment we can’t have more than one in the queue but could you give me an example of your scenario for that?

Chiming in here: our example is based on testing. We are severely limited in our hosting partner’s concurrent API calls and only have a single test site.

Therefore, we want to “stagger” the various tests so they run one at a time, with all PRs in that concurrency group patiently waiting for the one before them to finish.

That does mean (for example, with Dependabot) there could be a PR storm, but all of them still need to be processed (not just “the last one”).
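A sketch of what this scenario calls for: omitting `cancel-in-progress` (it defaults to false) makes runs queue instead of canceling the in-progress one, but, as noted above, GitHub currently keeps at most one pending run per group, so in a PR storm only the newest pending run survives the queue:

```yaml
# One shared group across all PRs so test runs execute one at a time
# against the single test site. The in-progress run is never killed,
# but only the most recently queued pending run stays in the queue.
concurrency:
  group: shared-test-site   # illustrative group name
```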

1 Like