How can I increase GitHub Actions concurrency?

Hello! I’m the creator and maintainer of blitz-js/blitz (⚡️ The Fullstack React Framework, built on Next.js), and we use GitHub Actions to run a lot of tests in PRs.

The problem I have is that actions running on one PR prevent actions from running on other PRs. They are marked as “queued” and don’t run until the first PR finishes.

How can I fix this? It’s super painful as the maintainer of a large project :sweat_smile:

Thank you,

Your repository appears to have a lot of workflow jobs that, at a glance, collectively take a while and probably don’t need to run for every commit.

Consider paths-ignore and friends:
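For illustration, a trigger like this skips the workflow for docs-only changes (a sketch; the ignored paths here are placeholders, not taken from your repository):

```yaml
# Sketch: skip this workflow when a PR only touches documentation.
# The ignored paths below are placeholders; adjust for your repository.
on:
  pull_request:
    paths-ignore:
      - "docs/**"
      - "**/*.md"
```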

Almost all of them actually do need to run for all PRs, because we keep docs in a separate repo.

There may be a slight bit of optimization we can do, but nothing to really move the needle.

You might try changing your subscription level for a month and seeing if the higher limits help.

Alternatively, if you want to throw more time and money at the problem, you could set up self-hosted runners.

(Disclaimer: We’re paying for Team, we pay for extra minutes, and we’re using self-hosted runners.)

But I’d really suggest looking for ways you can speed up your workflows. This week, I’m looking into getting one of my actions to only check “changed” files.

We are an open-source project with very limited funding, so I’m hoping for an answer other than “start paying” :slight_smile:

With a complex framework like this, you can’t just run only certain integration tests based on files changed, because one file change can have unexpected effects elsewhere.

Yeah, I get it. In general, I’d hope that one month wouldn’t be terrible.

If you’re up for it, I’d probably be willing to sponsor you for the month to cover it.

(Unrelated, I do plan to send you a PR.)

I’m slowly working on things in this general area (which is why I’m looking at posts here). I reworked my action so that it uses both CPUs on the Windows/Linux runners, which nearly halved its running time. And I’m looking into teaching my action to be intelligent about file changes (at the risk of not being able to perform some of its analysis; everything’s a trade-off).
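One common way to detect changed files in a PR (a sketch of the general approach, not necessarily how any particular action implements it) is to check out the full history and diff the PR branch against its base:

```yaml
# Sketch: list the files changed in a PR so later steps can limit their work.
on: pull_request

jobs:
  changed:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history, so we can diff against the base branch
      - name: List changed files
        run: git diff --name-only "origin/${{ github.base_ref }}...HEAD"
```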

The workflows in your project are very compute-heavy; for example, this Workflow uses more than 3 hours’ worth of compute per run, which would be charged at $2 per run if your project were private. During the last month you’ve used more than $500 worth of Actions minutes (250 runs × 3 hours), and over the lifetime of your project you’ve likely used an order of magnitude more.

GitHub are very gracious in offering our open-source projects free use of Actions, and unfortunately there’s a lot of abuse from bad actors when it comes to free compute. I think the suggestion of jsoref is well worth considering: 3 hours of compute per Pull Request commit is very wasteful – if you can find a strategy to cut that down substantially it’ll be both better for your concurrency and better for the planet. For example, you could have a simplified Workflow that runs on every Pull Request and tests the things likely to fail, and then have a separate Workflow that is only run when a maintainer adds a label to a Pull Request prior to merge for the full test suite.
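As a sketch of that label-gated approach (the `full-ci` label name and the test script are made up for illustration):

```yaml
# Sketch: run the expensive suite only when a maintainer labels the PR.
on:
  pull_request:
    types: [labeled]

jobs:
  full-suite:
    if: github.event.label.name == 'full-ci'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./run-full-test-suite.sh   # placeholder for your project's real entry point
```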

If the limitations of GitHub’s free offering aren’t compatible with the needs of your project, you might find options in project sponsorship from businesses that can offer compute for you to use with self-hosted runners. DigitalOcean sponsor many projects, for example, and there are lots of other companies around doing the same. There’d be a little additional operational overhead but it’s relatively straightforward, and you can always fall back to GitHub’s own runners.

Another option would be to rent some high-resource servers yourself using funds raised from the community; for example, Hetzner can provide a machine with 64 GB of memory and a 1 Gbit/s connection for around $50/month, which would give you more than enough compute to run all your jobs in parallel.

Ultimately, GitHub are in a difficult position because of the abuse, and we’re very lucky that they offer as much as they do. I think if I was hitting the limits, I’d upgrade my organisation to the Team plan or even the Enterprise plan. The total cost (depending on the number of users) is likely to be cheaper (in time and money) than any self-hosted runner option.

All that said: as a general rule, hosted runners are good enough for most use-cases, but when you are pushing the limits – resource usage, execution time, concurrency, etc. – self-hosted runners do become a very attractive option. Personally, I prefer to simplify my workflows, but I’ve worked in organisations where self-hosted runners with clever caching (possible due to owning the runner) have enabled high concurrency and removed all queueing – jobs finish before you have a chance to check the status!


Wow, thanks for the very thorough response. That really helps me understand the landscape here.

One question: you and @jsoref mentioned upgrading to a paid plan — does the paid plan increase concurrency across PRs? I can’t find any information on this on the GitHub website and docs.

I haven’t reached this limit myself, so I can only speak to what’s documented: GitHub state under Actions > Usage limits, billing, and administration that concurrency is per account (in your case, organisation), and that the free plan is limited to 20 concurrent jobs while the enterprise plan is limited to 180 concurrent jobs.

A relatively cheap and simple way to verify would be to create a test repository with a workflow that uses a matrix strategy to trigger 256 jobs and try it while on different plans. If you do move forward with trying different plans then reporting back on the results would be much appreciated as it’ll be helpful to see how Actions perform with heavy Workflows in different plans :slight_smile:
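Something like this throwaway workflow would do it; a 16 × 16 matrix expands to exactly 256 jobs, and a long `sleep` makes it easy to count how many run at once:

```yaml
# Sketch: queue 256 trivial jobs to observe the account's concurrency cap.
on: workflow_dispatch

jobs:
  probe:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        x: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]
        y: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]
    steps:
      - run: sleep 120   # long enough to watch the "in progress" count
```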


Oh sweet, thanks! I wasn’t able to find that page.