The workflows in your project are very compute-heavy: this Workflow, for example, uses more than 3 hours' worth of compute per run, which would be charged at $2 per run if your project were private. During the last month you've used more than $500 worth of Actions minutes (250 runs × 3 hours), and over the lifetime of your project you've likely used an order of magnitude more.
GitHub are very gracious in offering our open-source projects free use of Actions, and unfortunately there's a lot of abuse of free compute by bad actors. I think jsoref's suggestion is well worth considering: 3 hours of compute per Pull Request commit is very wasteful – if you can find a strategy to cut that down substantially, it'll be better both for your concurrency and for the planet. For example, you could have a simplified Workflow that runs on every Pull Request and tests the things most likely to fail, and a separate Workflow that runs the full test suite only when a maintainer adds a label to the Pull Request prior to merge.
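A sketch of that label gate – the workflow name, job name, and the `full-ci` label below are placeholders I've made up, not anything from your repo:

```yaml
# full-suite.yml – runs only when a maintainer labels the Pull Request.
# "full-ci" is a placeholder label name; pick whatever suits your project.
name: full-suite

on:
  pull_request:
    types: [labeled]

jobs:
  full-tests:
    # Guard so only the chosen label triggers the expensive suite,
    # not every label event on the PR
    if: github.event.label.name == 'full-ci'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder for your real full test suite
      - run: make test-full
```

The simplified Workflow would keep the usual `on: [pull_request]` trigger and run only the quick checks.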
If the limitations of GitHub's free offering aren't compatible with your project's needs, you might find sponsorship from businesses that can offer compute for you to use with self-hosted runners. DigitalOcean sponsor many projects, for example, and plenty of other companies do the same. There'd be a little additional operational overhead, but it's relatively straightforward, and you can always fall back to GitHub's own runners.
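That fallback can be as small as one expression in `runs-on` – a sketch, where `RUNNER_POOL` is a hypothetical repository variable you'd define yourself:

```yaml
jobs:
  test:
    # If the RUNNER_POOL variable is set (e.g. to "self-hosted"),
    # jobs go to your sponsored machines; if it's unset or the
    # machines go away, the expression falls back to GitHub's
    # hosted ubuntu-latest runners.
    runs-on: ${{ vars.RUNNER_POOL || 'ubuntu-latest' }}
    steps:
      - uses: actions/checkout@v4
      # Placeholder for your real test command
      - run: make test
```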
Another option would be to rent some high-resource servers yourself using funds raised from the community. Hetzner, for example, can provide a machine with 64 GB of memory and a 1 Gbit/s connection for around $50/month, which would be more than enough compute to run all your jobs in parallel.
Ultimately, GitHub are in a difficult position because of the abuse, and we're very lucky that they offer as much as they do. If I were hitting the limits, I'd upgrade my organisation to the Team plan or even the Enterprise plan. Depending on the number of users, the total cost is likely to be lower (in both time and money) than any self-hosted runner option.
All that said: as a general rule, hosted runners are good enough for most use cases, but when you're pushing the limits – resource usage, execution time, concurrency, etc. – self-hosted runners become a very attractive option. Personally, I prefer to simplify my workflows, but I've worked in organisations where self-hosted runners with clever caching (possible because you own the runner) enabled high concurrency and removed all queueing – jobs finished before you had a chance to check their status!
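To illustrate that caching point – because a self-hosted runner's disk persists between jobs, a plain local directory can serve as a warm cache with no upload or download step (the runner labels and cache path here are placeholders of my own, not a real setup):

```yaml
jobs:
  build:
    # "linux" is a placeholder label you'd assign when registering the runner
    runs-on: [self-hosted, linux]
    steps:
      - uses: actions/checkout@v4
      # /opt/ci-cache is a hypothetical directory on the runner's own
      # disk; unlike actions/cache, nothing is transferred over the
      # network, so warm builds start immediately.
      - run: make build CACHE_DIR=/opt/ci-cache
```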