Parallelism in Self-Hosted Runners

Hi everyone,

We’re just starting to use GH Actions to enable some continuous deployment to an AWS ECS Fargate cluster. We deploy a Node container whose build time is eating into our minutes, so we’ve deployed self-hosted runners to give the build a bit more horsepower, which cut the build time by about a third. (As a quick aside, I have to commend the GH folks here: setting up the self-hosted runners was super straightforward and worked basically the first time. Kudos!!!)

It seems the default config outlined here sets up the runner to run a single job at a time. I was wondering if there is a configuration out there that would allow the runner service to take more than one job off the queue at once, and thus get a bit of easy-to-stand-up vertical scaling. If not, is this, or something like a configuration that is more conducive to AWS Auto Scaling Groups where the instances are more ephemeral, on the roadmap?

Thanks!

-James

Hi @james-m ,

If you want to use self-hosted runners to run jobs in parallel, or even run multiple workflows in parallel, I recommend adding multiple self-hosted runners.

There are several notes as reference:

  • One self-hosted runner can only run one job at a time; when no runners are idle, subsequent jobs stay queued until a runner becomes available.
  • If there are enough available self-hosted runners for a workflow run, each job will be assigned its own runner and the jobs will run in parallel.
  • When multiple workflows are running and there are more workflows than available self-hosted runners, each workflow will be assigned only one self-hosted runner.
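In practice this means a single machine can pick up multiple jobs at once only if it registers multiple runner instances, one per directory. A minimal sketch, assuming a Linux host; the repo URL, runner names, and RUNNER_TOKEN are placeholders, and the download/extract step follows the standard self-hosted runner setup instructions:

```shell
# Register two runner instances on one machine so it can take two jobs
# off the queue concurrently. Each instance needs its own directory and
# a unique --name.
for i in 1 2; do
  mkdir -p "$HOME/actions-runner-$i"
  cd "$HOME/actions-runner-$i"
  # (download and extract the runner package here, per the standard
  #  self-hosted runner setup instructions)
  ./config.sh --url https://github.com/OWNER/REPO \
              --token "$RUNNER_TOKEN" \
              --name "runner-$i" \
              --unattended
  sudo ./svc.sh install && sudo ./svc.sh start   # run each as a service
done
```

Each registered instance shows up as a separate runner in the repo settings, so the scheduler treats them as independent workers even though they share a host (and therefore share its CPU, memory, and disk).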

@james-m I’m using CircleCI to run test cases in parallel. Now I would like to move from CircleCI to GitHub Actions.

  1. Can I use my existing CircleCI config.yml for GitHub Actions?

  2. How do I convert/make my existing config.yml compatible with GitHub Actions?

  3. Does GitHub have a built-in script which automatically provisions runners in the client’s AWS/Azure/GCP account on an on-demand basis (spot/on-demand/reserved)?

Hi James,

Can you please explain how you managed to SSH into ECS in order to install the runner? I thought you weren’t allowed to log in to ECS through a shell.

Hi @agunescu

So we actually never got around to deploying the runners in ECS Fargate… this was something of a “nice to have” for us since we already had a working solution with plain ol’ EC2 instances.

My understanding of Docker is that you _can_ set it up to have an sshd running within the container, but this is somewhat of an anti-pattern. We don’t deploy in this config, and while there have been times when debugging an issue by getting a shell into a container would have been useful for us, it’s been far better to deploy on Fargate since we don’t need to worry about managing & hardening live site VMs.

Hope this was at least a little helpful and good luck!

-James

Is it possible to configure multiple self-hosted runners on the same machine? Any hints?