Is it possible to run a job that takes more than 6 hours on a self-hosted runner?

In my CI/CD setup, one step retrains a deep learning model. We set up our own GPU box as a self-hosted runner to run this step. However, the training takes about one day, which is much longer than the 6-hour cap GitHub Actions currently allows. Is there any way to work around this limitation?

Yes, you can use a self-hosted runner to run a job that takes more than 6 hours.

  • Self-hosted runners give you the opportunity to persist whatever you like for your jobs and not be subject to the six-hour time-out in hosted virtual environments.

Please refer to this official blog:

This is interesting. My test shows the job is killed after 6 hours even when running on a self-hosted runner.

Here is a screenshot of the run summary. As it shows, the job was killed after 6 hours.

Just in case one might suspect the jobs had not run on the self-hosted runner, here is a screenshot of the shell where I ran the self-hosted runner agent. As can be seen from the timestamps, the jobs (the 3rd and 4th ones) were killed after 6 hours.

`timeout-minutes` sets the maximum number of minutes to let a job run before GitHub automatically cancels it. The default is 360 minutes.

Please try specifying a larger value for `timeout-minutes` under your job.
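For example, a workflow along these lines should raise the cap for a long training job (the job name, runner label, and `train.py` script here are just placeholders for illustration):

```yaml
name: retrain-model

on: [push]

jobs:
  train:
    runs-on: self-hosted
    # Raise the per-job timeout above the 360-minute default.
    # 1500 minutes = 25 hours, enough for a ~1-day training run.
    timeout-minutes: 1500
    steps:
      - uses: actions/checkout@v4
      - name: Retrain model
        run: python train.py   # placeholder for your training command
```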


I was very confused. It didn’t work for me.

Even with both `jobs.<job_id>.steps.timeout-minutes` and `jobs.<job_id>.timeout-minutes` set to 7200, the jobs were still canceled by GitHub automatically.

My YAML file: