Docker push to GitHub Container Registry HTTP 504 Bad Gateway Timeout

Hi,
we are starting to build Docker images with self-hosted GitHub Actions runners on a schedule every night and push them to the GitHub Container Registry. This has been successful a few times now, but sometimes the workflow fails to push the images (~9 GB in size) to the registry.
I could not find anything worrying in the Docker daemon log besides the “HTTP status: 504 Gateway Time-out”. The Actions log shows

697949baa658: Layer already exists
935c56d8b3f9: Layer already exists
024fd6bb4e2e: Retrying in 5 seconds
024fd6bb4e2e: Retrying in 4 seconds
...
received unexpected HTTP status: 504 Gateway Time-out
Error: The process '/usr/bin/docker' failed with exit code 1

and retries for a couple of minutes.

If I try to push manually from the same server, another HTTP error shows up.

04f503a32151: Pushing [==================================================>]  10.74GB/10.74GB
1b3ee35aacca: Layer already exists 
error parsing HTTP 413 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>413 Request Entity Too Large</title></head>\r\n<body>\r\n<center><h1>413 Request Entity Too Large</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"

Is there a timeout option that can be adjusted?

Hi, you may have to change the timeout-minutes to something greater than 360 minutes.

While self-hosted runners are not subject to our concurrency or usage limits, my understanding is that timeout-minutes still needs to be configured manually if users want a job to run beyond the default 360 minutes.
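As a sketch, the job-level timeout can be raised in the workflow file like this (the job name, image name, and chosen timeout value are placeholders, not taken from the original workflow):

```yaml
jobs:
  build-and-push:
    runs-on: self-hosted
    # Default is 360 minutes; raise it if the nightly push needs longer
    timeout-minutes: 720
    steps:
      - uses: actions/checkout@v4
      - name: Build and push image
        run: |
          docker build -t ghcr.io/OWNER/IMAGE:nightly .
          docker push ghcr.io/OWNER/IMAGE:nightly
```

Note that timeout-minutes can also be set on an individual step if only the push step is slow.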

In addition, while I’m not too familiar with Docker, my colleague mentioned that setting --max-concurrent-uploads might also help.
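If that helps in your case, the setting lives in the Docker daemon configuration; a minimal sketch for /etc/docker/daemon.json (the value 2 is just an example, the default is 5):

```json
{
  "max-concurrent-uploads": 2
}
```

The daemon has to be restarted afterwards (e.g. `sudo systemctl restart docker`). Fewer parallel uploads means each layer gets more bandwidth, which can help avoid per-request timeouts at the registry front-end.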

The HTTP 413 error indicates that the request payload is too large. Users may experience degraded service when publishing or installing Docker images larger than 10 GB, and individual layers are capped at 5 GB each, so the ~10.74 GB layer shown in your push output would exceed that cap.
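To find out which layers are the problem, you could list per-layer sizes and look for anything near or above the cap; a sketch, assuming the image name is a placeholder:

```shell
# Show the size and originating Dockerfile instruction of each layer
docker history --format "{{.Size}}\t{{.CreatedBy}}" ghcr.io/OWNER/IMAGE:nightly
```

Large layers usually come from a single COPY or RUN instruction; splitting that instruction into several smaller ones produces several smaller layers, which may get the push under the per-layer limit.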