Random "unknown blob" error when pushing to the Docker GitHub Package repository

I want to preface this issue by saying that we as an organization have over 5 billion pulls from Docker Hub. I am not outright refusing to try things, but I am near 100% confident this is an infrastructure issue specific to GitHub.

If anyone else is having this issue please bump this thread, even if it only happened once and a subsequent retry worked.

We recently implemented pushes to GitLab, Quay.io, and GitHub alongside Docker Hub.

When we push our images, we do so in a loop with near-identical logic for all of the endpoints.
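For context, the loop looks roughly like this. This is a sketch, not our actual build script: the image name and the list of registry hostnames are illustrative, and the real push commands are shown commented out.

```shell
# Rough sketch of the multi-registry push loop; the image name and exact
# registry hostnames are placeholders, not our real configuration.
IMAGE="myorg/app:1.0"
REGISTRIES="docker.io registry.gitlab.com quay.io docker.pkg.github.com"

for reg in $REGISTRIES; do
  target="$reg/$IMAGE"
  echo "pushing $target"
  # docker tag "$IMAGE" "$target"
  # docker push "$target"
done
```

The point is that every registry goes through the same code path, so a failure rate that is specific to one endpoint points at that endpoint, not at our tooling.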

GitHub has had a relatively high failure rate since we implemented it a few days ago (5 of our last 50 builds). Here are some example failures:




The problem is random, and the error is always the same: unknown blob.

This exact random problem plagued Docker Hub for a brief time during an upgrade:


Officially, they never published a post-mortem, but unofficially it was attributed to load balancers being a little too aggressive and the eventually consistent storage model breaking down: frontend machines were not yet aware of a blob that had just been uploaded.

Please let me know if you need any more information.


We have also been experiencing this issue with the GitHub package repository for Docker, and arrived at the exact same conclusion. The more steps a Dockerfile contains, the more layers it has, and the more likely it is to run into a problematic outcome of this race condition.

Today, a 15-layer image is failing about 25% of the time.
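The layer-count intuition checks out arithmetically: if each layer upload fails independently with some probability p, then an n-layer push fails with probability 1 − (1 − p)^n. A quick back-of-the-envelope check, using the ~25% failure rate on 15 layers reported above (the independence assumption is mine):

```shell
# Solve 1 - (1 - p)^15 = 0.25 for the implied per-layer failure rate p.
awk 'BEGIN {
  n = 15; fail = 0.25
  p = 1 - exp(log(1 - fail) / n)   # invert the compound failure formula
  printf "implied per-layer failure rate: %.1f%%\n", 100 * p
}'
# prints: implied per-layer failure rate: 1.9%
```

So a per-layer failure rate of only about 2% is enough to sink a quarter of all 15-layer pushes, which is why images with more layers hit this race condition so much more often.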

Same here, with the same ratio of about 25%.

I’m seeing it more like 75% of the time… it just failed on two new workflow runs in a row.

Quick note that this has been cropping up a lot in the last few days. Tonight I’ve had several workflow-run failures due to the dreaded unknown blob.


So many unknown blob errors on docker push that I’m not sure I can release Docker images anymore.

Another error becoming common for me is getaddrinfo EAI_AGAIN registry.npmjs.org during the build stage, so maybe it has something to do with the network?

And again: we’re getting unknown blob on something like 80-85% of docker pushes from GitHub Actions.


And again, we’ve been getting unknown blob on all builds this morning…

This happens a lot for us as well. Is there anything that can be done to mitigate it?

The problem began to appear two weeks ago (very few builds would fail), somewhat more often since then (maybe 3 in 10), but today almost every build ends up failing.

The issue has grown too big, too fast today. Is the problem on GitHub’s side?

Worth noting: a manual push seems less prone to fail.

We have been seeing the same issue for the last few days.

Is GitHub addressing this community-wide issue?


I have begun seeing this issue in the last few days as well. However, the failure is happening on 80% of my builds.

We are having the same issue since yesterday. Completely random, happens 30-50% of the time. This is very frustrating.

Yep! Same here. It’s happened like 10 builds in a row.

It can’t get through a single push right now.

This has basically incapacitated development on a project I am working on, as we were relying on Github Packages as part of our CI.

My workaround right now is to log in to the self-hosted runner that is doing the push and spam docker push until all of the layers get through.

This does work if you are under a deadline and can’t switch package repositories right now. If you’re running your actions on a GitHub Actions VM, you may be out of luck.
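For anyone else stuck in the same spot, here is a sketch of that spam-until-it-sticks loop. The image name and retry cap are placeholders; the reason this converges is that docker push only re-uploads layers the registry doesn’t already have, so each retry makes progress.

```shell
# Retry a flaky command until it succeeds or we run out of attempts.
# Intended for `docker push`, which only re-uploads missing layers,
# so repeated pushes converge. MAX_TRIES is an arbitrary cap.
retry() {
  max_tries="${MAX_TRIES:-10}"
  attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_tries" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    echo "attempt $attempt failed, retrying..." >&2
    sleep "$attempt"   # crude linear backoff between attempts
    attempt=$((attempt + 1))
  done
}

# Example usage (placeholder image name):
# retry docker push docker.pkg.github.com/myorg/repo/app:1.0
```

The linear backoff is a guess at being gentle on the registry; for the eventual-consistency theory above, even a flat one-second sleep between retries seems to be enough in practice.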

I had been putting off setting up a local package repository, but now it is unavoidable. We absolutely must have stable package storage in our CI.

FWIW, Docker Hub itself apparently experienced issues with unknown blob in October.

They have yet to offer a postmortem.


docker push to the GitHub Docker package registries has become 100% unusable over the last hour.


Do we have any word on this? I’ve got customers that are relying on a push I’ve made this morning that’s yet to see production.

It would be nice to get some acknowledgment of the issue. It looks like githubstatus.com does not test push operations against the service.

It looks like there was a scheduled maintenance four days ago, but otherwise no incident report despite this ongoing issue.

Still having the problem today.

This is crippling all of our deployment processes, and I still haven’t found any tracker or issue with a reply from GitHub staff.

I’m having the same issue here :frowning:
Did anyone find a workaround?