Unable to push to GitHub repository

So I’m still very new to GitHub, and this is our first project on here. For the last two days we’ve been trying to push a rather large commit (just over 2 GB) and we repeatedly get this error:
Enumerating objects: 434, done.
Counting objects: 100% (434/434), done.
Delta compression using up to 12 threads
Compressing objects: 100% (428/428), done.
error: RPC failed; curl 55 Send failure: Connection was aborted
fatal: the remote end hung up unexpectedly
Writing objects: 100% (429/429), 2.02 GiB | 731.00 KiB/s, done.
Total 429 (delta 6), reused 0 (delta 0), pack-reused 0
fatal: the remote end hung up unexpectedly
Everything up-to-date
I’ve been trying to find a solution, but I’m not certain what’s causing the problem. I assume it’s the file size, but we are using LFS; I’m worried I may not have set it up properly, or that I’m misunderstanding how to use it. Any guidance is greatly appreciated!

Yes, 2GB is far above the limits: Conditions for large files - GitHub Docs

The log doesn’t say why the Git objects are still so large; one possible issue is that files that have since been moved to LFS are still part of the history.
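As a rough sketch of how you could check this and, if needed, move the old objects over (`*.bin` is just a placeholder for whatever large file types you actually track, and note that `git lfs migrate` rewrites history, so the whole team needs to be on board before you run it):

```bash
# Check whether the big files are actually handled by LFS
git lfs ls-files          # should list the large files if LFS is tracking them
git count-objects -vH     # shows how big the plain Git pack data still is

# If the large files were committed *before* LFS tracking was set up, they are
# still in the history as ordinary Git objects. Rewriting the history moves them:
git lfs migrate import --include="*.bin" --everything

# History was rewritten, so the next push must be forced
# (and collaborators will need to re-clone or hard-reset):
git push --force-with-lease
```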


See also this post regarding the limits of repository sizes, monthly data transfers, and maximum size allowed for a single file:

Ah, I see. Sounds like I need to watch another LFS tutorial; I’m still a little confused about how to get it working and how to tell if it’s working. I appreciate the info.

Ahhh, I see. Thanks for the info; sounds like it might be worth getting a paid account.

I’m not sure that would make a huge difference. Sure, you’d get more monthly bandwidth and disk space, but not all that much more if you have a really huge repository at hand.

Version control, in general, is not intended for tracking large binary files. Binary files should be added to a repository only when strictly needed, because they tend to slow down all Git operations.

If possible, you should consider excluding the very big binary files. E.g. you could .gitignore them and add a script that downloads them (e.g. via cURL) from another source, such as a cloud service that works more like an online hard disk. Those files then won’t weigh on the repository size or slow down Git, but of course their changes won’t be tracked by Git either, so the same script that downloads them would have to handle their updates somehow (e.g. by using a checksum or some other fingerprinting method to quickly verify whether the local file matches the one in the cloud).
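Just to give an idea of what I mean, here’s a minimal sketch; the URL, the file path, and the checksum are all placeholders, not a real setup:

```bash
#!/usr/bin/env bash
# fetch-big-file.sh -- hypothetical sketch: downloads one .gitignore'd binary
# only when it is missing or its checksum no longer matches the one published
# alongside it in the cloud storage.
set -euo pipefail

URL="https://example.com/storage/BigLevel.pak"   # placeholder cloud location
FILE="Content/Paks/BigLevel.pak"                 # placeholder local path (also listed in .gitignore)
EXPECTED_SHA256="replace-with-the-published-checksum"

# Re-download only if the file is missing or its signature differs
if [ ! -f "$FILE" ] || [ "$(sha256sum "$FILE" | cut -d' ' -f1)" != "$EXPECTED_SHA256" ]; then
    mkdir -p "$(dirname "$FILE")"
    curl -L -o "$FILE" "$URL"
fi
```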

Of course, it really depends on what type of project it is, and what role those big binary files play.

Bear in mind that services like GDrive also allow syncing large files between a local folder and the cloud (both ways). Since these are dedicated services, they handle this better than version control tools, which were developed with plain-text source files in mind. Chances are you could mix both approaches inside a repository with a smart enough setup.

Very, very interesting, thank you. Sounds like I need to do some more research. So far we were using GitHub to work remotely on a game project and easily share the same updated files, but given the size of Unreal projects it sounds like I’ll probably need to find another solution.

The topic of repositories for video games (especially Unreal and Unity) often comes up in this community, usually in relation to problems with big assets and storage limits. You might want to search the community forum for similar posts, and possibly engage with the original posters to ask how they solved the problem.

I imagine that keeping assets updated in a video game dev repository is an important issue. But I still believe you can do it without tracking those files via Git directly: instead, track some text file with their info (e.g. a CSV or JSON file) and then use that info to ensure the assets are automatically fetched from some other server (e.g. cloud storage) whenever they are updated.
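For example, something along these lines; the `assets.csv` name and its `path,url,sha256` layout are just assumptions for the sake of the sketch:

```bash
#!/usr/bin/env bash
# fetch-assets.sh -- hypothetical sketch: reads a Git-tracked assets.csv with
# lines of the form "path,url,sha256" and re-downloads any asset that is
# missing or whose checksum no longer matches the manifest.
set -euo pipefail

while IFS=, read -r path url sha256; do
    [ "$path" = "path" ] && continue   # skip an optional header row
    if [ ! -f "$path" ] || [ "$(sha256sum "$path" | cut -d' ' -f1)" != "$sha256" ]; then
        mkdir -p "$(dirname "$path")"
        curl -L -o "$path" "$url"
    fi
done < assets.csv
```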

Automating this won’t be too hard, and you can even use Git hooks to ensure that the scripts are launched before or after certain Git operations (e.g. after pulling), which ensures end users always have updated assets on their local machines, even if they forget to run the scripts manually.
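E.g. a post-merge hook could do it; again just a sketch, assuming the fetch script from above sits in the repository root:

```bash
#!/usr/bin/env bash
# .git/hooks/post-merge -- runs automatically after a successful `git pull`
# (merge). Remember to make the hook executable (chmod +x).
# Assumes the hypothetical fetch-assets.sh from the previous sketch exists
# in the top-level directory of the working tree.
./fetch-assets.sh
```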


This sounds basically like implementing your own alternative to Git LFS. :grinning_face_with_smiling_eyes: Before doing that I’d look at existing ones, like Git Annex (there are more). But in the end it depends on the exact needs and on what kinds of storage are available.


I don’t know Git Annex, but it seems that on Windows it’s still in beta, and that it mainly focuses on local storage, although it also supports servers. But I’ll look into it, so thanks for the link (already bookmarked).

From what I understand, video game projects have specific needs. Personally, I would use a cloud storage system like Google Drive (or whatever it’s called now), OneDrive, etc., which are designed to sync large files and are not too hard to integrate into a repository. But, as you said, the solution might depend on the specific needs of each project and the personal preferences of its maintainers.

But this is indeed a topic that deserves more coverage, tutorials, articles, and visibility, because these types of projects can get really big really quickly, and probably need more space than most version control platforms are willing to provide.


Thank you, and @airtower-luna, for all the great info :slight_smile: This should give me a solid starting point for finding a long-term solution. Luckily we were able to cut it down under the GB limit and got it through for the short term. Thank you guys again!
