Building and packaging multiple libraries in a single repo?

I’m new to GitHub Actions and would like to get some insight on this.

I have a repository made specifically to host the dependencies of a project I’m working on.
These are normally supposed to be installed by the user through their package manager,
but I provide the repo for ease of access, so the user doesn’t have to go hunting for the packages online.
It also makes building on Windows easier.

What I’m trying to do now is set up GitHub Actions
to automate updating the submodules, building the libraries, and deploying
pre-built binaries to GitHub releases (again, a convenience thing!).

So I’m wondering: how should I go about this?

Should I have a separate workflow file/config per library? Like:

flac.yml
libsndfile.yml
...

Each of these would update its submodule, build the library, and deploy it.

Or a separate workflow for each step (updating, building, and releasing), each of which does its thing for all of the libraries?

Or stuff everything in a single workflow file, with separate jobs for updating, building, and releasing each library?
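
For reference, this is roughly what I’m picturing for the single-workflow option. It’s just a sketch; the library names, runner, and CMake commands are placeholders for whatever the real builds would be:

```yaml
# .github/workflows/deps.yml -- rough sketch, names and build commands are placeholders
name: Build dependencies

on:
  workflow_dispatch:

jobs:
  flac:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive
      - name: Build FLAC
        run: cmake -S flac -B build/flac && cmake --build build/flac --config Release

  libsndfile:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive
      - name: Build libsndfile
        run: cmake -S libsndfile -B build/libsndfile && cmake --build build/libsndfile --config Release
```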

A couple of things:

  • It would be preferable that each library gets processed in a separate job, because:

    • They can be processed in parallel
    • A single failed library won’t blow the whole thing up.
    • And a failure in one of the libraries would be easier to spot!
  • I want all of the binaries to go into a single release, i.e. one release in the repo that has multiple files, one for each library. That way the user can, for example, download only the libraries they are missing.

  • The ability to automate the whole thing would be really helpful, i.e. the ability to track a branch on upstream (a release branch, for example) and trigger the workflow when it is pushed to/updated. Is there such a thing?

Any ideas/pointers on how I should proceed would be really appreciated.

It’s really up to you. All of your considerations and options make sense.

Just make sure that, in doing so, end users don’t end up with conflicting packages on their OS (e.g. a package they installed themselves being overridden by the version installed by your scripts); this is something that potentially differs on each OS.

Keeping the automation jobs separate is better, as you said, because it allows parallel execution, and also because you can control the exit status of each job separately in case things go wrong.
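
For instance, a build matrix with `fail-fast` disabled gives you one job per library, run in parallel, where a failure in one library doesn’t cancel the others, and each job can upload its file to the same pre-existing release. Just a sketch; the library names, build step, archive paths, and the `prebuilt-latest` release tag are all illustrative:

```yaml
# Sketch: one parallel job per library via a matrix; a single failure
# does not cancel the sibling jobs because fail-fast is disabled.
jobs:
  build:
    strategy:
      fail-fast: false
      matrix:
        lib: [flac, libsndfile, ogg, vorbis]
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive
      - name: Build ${{ matrix.lib }}
        run: cmake -S ${{ matrix.lib }} -B build && cmake --build build --config Release
      - name: Upload ${{ matrix.lib }} binaries to one shared release
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        # Assumes a release tagged "prebuilt-latest" already exists in the repo;
        # --clobber replaces a previously uploaded asset with the same name.
        run: gh release upload prebuilt-latest build/${{ matrix.lib }}.zip --clobber
```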

Sometimes it’s good to configure things so that any failed job stops the whole CI run, to avoid wasting resources, since the whole task will have to be run again anyway (unless these are smart jobs that get skipped when the CI cache is already up to date).
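
In GitHub Actions terms, for matrix jobs that behaviour is just the opposite setting of the same switch (and it’s actually the default), shown here with the same hypothetical matrix as above:

```yaml
# Opposite policy: cancel the remaining in-progress matrix jobs as soon as
# one of them fails, so a broken run doesn't keep burning runner minutes.
strategy:
  fail-fast: true   # this is also the default for matrix jobs
  matrix:
    lib: [flac, libsndfile, ogg, vorbis]
```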

You really have freedom of choice in how to go about this, so you should probably stick with what your instinct suggests and with whatever approach you are most comfortable with. Guidelines and best practices are precious, but they are not set in stone either — your project, your rules; your users, their needs, that’s what really matters.

E.g. the general rule that one should not include precompiled binaries in Git repos is indeed sound advice. But sometimes building certain libraries is just too much of a pain for end users, and it’s justified to include a DLL in the repository. Also, today there are PDF files and JPG images that are much bigger than a DLL (and they are binary files too), so the rule is really more a piece of wisdom that, in most cases, end users should be building those binaries themselves.
