- Having many (most?) of the actions available in the marketplace not work on Windows or Mac environments and having no way to filter on that is insane
- The caching storage limits are unreasonably low for multi-platform C++ projects with any significant number of dependencies
- The self-hosting feature is nearly worthless if it doesn’t also include the ability to run on images with the same software setup as Github Actions VMs
My company is looking at moving away from Jenkins CI to another system. I’ve been promoting Github Actions as a replacement for a variety of reasons:
- Well documented
- Good ecosystem with a marketplace for actions
- Virtual environments well stocked with build tools
- Support for caching*
- Support for self-hosting*
I first noticed this when I started using Actions on a personal project and discovered that there were no “starter” actions for CMake-based projects. In fact, the only C/C++ starter action was for autoconf-based projects, which is only really viable on Linux and is fairly dated in the modern C++ ecosystem. I didn’t worry too much about it. In fact, I decided to pitch in by contributing a CMake starter action.
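For the curious, a starter workflow along these lines doesn’t need to be long. This is a sketch of the sort of thing I mean, not the exact file I contributed; the job name, build directory, and matrix entries are illustrative:

```yaml
name: CMake
on: [push, pull_request]

jobs:
  build:
    # Build on all three hosted environments from one matrix
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    steps:
      - uses: actions/checkout@v1
      # The hosted VMs ship with a recent CMake, so -S/-B works everywhere
      - name: Configure
        run: cmake -S . -B build -DCMAKE_BUILD_TYPE=Release
      - name: Build
        run: cmake --build build --config Release
```

The matrix is the whole point: one short file exercises Windows, Mac, and Linux, which is exactly the multi-platform story the marketplace otherwise undermines.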
This is downright crazy. Imagine if you downloaded Steam for Mac and, when you browsed the store, you couldn’t filter for games that run on Mac. Instead you’d have to click through to each game’s store page, dig into its details, and only then find out whether you could actually play it. That’s basically how the Actions marketplace works right now. You can browse actions, but you can’t filter out Docker-based actions, and you can’t even tell whether an action is Docker-based unless you go to its marketplace page and then click through to the action’s repository to see if it has a Dockerfile in it.
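The only workaround I’ve found is scripting the check myself. An action’s metadata file declares its runner under `runs.using` (`docker` for Docker actions, `node12` and the like for JavaScript ones), so a short script can do what the marketplace UI won’t. This is a sketch: the function names are mine, and it assumes the repository keeps its metadata in `action.yml` on a `master` branch rather than `action.yaml` or another default branch.

```python
import re
import urllib.request

# Raw URL for an action's metadata file (assumes action.yml on master;
# some actions use action.yaml or a different default branch)
RAW_URL = "https://raw.githubusercontent.com/{owner}/{repo}/master/action.yml"


def runner_type(action_yml_text):
    """Return the `runs.using` value from action.yml text, or None.

    Docker-based actions declare `using: docker`; JavaScript actions
    declare something like `using: node12`.
    """
    m = re.search(r"^\s*using:\s*['\"]?([\w-]+)", action_yml_text, re.MULTILINE)
    return m.group(1) if m else None


def is_docker_action(owner, repo):
    """Fetch an action's metadata and report whether it runs in Docker."""
    url = RAW_URL.format(owner=owner, repo=repo)
    with urllib.request.urlopen(url) as resp:
        return runner_type(resp.read().decode("utf-8")) == "docker"
```

On a Windows or Mac runner, anything this reports as Docker-based is an action you can’t use, which is precisely the filter the marketplace should provide.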
Two of the other pain points I’ve experienced with Github Actions have been the underpowered machines and the lack of caching functionality. My build process has a large number of dependencies, most of which are managed through vcpkg. In our normal build process the vcpkg folder is cached from run to run, so the aggregate time cost of building our dependencies is effectively zero. Without caching, building our dependencies adds around an hour to the build time. I didn’t worry too much about this because both caching and self-hosting were promised as upcoming features. Now they’ve arrived, and both have turned out to be nearly worthless for my use case.
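To be clear, the wiring itself is easy. A cache step for vcpkg would look roughly like this (the cached path, the `deps.txt` dependency list, and the key scheme are my own illustrative choices):

```yaml
      - name: Cache vcpkg artifacts
        uses: actions/cache@v1
        with:
          # The installed tree is what actually saves rebuild time
          path: vcpkg/installed
          # Invalidate the cache whenever the dependency list changes
          key: vcpkg-${{ runner.os }}-${{ hashFiles('deps.txt') }}
```

The problem isn’t the mechanism; it’s that the resulting artifacts don’t come close to fitting under the storage limit.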
The caching limits are unreasonably low… 2 GB of storage per repository. We build Windows, Mac, Linux and Android clients currently, all from the same repository. The vcpkg build artifacts for Windows alone would blow past that 2 GB limit easily, and that doesn’t even count our non-vcpkg dependencies like Qt.
Because the build VMs are only 2-core machines, our builds take much longer than they currently do on our Jenkins systems. I had hoped self-hosting would let us re-purpose our existing machines so that we could take advantage of Github Actions without losing performance. I had assumed the self-hosting solution would involve some mechanism to install an image locally with all the same software as the existing GA VMs. Apparently that’s not the case. Instead, I just run an agent on a machine I manage myself. Much of the appeal of switching to Github Actions was avoiding the hassle of maintaining and updating software on the build machines, but with the current self-hosting solution I still end up having to do that. I also lose the build-to-build isolation that workflows running on the hosted VMs get.
I would dearly love to be told I’m wrong about any of this and that there’s an obvious solution I’m not seeing to, you know… any of my problems. Right now it feels like Github Actions is great if I’m building some Node.js based application specifically to run on some backend server somewhere. If I’m actually writing complex C++ client code that’s intended to run on a variety of platforms, it’s significantly less useful.