How to raise ulimits on the GitHub runner

I am using a library that makes use of locked memory (memguard, https://github.com/awnumar/memguard — a secure software enclave for storage of sensitive information in memory) and would like to raise the limits set by the default Docker environment. Is that possible?

e.g. running ulimit -l unlimited fails with:

 ulimit: max locked memory: cannot modify limit: Operation not permitted

on the runner:

Run ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 27785
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 16384
cpu time               (seconds, -t) unlimited
max user processes              (-u) 27785
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

It would be ideal if the limit could be raised to 65536 KB (ulimit -l 65536) or greater.


I'm also running into this. io_uring is getting more popular and I need to test it, but it requires a higher max locked memory limit.

It sounds like you’re looking to change the ulimit settings for a job container? It’s not something I’ve done myself, but from the docs jobs.<job_id>.container.options should let you do it. From inside the container it shouldn’t be possible due to limited capabilities and syscalls.
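A minimal sketch of what that could look like, assuming a job container (the image name and the limit value here are placeholders, and Docker's `--ulimit memlock` values are in bytes, while `ulimit -l` reports KB):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    # Running the job's steps in a container lets you pass extra
    # "docker run" flags via the options key.
    container:
      image: ubuntu:22.04   # placeholder image
      # 67108864 bytes = 64 MiB, so ulimit -l should report 65536
      options: "--ulimit memlock=67108864:67108864"
    steps:
      - run: ulimit -l
```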

Thanks for the hints. From what I gather, the options have to be nested under the container key, something like this:

    name: Linux Rust Stable
    runs-on: ubuntu-latest
    container:
      options: "--ulimit memlock=512:512"

However, I wasn't running the job in a container, so that didn't apply. Any idea how to raise the limit when not using Docker?

In that case you should be able to use sudo to acquire the necessary rights, see Administrative privileges of GitHub-hosted runners. Note that the ulimit shell command is a built-in, not a binary, so you’ll have to run a shell with sudo and run your commands in that shell. Though in most cases running as root will remove all limits anyway.
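Concretely, a sketch of that pattern (the binary name is a placeholder for whatever you need to run under the higher limit):

```shell
# ulimit is a shell built-in, so the limit change and the workload must
# happen inside the same shell process. With passwordless sudo on the
# runner, that looks like:
#
#   sudo bash -c 'ulimit -l 65536 && exec ./your-test-binary'
#
# The scoping mechanism is the same without sudo. Here we lower the
# soft limit instead (which any user may do): the child shell sees the
# new value, while the parent shell is untouched.
bash -c 'ulimit -S -l 32; ulimit -l'   # prints 32
ulimit -l                              # parent shell: unchanged
```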


I tried this with:

sudo bash -c "ulimit -l 512"

and the command succeeds, but AFAICT this requires a logout/login cycle to take effect. In any case the limit isn’t raised.

The setting only affects the shell it runs in and processes started from there; that's why I wrote above that you'd have to start your processes from the same shell.

What happens here is:

  1. sudo starts bash as root.
  2. The Bash instance runs the command you gave it: ulimit -l 512
  3. New limit for locked memory applies to the current Bash instance.
  4. There are no more commands, so the Bash instance terminates.
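The steps above can be demonstrated without sudo by lowering a soft limit, which any user may do:

```shell
# Steps 2-3: the new limit exists only inside the child bash instance.
bash -c 'ulimit -S -l 32; echo "inside: $(ulimit -l)"'   # inside: 32
# Step 4: the child has exited; the calling shell's limit is unchanged.
echo "outside: $(ulimit -l)"
```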