How to use docker-compose with GitHub Actions?

Hi!

I am trying to use GitHub Actions to automate my test pipeline, but I cannot seem to get the containers to run in order to test them. I am running a Django web app in one container and Postgres in another, tying the two together with docker-compose.

My docker-compose.yml file is

version: '3'

services:
  web:
    container_name: backend
    build: .
    volumes:
      - ~/app_name:/code
    ports:
      - "8000:8000"
    environment:
      - PORT=${PORT}
      - DJANGO_SETTINGS_MODULE=${DJANGO_SETTINGS_MODULE}
    depends_on:
      - postgres

  postgres:
    container_name: postgres
    image: postgres
    restart: always
    environment:
      - POSTGRES_USER=${PG_USER}
      - POSTGRES_PASSWORD=${PG_PASS}
      - POSTGRES_DB=${PG_DB}
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:

and my GitHub Actions file is:

name: Docker Image CI

on: [push]

jobs:

  build:

    runs-on: ubuntu-latest
    env:
      DATABASE_URL: ${{ secrets.DATABASE_URL }}
      DJANGO_SETTINGS_MODULE: app_name.settings.dev
      SECRET_KEY: ${{ secrets.SECRET_KEY }}
      PORT: 8000
      PG_DB: ${{ secrets.PG_DB }}
      PG_PASS: ${{ secrets.PG_PASS }}
      PG_USER: ${{ secrets.USER }}

    steps:
    - uses: actions/checkout@v1
    - name: Build the docker-compose stack
      run: docker-compose up -d
    - name: Sleep
      uses: jakejarvis/wait-action@master
      with:
        time: '60s'
    - name: Check running containers
      run: docker ps
    - name: Run test suite
      run: docker exec backend pytest --skip-auth

Note that I added the 60s wait because I thought the test suite was running before the container had a chance to start, but it didn’t seem to help.
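(As an aside, a polling wait along these lines would probably be more reliable than a fixed sleep; this is only a sketch, assuming the stock postgres image, which ships pg_isready, and the container name from the compose file above.)

    # Poll the postgres container until it accepts connections, instead of sleeping a fixed time
    - name: Wait for Postgres
      run: |
        until docker exec postgres pg_isready -U "$PG_USER"; do
          sleep 2
        done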

The output of the “docker ps” call is below. Note that the web service is not running:

CONTAINER ID   IMAGE      COMMAND                  CREATED              STATUS              PORTS                    NAMES
d56c857f7316   postgres   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:5432->5432/tcp   postgres

and the output of the test step is:

Error response from daemon: Container c9e106fab289d59c8fd233544cc7a642412f315241981bf3e00edba0a6a186eb is not running
##[error]Process completed with exit code 1.

This docker-compose file works great locally, but something seems to be amiss when running on GitHub Actions. I really appreciate any advice you have!

1 Like

Glad to see you in the GitHub Community.

Please try running docker ps -a. It will list all containers, including exited ones. Based on your docker-compose.yml, it seems the backend container exited immediately after the image was built.

Using the jakejarvis/wait-action@master action will not keep the container running.

You can try adding the following command at the end of your Dockerfile:

CMD tail -f /dev/null
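For example, placed at the very end of a Dockerfile (a generic sketch to show placement only, not your actual Dockerfile):

FROM python:3.7
WORKDIR /code
COPY . /code
RUN pip install -r requirements.txt
# Keep the container alive even when no foreground process is running
CMD tail -f /dev/null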

You can refer to this blog.

http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/

4 Likes

Hi Yanjingzhu,

Thanks for your help! It turns out the issue was actually with the volume mount in my docker-compose file, which pointed at a host directory that doesn’t exist on the GitHub Actions VM.

To debug, I used the command:

docker logs backend

This showed that gunicorn was unable to find my Django app:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
    worker.init_process()
  File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 129, in init_process
    self.load_wsgi()
  File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
    return self.load_wsgiapp()
  File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/usr/local/lib/python3.7/site-packages/gunicorn/util.py", line 350, in import_app
    __import__(module)
ModuleNotFoundError: No module named 'app_name'

I think the volume I was mounting was actually overwriting all of my code, since the directory was empty on the Actions VM. I didn’t test this thoroughly, but I did get it to work by splitting my docker-compose file into three files: a base configuration, a local configuration, and a CI configuration.

Base config

# docker-compose.yml
version: '3'

services:
  web:
    container_name: backend
    build: .
    ports:
      - "${PORT}:8000"
    depends_on:
      - postgres

  postgres:
    container_name: postgres
    image: postgres
    restart: always
    environment:
      - POSTGRES_PASSWORD=${PG_PASS}
    ports:
      - 5432:5432

Local dev config:

# docker-compose.override.yml
version: '3'

services:
  web:
    env_file:
      - .env

    # This was the offending line
    volumes:
      - ~/app_name/:/code

  postgres:
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:

CI config

# docker-compose.ci.yml
version: '3'

services:
  web:
    environment:
      - DATABASE_URL
      - DJANGO_SETTINGS_MODULE
      - SECRET_KEY
      - PORT

I then updated my GitHub Actions YAML file:

name: Docker Image CI

on: [push]

jobs:

  build:

    runs-on: ubuntu-latest
    env:
      PG_PASS: ${{ secrets.PG_PASS }}
      DATABASE_URL: postgresql://postgres:${{ secrets.SECRET_PASSWORD }}@postgres/postgres
      DJANGO_SETTINGS_MODULE: app_name.settings.dev
      SECRET_KEY: ${{ secrets.SECRET_KEY }}
      PORT: 8000

    steps:
    - uses: actions/checkout@v1
    - name: Build the docker-compose stack
      run: docker-compose -f docker-compose.yml -f docker-compose.ci.yml up -d
    - name: Check running containers
      run: docker ps -a
    - name: Check logs
      run: docker logs backend
    - name: Run test suite
      run: docker exec backend pytest --skip-auth

The main change to the GitHub Actions file is switching the docker-compose command to use the docker-compose.ci.yml file.
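For anyone following along, this is roughly how the files combine (standard docker-compose behaviour, shown here only for illustration):

# Local dev: docker-compose.yml and docker-compose.override.yml are merged automatically
docker-compose up -d

# CI: pass the base and CI files explicitly, so the override file (and its bind mount) is never used
docker-compose -f docker-compose.yml -f docker-compose.ci.yml up -d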

Hope this is helpful to future users! It looks like docker-compose works as expected; I just forgot that my local volume would not be available on the server.
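To illustrate the masking effect (hypothetical image name and path, just a sketch):

# Bind-mounting an empty host directory over /code hides whatever the image copied there at build time
docker run --rm -v /tmp/empty_dir:/code some-django-image ls /code   # prints nothing
docker run --rm some-django-image ls /code                           # shows the files baked into the image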

16 Likes

Man, you saved my life!

The Docker volumes thing gave me headaches…

1 Like

Hi, I’ve looked at your issue and solution. I’m sure my case is very similar, but I can’t work out a solution. The error is almost identical, yet…

I’m using act with the full 18 GB nektos/act-environments-ubuntu:18.04 Docker image to debug.

Here is docker-compose.yml:

version: '3'
services:
   app:
      build:
         context: .
      image: my_linux_python
      command: sh -c "gunicorn -b 0.0.0.0:5000 --reload --workers=1 --threads=15 application:application"
      ports:
      - 5000:5000
      working_dir: /app
      volumes:
      - ./:/app
      env_file:
      - public.env
      #- private.env
      depends_on:
      - db
   db:
      image: postgres:12-alpine
      volumes:
      - db:/var/lib/postgresql/data
      - ./:/app
      - ./db/import_schema.sql:/docker-entrypoint-initdb.d/1_import_schema.sql
      - ./db/import_demo_data.sql:/docker-entrypoint-initdb.d/2_import_demo_data.sql
      env_file:
      - public.env
      working_dir: /app
volumes:
   db:

Just in case it helps, here is my Dockerfile, used to create the my_linux_python image:

FROM debian:buster-slim

# set work directory
WORKDIR /app

# set environment variables, to avoid pyc files and flushing buffer
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

COPY ./requirements.txt /app/requirements.txt

RUN apt-get update \
    && apt-get install --no-install-recommends -y python3-pip=18.1-5 python3-pysam=0.15.2+ds-2 \
    && pip3 --no-cache-dir install --upgrade pip \
    && pip --no-cache-dir install setuptools==49.1.0 gunicorn==20.0.4 \
    && pip --no-cache-dir install -r requirements.txt \
    && apt-get autoremove -y && apt-get autoclean -y && apt-get clean -y \
    && rm -rf /var/lib/apt/lists/*

I’m trying to run docker-compose up inside act, but it fails with the same error you saw.

[2020-08-10 13:21:25 +0000] [6] [INFO] Starting gunicorn 20.0.4
[2020-08-10 13:21:25 +0000] [6] [INFO] Listening at: http://0.0.0.0:5000 (6)
[2020-08-10 13:21:25 +0000] [6] [INFO] Using worker: threads
[2020-08-10 13:21:25 +0000] [9] [INFO] Booting worker with pid: 9
[2020-08-10 13:21:25 +0000] [9] [ERROR] Exception in worker process
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/gunicorn/arbiter.py", line 583, in spawn_worker
    worker.init_process()
  File "/usr/local/lib/python3.7/dist-packages/gunicorn/workers/gthread.py", line 92, in init_process
    super().init_process()
  File "/usr/local/lib/python3.7/dist-packages/gunicorn/workers/base.py", line 119, in init_process
    self.load_wsgi()
  File "/usr/local/lib/python3.7/dist-packages/gunicorn/workers/base.py", line 144, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/usr/local/lib/python3.7/dist-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/usr/local/lib/python3.7/dist-packages/gunicorn/app/wsgiapp.py", line 49, in load
    return self.load_wsgiapp()
  File "/usr/local/lib/python3.7/dist-packages/gunicorn/app/wsgiapp.py", line 39, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/usr/local/lib/python3.7/dist-packages/gunicorn/util.py", line 358, in import_app
    mod = importlib.import_module(module)
  File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'application'
[2020-08-10 13:21:25 +0000] [9] [INFO] Worker exiting (pid: 9)
[2020-08-10 13:21:26 +0000] [6] [INFO] Shutting down: Master
[2020-08-10 13:21:26 +0000] [6] [INFO] Reason: Worker failed to boot.

My current python-app.yml, used for debugging, is:

name: Python application
on:
   push:
      branches:
      - prod-live
   pull_request:
      branches:
      - prod-live
jobs:
   build:
      runs-on: ubuntu-latest
      steps:
      -  uses: actions/checkout@v2
      -  name: Run docker-compose stack
         run: docker-compose -f docker-compose.yml up -d
      -  name: Check folder
         run: pwd
      -  name: Check files
         run: ls -ltr
# if docker container app were running then... 
      -  name: Lint with flake8
         run: docker-compose exec app flake8
      -  name: Check format with black
         run: docker-compose exec app black --diff --check .
      -  name: Test with PyTest
         run: docker-compose exec app python3 -m pytest 

Then, when I run act:

[Python application/build] 🚀  Start image=nektos/act-environments-ubuntu:18.04
[Python application/build]   🐳  docker run image=nektos/act-environments-ubuntu:18.04 entrypoint=["/usr/bin/tail" "-f" "/dev/null"] cmd=[]
[Python application/build]   🐳  docker cp src=/Users/alan/Programmes/phenopolis_api/. dst=/github/workspace
[Python application/build] ⭐  Run actions/checkout@v2
[Python application/build]   ✅  Success - actions/checkout@v2
[Python application/build] ⭐  Run Run docker-compose stack
Starting workspace_db_1 ... done
Starting workspace_app_1 ...
[Python application/build]   ✅  Success - Run docker-compose stack
Starting workspace_app_1 ... done
/github/workspace
[Python application/build]   ✅  Success - Check folder
[Python application/build] ⭐  Run Check files
| total 84
| -rw-r--r-- 1  502 dialout    84 Jun 25 12:46 Procfile
| -rw-r--r-- 1  502 dialout   470 Jul  5 16:06 env_vars.sh
| -rw-r--r-- 1  502 dialout  2306 Jul  8 21:22 README.md
| -rw-r--r-- 1  502 dialout 13206 Jul  8 21:26 code_setup.md
| -rw-r--r-- 1  502 dialout    31 Jul 27 09:49 pyproject.toml
| -rw-r--r-- 1  502 dialout   128 Jul 27 09:49 application.py
| -rw-r--r-- 1  502 dialout   105 Aug  3 21:50 public.env
| -rw-r--r-- 1  502 dialout   636 Aug 10 07:48 Dockerfile
| -rw-r--r-- 1  502 dialout   377 Aug 10 08:10 requirements.txt
| -rw-r--r-- 1  502 dialout   612 Aug 10 12:22 tox.ini
| -rw-r--r-- 1  502 dialout   735 Aug 10 12:52 docker-compose.yml
| -rw-r--r-- 1  502 dialout   739 Aug 10 12:59 docker-compose2.yml
| drwxr-xr-x 3 root root     4096 Aug 10 14:13 db
| drwxr-xr-x 2 root root     4096 Aug 10 14:13 response_templates
| drwxr-xr-x 4 root root     4096 Aug 10 14:13 schema
| drwxr-xr-x 2 root root     4096 Aug 10 14:13 scripts
| drwxr-xr-x 2 root root     4096 Aug 10 14:13 tests
| drwxr-xr-x 2 root root     4096 Aug 10 14:13 views
[Python application/build]   ✅  Success - Check files
[Python application/build] ⭐  Run Lint with flake8
| ERROR: No container found for app_1
[Python application/build]   ❌  Failure - Lint with flake8
Error: exit with `FAILURE`: 1

I understand that the volume containing the app is not set up properly, but how do I change that? In your example, docker-compose.ci.yml and docker-compose.yml don’t mention volumes, so how can it work?
Sorry if I missed something; I’m new to GitHub Actions as well.

1 Like
