Re: Github Action connection failure between multiple docker containers

I’m using a services directive to start MySQL, and I’ve even added a healthcheck to make sure the server is up and running before I try to connect to it.

But because I’m spinning up another Docker container, that container cannot connect to the service, even when I pass --network=host to it.

Any thoughts?

test:
    runs-on: ubuntu-latest
    needs: build
    steps:
    - name: Login to ECR
      run: $(aws ecr get-login --region us-east-1 --no-include-email)
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    - name: Test
      run: docker run --network-host -t -e CODECOV_TOKEN $ECR_ADDRESS:dev-$GITHUB_SHA make test
      env:
        ECR_ADDRESS: ${{ secrets.ECR_ADDRESS }}
        CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
    services:
      tests-mysql:
        image: mysql:5.7
        env:
          MYSQL_ROOT_PASSWORD: toor
        options: --health-cmd="mysqladmin ping -h tests-mysql --silent"

Hey,

In case you haven’t figured it out already, two things:

  1. you have put --network-host in the config you’ve pasted; I think it should read --network=host

  2. with host networking, the other containers are accessible as localhost, so your ‘Test’ container should use that instead of tests-mysql. Also, the port the service binds to is not the one you define; you have to use

    env:
      PORT: ${{ job.services.tests-mysql.ports['XXXX'] }}

with XXXX being the port you have defined (or the default one) for the service.
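To make that concrete, here is a minimal sketch of how the service and the port lookup fit together. Exposing container port 3306 via ports: and the MYSQL_PORT variable name are illustrative choices, not something from OP’s config; the runner maps 3306 to a random host port, which is exactly why you read it back from the job context:

```yaml
services:
  tests-mysql:
    image: mysql:5.7
    env:
      MYSQL_ROOT_PASSWORD: toor
    ports:
      - 3306  # container port; the runner maps it to a random host port

steps:
  - name: Test
    run: make test  # assumes the tests read $MYSQL_PORT
    env:
      MYSQL_PORT: ${{ job.services.tests-mysql.ports['3306'] }}
```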

I think all of that is because the services are started on a randomly named network (you can see it in the logs of the container startups), so any subsequent containers run manually should connect to it as well, except we don’t know its name.

Hope that helps.

You can get the container network from the job context (https://help.github.com/en/articles/contexts-and-expression-syntax-for-github-actions#job-context).

For example, you would want to run docker run --network "${{ job.services.network }}" ... to join it to the pre-existing network where it can reach your service.

You’ll probably also need to plumb through the hostname of the MySQL service (tests-mysql) so that the code running in “$ECR_ADDRESS:dev-$GITHUB_SHA” can find it. Something like docker run -e MYSQL_HOST=tests-mysql ...

To use host networking, you would need to provide that option to the service as well, but I don’t recommend it. Using the Docker network is a nicer level of isolation: you won’t conflict with ports on the host, need elevated privileges to bind to low-range ports, etc.
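Putting those two suggestions together, OP’s ‘Test’ step might look something like this sketch. The MYSQL_HOST variable name is an assumption about what the code under test reads; adjust it to whatever your application expects:

```yaml
- name: Test
  run: >
    docker run --network "${{ job.services.network }}"
    -e MYSQL_HOST=tests-mysql -e CODECOV_TOKEN
    -t $ECR_ADDRESS:dev-$GITHUB_SHA make test
  env:
    ECR_ADDRESS: ${{ secrets.ECR_ADDRESS }}
    CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
```

Inside the service network, the MySQL container is reachable by its service id (tests-mysql) on its normal container port, so no published ports are needed.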

@dakale wrote:

You can get the container network from the job context (https://help.github.com/en/articles/contexts-and-expression-syntax-for-github-actions#job-context).

Thanks, I’ve missed that.

To use host networking, you would need to provide that option to the service as well, but I don’t recommend it. Using the Docker network is a nicer level of isolation: you won’t conflict with ports on the host, need elevated privileges to bind to low-range ports, etc.

You’re right about the isolation, but I did not have to give the option to the service for it to work (same context as OP: one DB service and one docker run). It seems logical because, if I’m not mistaken, whatever the network, everything is reachable on localhost from the host by default, and the --network=host option makes the container act as the host network-wise, right?

That seems surprising to me. Did you publish ports on the service container (options: -p 80:8080, or ports: [80:8080], e.g.)?

While it is true that you can technically reach a container from the host over the default docker0 interface (if no --network option was given), by doing something like:

    container_id=$(docker run --rm -d nginx)
    container_addr=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' "$container_id")
    curl "$container_addr"

I doubt that is what you meant. In that case, port 80 is not bound on localhost, and we cannot simply curl localhost:80. However, if you did publish a port, it would be bound on localhost and thus reachable from the host, or from a container that used host networking:

$ docker run --rm -d -p 8080:80 nginx

$ docker run --rm -it --network=host buildpack-deps:bionic-curl

root@ubuntu:/# curl localhost:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

Hopefully that clarifies some things. I’m not sure exactly what might be going on without seeing your workflow file or a sample reproducing it.

Yes, I did indeed publish it, so I’m definitely in the second example’s situation. It did not occur to me not to publish the port, though; I’ll give it a second thought.

Anyway, I was just answering OP and giving him a way to make his setup work, but your solution is indeed far better and I’ll use it as well, thanks for that.

Just to confirm for posterity (and bookmarking): the preferred way is to pass --network "${{ job.services.network }}" to the docker-run container and not publish the port on the service, right?

Yea, if you want to run containers as part of the job, that is what I’d recommend.

As an aside, depending on the use case, you may also want to use a job container (https://help.github.com/en/articles/workflow-syntax-for-github-actions#jobsjob_idcontainer). This will pull the base image and execute the steps inside that container, so you only have to update the steps in the workflow file instead of building and pushing a new container image with the scripts inside whenever you update your build steps. The other benefit is that the job container is automatically run with the correct --network, which keeps your script simpler. If you need more complex logic to dynamically determine the name or tag of the container to run, or to run steps that authenticate with a private registry before pulling an image, etc., then the approach from the original post is probably better for now.
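For comparison, a job-container version of OP’s test job could be sketched like this. The container image and the MYSQL_HOST variable are illustrative assumptions (OP pulls a private ECR image, which would need registry authentication and is a case where the original approach is simpler):

```yaml
test:
  runs-on: ubuntu-latest
  container: buildpack-deps:bionic  # illustrative public image; steps run inside it
  services:
    tests-mysql:
      image: mysql:5.7
      env:
        MYSQL_ROOT_PASSWORD: toor
      options: --health-cmd="mysqladmin ping --silent"
  steps:
    - uses: actions/checkout@v1
    - run: make test  # the job container shares the service network, so
      env:            # mysql is reachable by its service id
        MYSQL_HOST: tests-mysql
```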
