Service container: use the image built in the job

Hello everyone,

I’m new to GitHub Actions and I’m experimenting with Docker container actions and service containers.

In brief, my workflow looks like this:

  1. Build 2 different container images with required software packages
  2. Run these containers as 2 services
  3. Run tests that access both of these service containers

Per the official documentation, service container images are pulled from Docker Hub.

So, my questions are:

  1. Does GH Actions support running service containers using an image created in the current (or a previous) job that runs as part of the commit/PR?
  2. If #1 is not feasible, the fallback approach I’m thinking of is to upload the image to the quay.io registry rather than Docker Hub. Can a GH Actions workflow upload container images to quay.io?
  3. Can service containers use an image from quay.io?

Thanks,

Dinesh

  1. The answer is no. Service containers are started before any of the job’s steps run, so they can only use images that already exist in a registry, not an image built within the job.

  2. You could publish your image to quay.io during workflow execution: run docker commands to log in to quay.io and then push the image to it (see the sketch after this list).

  3. If your quay.io repository is a public one, it can be used in the service container image key. If it is a private one, there is no way to provide credentials for the registry in the service container section, so the image could not be pulled from quay.io.
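A minimal sketch of the push step, where the secret names and the quay.io/myorg/myimage path are placeholders you would replace with your own:

      # Hypothetical step; secret names and the image path are placeholders.
      - name: Push image to quay.io
        run: |
          echo "${{ secrets.QUAY_PASSWORD }}" | docker login quay.io -u "${{ secrets.QUAY_USERNAME }}" --password-stdin
          docker build -t quay.io/myorg/myimage:${{ github.sha }} .
          docker push quay.io/myorg/myimage:${{ github.sha }}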

Thanks for the response.

I can keep my quay repos public.

Do you or the team concerned have plans to implement #1? Or is there a feature request on your roadmap to host a temporary container registry within GH Actions?

Also, I have a question unrelated to the original thread. Since the limits of shared runners “might” not meet my requirements for running multiple service containers, I was planning to use self-hosted runners. However, I was very worried about the security warning in the documentation. Do you have any plans to address these security concerns on your roadmap? My project is open source and receives PRs from the community. If this question needs to be in a new thread, I can start one too :slight_smile:

–Dinesh

Friendly follow-up: are there plans to implement #1?

I’m facing the same issue, except my images are private. For context, I’m running tests for each PR, which requires building images derived from the PR.

How about running the images using Docker commands within the same job? I have a job that builds and runs container images for tests using Docker Compose.
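Roughly this shape, assuming a docker-compose.yml with build: entries and a placeholder test script:

# Hypothetical job; the compose file contents and test command are placeholders.
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build images and start services
        run: |
          docker-compose build
          docker-compose up -d
      - name: Run tests
        run: ./run-tests.sh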

That’s a good suggestion, and I have tried it. But I need to be able to access each container’s feeds via http://localhost:port. And some of my images are on Amazon ECR, so I don’t think I can pull them as service containers (which is why I build them).

Unfortunately, the docs state that I have to use service containers to access localhost.

I don’t see it say that anywhere. You should be able to use --publish as usual when starting containers in your workflow.
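For instance, something like this in a workflow step, where the image tag is a placeholder for one built earlier in the job:

# Hypothetical command; the image name is a placeholder.
docker run --detach --publish 8088:8088 my-built-image:latest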

You can use credentials when defining service containers, or docker login like on the command line, if authentication is the issue with Amazon ECR.
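For example, a sketch of a service container pulling from a private registry, where the image path and secret names are placeholders:

services:
  my-service:
    # Placeholder image and secret names
    image: quay.io/myorg/myimage:latest
    credentials:
      username: ${{ secrets.REGISTRY_USERNAME }}
      password: ${{ secrets.REGISTRY_PASSWORD }}
    ports:
      - 8088:8088

For Amazon ECR specifically, the password would be a token from aws ecr get-login-password, which expires after 12 hours, so running docker login in a step may be more practical there.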

Sorry, my bad. It’s not in the docs, but I experienced it when I ran docker-compose without a service container, and got this issue:

Error: connect ECONNREFUSED 127.0.0.1:8088
      at TCPConnectWrap.afterConnect [as oncomplete]

I can try Amazon ECR. My last resort is pulling the image created per the PR branch repo.

For better context, let’s say there are four services: A, B, C and D.

They depend on each other as below in docker-compose.yml.

A -> B -> C -> D.

But I’ll be opening a PR against C’s repo.

Each PR will trigger a GA workflow that includes building C’s image together with the others, somehow via the service containers method.

If I don’t, when I try accessing C’s localhost URL, I’ll get the error below (thread).

Error: connect ECONNREFUSED 127.0.0.1:8088
      at TCPConnectWrap.afterConnect [as oncomplete]

It’s hard to say what should or shouldn’t work without seeing your container setup, I’m afraid. One thing I can say is that Actions jobs set up their own network for services and job container (if any), so mixing those with Docker Compose or docker run is going to be a bit tricky. The job context contains some information on that network.
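For example, a manually started container can be attached to the job’s network so it can reach the service containers; this assumes the job defines services, and the image name is a placeholder:

      # Hypothetical step; my-image is a placeholder.
      - name: Run a container on the job's network
        run: docker run -d --network ${{ job.container.network }} my-image:latest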

Thank you for your help so far, Luna. I’ll look into the resource you mentioned.

If it provides more context, I’ve added a template of my docker-compose.yml below. Each of the services except the db images has its own repo. They depend on each other as follows:

C -> B + A -> db

I start my GA workflow like below:

  1. Open a PR for C, which triggers the GA workflow
  2. Check out each of the above repos (except db, which is public)
  3. Build each of their images
  4. Run docker-compose up
  5. Check out my tests repo and run the tests against C’s localhost URL (http://localhost:3333)

I keep getting the ECONNREFUSED error. But I want to get the feeds from C via its URL. My team wants to avoid pulling the repos’ images; we want to build them instead.
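A minimal sketch of such a workflow; the org/repo names, the PAT secret, and the test command are placeholders, and each repo is assumed to have a Dockerfile. ECONNREFUSED often just means the service inside the container hasn’t started listening yet, so the wait step below may matter as much as the build steps:

# Hypothetical workflow; repo names, secret names, and paths are placeholders.
name: pr-tests
on: pull_request
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      # C is the repo the PR targets
      - uses: actions/checkout@v2
      # Private sibling repos need a token with access to them
      - uses: actions/checkout@v2
        with:
          repository: myorg/A
          token: ${{ secrets.REPO_TOKEN }}
          path: A
      - uses: actions/checkout@v2
        with:
          repository: myorg/B
          token: ${{ secrets.REPO_TOKEN }}
          path: B
      - name: Build the images the compose file expects
        run: |
          docker build -t ecr-aws/A A
          docker build -t ecr-aws/B B
          docker build -t ecr-aws/C .
      - name: Start the stack
        run: docker-compose up -d
      - name: Wait for C to listen on port 3333
        run: |
          for i in $(seq 1 30); do
            curl -fsS http://localhost:3333 && exit 0
            sleep 2
          done
          exit 1
      - uses: actions/checkout@v2
        with:
          repository: myorg/tests
          token: ${{ secrets.REPO_TOKEN }}
          path: tests
      - name: Run tests against C
        run: cd tests && ./run-tests.sh http://localhost:3333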

version: "3"

services:
  db:
    image: mongo:latest
    container_name: db
  db-seed:
    image: mongo:latest
    container_name: db_seed
    links:
      - db
    volumes_from:
      - db
    command:
      /import.sh
  A:
    image: ecr-aws/A
    container_name: api
    ports:
        - 1111:1111
    env_file:
      - 'A/.env'
    depends_on: 
      - db
  B:
    image: ecr-aws/B
    container_name: B
    ports:
      - 2222:2222
    env_file:
      - 'B/.env'
    depends_on: 
      - db
  C:
    image: ecr-aws/C
    container_name: C
    ports:
      - 3333:3333
    env_file:
      - 'C/.env'
    depends_on:
      - A

From the compose file, connecting to localhost:3333 should work, assuming the service doesn’t fail to start due to some other issue. I think any further analysis would need at least the workflow (to see if there’s a bug there), and ideally logs.
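For grabbing those logs, a step like this placed after the test step would dump the output of every compose service whenever the job fails:

      - name: Dump compose logs on failure
        if: failure()
        run: docker-compose logs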