Unable to resolve host of job service in GitHub Actions

Hey, I was attempting to set up a dind service to use ory/dockertest in my code, but I was unable to connect to the dind container. I made a simple job to test it out:

jobs:
  test:
    runs-on: ubuntu-latest

    container: ubuntu

    services:
      echo-server:
        image: hashicorp/http-echo

    steps:
      - name: Test
        run: apt update && apt install -y curl && curl echo-server:5678

This results in curl reporting “Could not resolve host”. I’m wondering what could be happening here; it’s quite similar to the documentation’s example of using a Redis service in a job, apart from the health-check options.

Thanks!

The difference from the example is that the example uses a job container. That job container is just another Docker container on the same Docker network as the service containers, so Docker makes name resolution work between them. A process running directly on the runner host doesn’t get that.

Options that come to mind, depending on what you need (rough sketches below):

  • Add a port mapping to the service so you can access it on localhost.
  • Retrieve the IP address via docker inspect and add an /etc/hosts entry with the hostname you want.
  • Retrieve the IP address via docker inspect and use it directly to connect.
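
For the first option, the service definition might look something like this (a rough sketch, untested here; hashicorp/http-echo listens on port 5678 by default, and mapping to the same host port is an arbitrary choice):

services:
  echo-server:
    image: hashicorp/http-echo
    ports:
      # map container port 5678 to the same port on the host,
      # so the job can reach the service via localhost:5678
      - 5678:5678

For the docker inspect options, something along these lines prints a container’s IP address (the container name is whatever Docker assigned to the service container):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>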

I had my job running in container: ubuntu though, no? :thinking:

Right, I missed that somehow, sorry. Not sure how that happened. :sweat_smile:

Hm, in that case it should work in theory. Is there any chance you could share a link to the actual job logs?

feat: add test ci · 2785/warframe-assistant@804b003 · GitHub ← the job log. I also ran a couple of runs on my local machine with act, same deal :thinking:

Thanks a ton for helping, by the way!

This is really confusing, I can’t see anything in the logs that would explain why it doesn’t work. :thinking:

What I’d do from here is check everything that might influence name resolution: /etc/resolv.conf, /etc/hosts, what dig echo-server says, or whether ping works (e.g. with a debug step like the one below). I hope that helps narrow down at least the area where the problem is.
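
For example, a throwaway debug step along these lines (a sketch; on Ubuntu, dnsutils provides dig and iputils-ping provides ping):

steps:
  - name: Debug name resolution
    run: |
      apt update && apt install -y dnsutils iputils-ping
      cat /etc/resolv.conf
      cat /etc/hosts
      dig echo-server
      ping -c 1 echo-server || true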

Alright, I’ll give that a try :slight_smile:

I have a lead on what might be happening. I think GitHub is not connecting the job container to the user-defined bridge network. When I cat /etc/resolv.conf inside the action itself, I get back the same resolv.conf as on my local machine, whereas it would have been overridden with a different nameserver if the container were connected to the Docker network.

I spun up a couple of containers on my local machine and tested this out: they cannot resolve / ping / anything by name when they are just on the default bridge (roughly the experiment sketched below).
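
Roughly what I tried, from memory (container and network names made up):

# default bridge: container names are not resolvable
docker run -d --name web nginx
docker run --rm busybox ping -c 1 web        # fails: bad address 'web'

# user-defined network: Docker's embedded DNS resolves names
docker network create testnet
docker run -d --name web2 --network testnet nginx
docker run --rm --network testnet busybox ping -c 1 web2   # works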

That looks like a problem, yes. In the workflow logs the --network option for both containers selects the same network, though. :thinking:

Normally Docker should set /etc/resolv.conf in a container connected to a custom network (not the default bridge) to something like

nameserver 127.0.0.11
options ndots:0

so that Docker’s embedded DNS server can handle the lookups.
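
You can check this locally with something like (a sketch; the network name is made up):

docker network create resolv-test
docker run --rm --network resolv-test busybox cat /etc/resolv.conf
# expect nameserver 127.0.0.11 here, unlike on the default bridge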

I suspect that the job container isn’t actually getting connected to the Docker network, which is rather weird…
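
One way I plan to confirm that locally with act (a sketch; the container ID comes from docker ps while the job is running):

docker ps    # find the job container's ID
docker inspect -f '{{json .NetworkSettings.Networks}}' <container-id>
# shows which networks the container is actually attached to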