Hi there! I was playing with GitHub Actions (love them so far) and ran into a problem.
I'm trying to set up E2E testing for a service using TestCafe. Inside my job I first start the service locally on port 3000 (it should be accessible on http://localhost:3000/). After that I run my testcafe-action (a Docker action with all the browsers pre-installed), which targets http://localhost:3000/.
As expected, I get an error which means that nothing is running on that address: the server is running on http://localhost:3000 of the DOCKER_HOST, which is probably not exposed to child containers. Is there any way to get the hostname of the DOCKER_HOST on which the service is running, so that the correct address can be targeted in such cases?
I've looked into services, but unfortunately they do not support local Dockerfiles (it is only possible to target a pre-built image). Is there some easier way to achieve what I want?
Docker on Linux does not automatically map the host networking into the container. According to this Stack Overflow post, you can access host ports through the default route out of the container. Here is a commit that at least makes things work, but it is not ideal for a reusable action.
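To sketch the route-based trick (the `parse_gateway` helper is mine, not part of any action): inside a container on Docker's default bridge network, the host sits at the default gateway, so its IP can be recovered from `ip route` output.

```shell
# Extract the default-gateway IP from `ip route` output; inside a Docker
# container on the default bridge this is the address of the Docker host.
parse_gateway() {
  awk '/^default/ {print $3}'
}

# Typical `ip route` output inside such a container (in a real container
# you would pipe `ip route` itself into parse_gateway):
HOST_IP=$(echo "default via 172.17.0.1 dev eth0" | parse_gateway)
echo "Service should be reachable at http://$HOST_IP:3000/"
```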
Thank you for the reply. This does work, but as you said, it doesn't really allow for a reusable action. Is there a way to get the host IP in the workflow and pass it to the action as BASE_URL? If this is currently not possible, it would be nice if you had this use case in mind. I've been using such workflows in GitLab and would love to see them supported inside GitHub Actions.
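As a rough sketch of what I mean, something like this might work, though it is clunky. The `testcafe-action` reference and its `base_url` input are hypothetical here, and the gateway lookup assumes the default bridge network:

```yaml
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Start the service in the background
        run: yarn start &   # serves on http://localhost:3000/ on the runner
      - name: Find the Docker bridge gateway
        id: hostip
        run: echo "::set-output name=ip::$(docker network inspect bridge --format '{{ (index .IPAM.Config 0).Gateway }}')"
      - name: Run E2E tests
        uses: someone/testcafe-action@master   # hypothetical reusable action
        with:
          base_url: "http://${{ steps.hostip.outputs.ip }}:3000/"
```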
In GitLab, are you able to do this when you start a process on the VM and then run a container? Or are you running everything inside containers?
Actually, the flow I'm using in GitLab is a bit different from what I am trying to do inside GitHub Actions. That is because I'm trying to use the reusable testcafe-action here, since I really like the concept of reusable actions.
I think there should be some support for mapping the host networking into child containers. For example, creating a network called "dind" so that nested Docker containers could reach the service under http://dind:3000 or something similar.
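For illustration, this is roughly what such a shared network looks like with plain Docker today (the image names and the port are placeholders of mine); containers on the same user-defined network can reach each other by name:

```shell
# Sketch only: image names are placeholders, requires a running Docker daemon.
docker network create dind                                  # shared user-defined bridge
docker run -d --network dind --name dind my-service:latest  # service, reachable as "dind"
# Any other container on the same network can now target it by name:
docker run --rm --network dind my-testcafe:latest http://dind:3000/
```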
In GitLab I'm doing something like this:
```yaml
test:e2e:
  stage: test
  image: circleci/node:10.14-browsers
  services:
    - name: docker:dind
      alias: dind
  variables:
    DOCKER_HOST: 'tcp://docker:2375'
    DOCKER_DRIVER: overlay2
  before_script:
    - docker-compose -H $DOCKER_HOST -f monitoring-service/docker-compose.yml up -d service # start monitoring-service in background
    - BROWSER=none yarn start 2>&1 > /dev/null & # start React app in background
    - node_modules/wait-on/bin/wait-on tcp:dind:1337 --timeout 420000 # wait for monitoring-service to start
    - node_modules/wait-on/bin/wait-on http-get://localhost:3000 --timeout 420000 # wait for React app to start
  script:
    - yarn test:e2e:headless
```
The flow is a bit different, but it lets me map the networking for the child containers. I think I could get the same flow to work in GitHub Actions (not sure), but I don't like the part where I was forced to set the image to `circleci/node:10.14-browsers` in GitLab instead of being able to use anything. I liked the GitHub Actions approach with testcafe-action, where I could abstract that away from the user and just let them use any Docker image as a base for their job.
P.S. I'm not an expert on Docker networking, so I might have used some terms incorrectly.