Docker image created as non-root, unable to access files when using docker compose: Error: EACCES: permission denied, open '/opt/key.pem'

I have this Dockerfile:

# Use the pre-baked fat node image only in the builder
# which includes build utils preinstalled (e.g. gcc, make, etc).
# This will result in faster and more reliable One App docker image
# builds as we do not have to run apk installs for alpine.
FROM node:12 as builder
WORKDIR /opt/build
RUN npm install -g npm@6.12.1 --registry=https://registry.npmjs.org
COPY --chown=node:node ./ /opt/build
# npm ci does not run postinstall with root account
RUN NODE_ENV=development npm ci --build-from-source
# npm ci does not run postinstall with root account
# which is why there is a dev build
RUN NODE_ENV=development npm run build && \
    mkdir -p /opt/one-app/development && \
    chown node:node /opt/one-app/development && \
    cp -r /opt/build/. /opt/one-app/development
# prod build
RUN NODE_ENV=production npm run build && \
    NODE_ENV=production npm prune && \
    mkdir -p /opt/one-app/production && \
    chown node:node /opt/one-app/production && \
    mv /opt/build/LICENSE.txt /opt/one-app/production && \
    mv /opt/build/node_modules /opt/one-app/production && \
    mv /opt/build/package.json /opt/one-app/production && \
    mv /opt/build/lib /opt/one-app/production && \
    mv /opt/build/build /opt/one-app/production && \
    mv /opt/build/bundle.integrity.manifest.json /opt/one-app/production && \
    mv /opt/build/.build-meta.json /opt/one-app/production

# development image
# docker build . --target=development
FROM node:12-alpine as development
ENV NODE_ENV=development
# exposing these ports as they are default for all the local dev servers
# see src/server/config/env/runtime.js
EXPOSE 3000
EXPOSE 3001
EXPOSE 3002
EXPOSE 3005
WORKDIR /opt/one-app
RUN chown node:node /opt/one-app
CMD ["node", "lib/server"]
COPY --from=builder --chown=node:node /opt/one-app/development ./

# production image
# last so that it's the default image artifact
FROM node:12-alpine as production
ENV NODE_ENV=production
# exposing these ports as they are defaults for one app and the prom metrics server
# see src/server/config/env/runtime.js
EXPOSE 3000
EXPOSE 3005
WORKDIR /opt/one-app
CMD ["node", "lib/server"]
COPY --from=builder --chown=node:node /opt/one-app/production ./

and this is the docker-compose.yml file:

version: '3'

networks:
  one-app-at-test-network:
services:
  one-app:
    build:
      context: ../
      args:
        - http_proxy
        - https_proxy
        - no_proxy
    # tags the built image as:
    image: one-app:at-test
    expose:
      - "8443"
    volumes:
      - ./one-app/one-app-cert.pem:/opt/cert.pem
      - ./one-app/one-app-privkey.pem:/opt/key.pem
      - ./nginx/nginx-cert.pem:/opt/nginx-cert.pem
      - ./extra-certs.pem:/opt/extra-certs.pem
    env_file:
      - ./one-app/base.env
    networks:
      one-app-at-test-network:
    depends_on:
      - "fast-api"
      - "slow-api"
      - "extra-slow-api"
      - "nginx"
    entrypoint: sh -c 'sleep 2s && node lib/server'
  fast-api:
    build:
      context: ./api
      args:
        - http_proxy
        - https_proxy
        - no_proxy
    ports:
      - "8000:80"
    networks:
      one-app-at-test-network:
        aliases:
          - fast.api.frank
  slow-api:
    build:
      context: ./api
      args:
        - http_proxy
        - https_proxy
        - no_proxy
    ports:
      - "8001:80"
    entrypoint:
      - "npm"
      - "start"
      - "--"
      - "3000"
    networks:
      one-app-at-test-network:
        aliases:
          - slow.api.frank
  extra-slow-api:
    build:
      context: ./api
      args:
        - http_proxy
        - https_proxy
        - no_proxy
    ports:
      - "8002:80"
    entrypoint:
      - "npm"
      - "start"
      - "--"
      - "8000"
    networks:
      one-app-at-test-network:
        aliases:
          - extra-slow.api.frank
  nginx:
    image: nginx:1.17.5-alpine
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ./nginx/nginx-cert.pem:/etc/ssl/nginx-cert.pem
      - ./nginx/nginx-privkey.pem:/etc/ssl/nginx-privkey.pem
      - ./nginx/origin-statics:/usr/share/nginx/html
    networks:
      one-app-at-test-network:
        aliases:
          - sample-cdn.frank
  selenium-chrome:
    # specify docker image sha to ensure consistency
    image: selenium/standalone-chrome-debug@sha256:e8bf805eca673e6788fb50249b105be860d991ee0fa3696422b4cb92acb5c07a
    # https://github.com/SeleniumHQ/docker-selenium#running-the-images
    volumes:
      - /dev/shm:/dev/shm
    ports:
      - "4444:4444"
      - "5901:5900"
    networks:
      one-app-at-test-network:
    # sleep 5s to make sure set up is completed prior to opening up chrome
    entrypoint: bash -c '/opt/bin/entry_point.sh & sleep 5s && google-chrome --ignore-certificate-errors --no-first-run --autofill-server-url https://one-app:8443/success'

When I start a GitHub Action using docker compose, I get the following error:

Error: EACCES: permission denied, open '/opt/key.pem'

If I remove the user option in the Dockerfile and allow it to be built with root as the user, the action works. How can I keep USER node and still be able to access the above files?

Adjust the access rights on the file so the node user can read it. A few notes on that:

  • You’ll have to do the adjustment outside the container.
  • The UID the node user has is defined inside the container. It might belong to another username or none at all outside.
  • That file seems to be a cryptographic key, so DO NOT make it globally readable. The easiest thing is probably to adjust the file owner, but that’s a security risk if the UID is in use outside the container for unrelated purposes.

Thank you for responding and for the suggestions, @airtower-luna. Might you have a suggestion on how I can achieve this?

You’ll have to do the adjustment outside the container.

In principle:

  1. Get the UID of the node user inside the container:
docker exec -ti --user node CONTAINER_ID id
  2. Use chown to make the one-app/one-app-privkey.pem file owned by that user.
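In shell terms, the two steps above might look like this. CONTAINER_ID is a placeholder, and the UID of 1000 is an assumption (the official node images create the node user with UID 1000, but check what the id command actually prints):

```shell
# Step 1: find the UID/GID of the node user inside the running container
docker exec -ti --user node CONTAINER_ID id
# typically prints something like: uid=1000(node) gid=1000(node) groups=1000(node)

# Step 2: on the host, make the key file owned by that UID
# (replace 1000 with whatever the previous command reported)
sudo chown 1000 ./one-app/one-app-privkey.pem
# owner-only read, since it's a private key
sudo chmod 400 ./one-app/one-app-privkey.pem
```

Note that chown here takes a numeric UID, not a username: the host may not have any user named node, but the ownership check inside the container is done purely by UID.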

You can use a similar approach to use group access rights instead (with chgrp and possibly chmod to enable read rights for the group).

But: if some other user on the system uses the same UID (or GID, in the case of group access), they will gain access to the file, which you probably don't want. Alternatively, you could change the UID of the node user in the container to that of the current file owner, but that might create the reverse problem and let the container read more files than it should.

Also keep in mind that none of these solutions is portable, in case you want to run the same container image on another host. For that it’d be better to use a named volume to store the certificate and key, so you can adjust access rights in the container scope only.
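A minimal sketch of the named-volume variant might look like this (the volume name certs is an assumption, not something from your compose file):

```yaml
# docker-compose.yml fragment: keep the key in a named volume
# instead of bind-mounting host files
volumes:
  certs:

services:
  one-app:
    image: one-app:at-test
    volumes:
      - certs:/opt/certs:ro
```

You would populate the volume once, e.g. with a short-lived helper container that copies the certificate and key in and chowns them to the node UID. Since the volume's contents live in Docker-managed storage, the ownership adjustment stays in the container scope and the host files never need to change.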


Thank you so much for the suggestions.

I eventually went ahead and used ARG and ENV variables, passing them to the docker image depending on what I was using it for:

  • User root for integration tests
  • User node for building the image to be distributed
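For anyone landing here later, a sketch of that approach might look like the fragment below. The build-arg name RUN_USER is hypothetical (adapt it to your actual build args), and it assumes the builder stage from the Dockerfile above:

```dockerfile
FROM node:12-alpine as production
# Default to the unprivileged node user; override at build time
# for integration tests:
#   docker build --build-arg RUN_USER=root --target=production .
ARG RUN_USER=node
ENV NODE_ENV=production
WORKDIR /opt/one-app
COPY --from=builder --chown=node:node /opt/one-app/production ./
# USER supports variable expansion, so the ARG selects the runtime user
USER ${RUN_USER}
CMD ["node", "lib/server"]
```

Note that an ARG must be declared (or re-declared) after the FROM of the stage that uses it, otherwise it is empty in that stage.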