Imagine you have a custom third-party Docker container action implemented in Java. This action uses the
JAVA_HOME environment variable to run its logic. Think of the Gradle application plugin, which creates an executable Java application that relies on that environment variable. Imagine this Docker container action is based on an image with OpenJDK 17 preinstalled.
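To see why such an application cares about JAVA_HOME, here is a simplified sketch of the lookup performed by Gradle-generated start scripts: prefer `$JAVA_HOME/bin/java` when the variable is set, otherwise fall back to `java` on the PATH (the hardcoded path below is an assumption for illustration):

```shell
#!/usr/bin/env sh
# Simplified sketch of the JAVA_HOME lookup in a Gradle-generated start script.
JAVA_HOME=/usr/java/openjdk-17   # pretend this is the container's JDK
if [ -n "$JAVA_HOME" ]; then
    # JAVA_HOME wins, whatever it points to — valid or not
    JAVACMD="$JAVA_HOME/bin/java"
else
    JAVACMD=java
fi
echo "$JAVACMD"
```

If JAVA_HOME points at a directory that does not exist, the script still builds the command from it and fails only at launch time.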
This action is really simple: it prints the environment and the JDK location:
```sh
#!/usr/bin/env sh

echo "Environment:"
printenv
printf "\n"

echo "JAVA_HOME:"
printenv JAVA_HOME
printf "\n"

echo "Java location:"
command -v java
printf "\n"

echo "Java version:"
java --version
```
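For reference, a Docker container action like this is typically packaged as an `action.yml` pointing at a Dockerfile. The files below are a hedged sketch of that convention, not the actual contents of madhead/actions-env-leak; the base image is an assumption matching the JDK path shown in the output:

```yaml
# action.yml (sketch)
name: 'Env leak demo'
description: 'Prints the environment and the JDK location'
runs:
  using: 'docker'
  image: 'Dockerfile'
```

```dockerfile
# Dockerfile (sketch): the base image ships OpenJDK 17 under /usr/java/openjdk-17
FROM openjdk:17
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```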
If you just run this custom action, it would work:
```yaml
- uses: madhead/actions-env-leak@main
```
The output would be:
```
JAVA_HOME:
/usr/java/openjdk-17

Java location:
/usr/java/openjdk-17/bin/java
```
Everything is fine.
Now, imagine your workflow also needs Java, but a different version:
```yaml
- uses: actions/setup-java@v2
  with:
    distribution: 'adopt'
    java-version: '11'
```
Now, if you run the custom action again, the output is screwed up:
```
JAVA_HOME:
/opt/hostedtoolcache/Java_Adopt_jdk/11.0.11-9/x64

Java location:
/usr/java/openjdk-17/bin/java
```
Notice that JAVA_HOME now points to the wrong location! The variable leaked from the workflow into the container. If the custom action relies on it to launch the Java application, it will fail, because that path does not exist inside the container!
This is what actually happened with one of my actions. The issue is discussed here.
I have created a minimal reproducible example here.
Why do the variables leak? Why is this behaviour implicit? Shouldn't it be explicit? A workflow author may want to override some of the variables, but that should not be the default, should it? Can action or workflow authors prevent this leakage from happening right now?
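Until the behaviour changes upstream, one defensive option for action authors is to reset the variable at the top of the entrypoint, before anything else runs. A minimal sketch, assuming the image's JDK lives at the path shown in the output above:

```shell
#!/usr/bin/env sh
# Defensive entrypoint sketch: ignore whatever JAVA_HOME leaked in from the
# runner and pin it to the JDK actually baked into this container image.
# /usr/java/openjdk-17 is an assumption matching the openjdk:17 image.
JAVA_HOME=/usr/java/openjdk-17
export JAVA_HOME
echo "$JAVA_HOME"
# The action would then launch its tool via "$JAVA_HOME/bin/java" ...
```

This trades flexibility for robustness: workflow authors can no longer intentionally override the action's JDK, but the action can no longer be broken by an accidental leak either.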