We have a series of integration tests that run on the GitHub Actions runner and use the built-in Kubernetes environment to bring up a small cluster and run the appropriate tests. I've written a small bash script that runs when the integration tests fail: it queries every node in the cluster for its state (including the Kubernetes logs), collects everything into one directory, and then uses the built-in aws CLI to push all of the artifacts to an S3-compatible bucket we have on DigitalOcean.
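For context, the flow looks roughly like the sketch below. This is not the actual gist; the `kubectl` queries, the function names, and the DigitalOcean endpoint URL are all assumptions for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the collect -> tar -> upload flow described above.
# Assumes kubectl and the aws CLI are on PATH and BUCKET_NAME is exported.
set -euo pipefail

collect_state() {
  # Dump node state and per-pod logs into the directory given as $1.
  local dir="$1"
  kubectl get nodes -o wide > "$dir/nodes.txt"
  for pod in $(kubectl get pods -o name); do
    kubectl logs "$pod" > "$dir/${pod#pod/}.log" 2>&1 || true
  done
}

bundle_artifacts() {
  # Tar up the collected directory ($1) into the tarball path ($2).
  tar -czf "$2" -C "$1" .
}

upload_artifacts() {
  # Push the tarball to the S3-compatible Space (endpoint URL is a guess).
  aws s3 cp "$1" "s3://$BUCKET_NAME/$(basename "$1")" \
    --endpoint-url "https://nyc3.digitaloceanspaces.com"
}
```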
The step in GitHub Actions looks like:
```yaml
- name: Debug workflow if failed
  if: failure()
  run: |
    export BUCKET_NAME=***
    export AWS_ACCESS_KEY_ID=***
    export AWS_SECRET_ACCESS_KEY=***
    ...
    curl -s https://gist.githubusercontent.com/acud/2c219531e832aafbab51feffe5b5e91f/raw/cd1f12e73f0b0660a376f72e47135c7c966a5998/beekeeper_artifacts.sh | bash
    ...
    # Tunshell stuff goes here...
```
The gist does all the heavy lifting: it creates the tarball and then uploads it using the `aws s3 cp` command. However, the script fails with a broken pipe when being run directly through the failure trigger:
```
tr: write error: Broken pipe
tr: write error
```
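For reference, and not necessarily what the gist is hitting, this class of message can be reproduced when SIGPIPE is ignored in the environment and a downstream consumer exits before the writer is done, so the writer's `write()` fails with EPIPE instead of the process being killed silently:

```shell
# Hypothetical reproduction of the error class (an assumption, not a
# diagnosis of the gist): ignore SIGPIPE in a subshell, then let head
# close the pipe early while tr is still writing.
repro_broken_pipe() {
  ( trap '' PIPE; tr -dc 'a-z' < /dev/urandom | head -c 4 >/dev/null ) 2>&1
}
```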
It isn't really clear to me why this is happening. Stranger still, if I SSH into the node using tunshell and execute the exact same script manually, it doesn't fail. Any ideas?