See e.g. https://github.com/datalad/datalad-extensions/runs/698281348 where it just says that the last step (Run datalad tests) has failed, taking over 58m to complete. My guess is that it stalled and may have flooded the output at some point. No logs are available online or in the download bundle (they are present for the other matrix runs).
When the runner machine runs out of disk, the runner will crash. I am not sure whether this is the cause of your scenario; I am trying to involve some senior engineers to investigate your issue. Your patience will be appreciated.
Thank you in advance.
FWIW, there is an idea that it might not be disk space but memory exhaustion: we are now observing stalling in a different testing setup (but with a comparable state of our and external tools), accompanied by
```
packages/datalad/cmd.py", line 15, in <module>
    import subprocess
  File "/opt/python/3.7.1/lib/python3.7/subprocess.py", line 152, in <module>
    import _posixsubprocess
ImportError: /home/travis/virtualenv/python3.7.1/lib/python3.7/lib-dynload/_posixsubprocess.cpython-37m-x86_64-linux-gnu.so: failed to map segment from shared object
/home/travis/virtualenv/python3.7.1/bin/python: error while loading shared libraries: libpython3.7m.so.1.0: cannot create shared object descriptor: Cannot allocate memory
FAIL
```
But we do get full logs on Travis CI: https://travis-ci.org/github/datalad/datalad/builds/700119573
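FWIW, one crude way to test the memory-exhaustion hypothesis would be to sample `MemAvailable` in a background thread while the test suite runs, and check whether it drops toward zero right before the stall. A minimal sketch, assuming a Linux runner (`log_memory` is a hypothetical helper, not part of DataLad or any CI tooling):

```python
import os
import threading


def log_memory(stop, interval=60.0):
    """Periodically print MemAvailable from /proc/meminfo (Linux only)
    until the stop event is set."""
    if not os.path.exists("/proc/meminfo"):
        return  # not a Linux system; nothing to sample
    while not stop.is_set():
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    # e.g. "MemAvailable:    1234567 kB"
                    print(line.strip())
                    break
        stop.wait(interval)  # sleep, but wake early if stopped


stop = threading.Event()
monitor = threading.Thread(target=log_memory, args=(stop, 5.0), daemon=True)
monitor.start()
# ... the actual test run would go here ...
stop.set()
monitor.join()
```

Interleaved with the test log, the samples would show whether available memory collapses before the runner goes silent.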