NVIDIA driver 410.57
The error occurs after running:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused "process_linux.go:413: running prestart hook 1 caused \"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --utility --require=cuda>=10.1 brand=tesla,driver>=384,driver<385 brand=tesla,driver>=396,driver<397 brand=tesla,driver>=410,driver<411 --pid=13416 /var/lib/docker/overlay2/97549cacbd12a4779f071ecb38487cd52c90c6fcbf99782762e194ac84b273e7/merged]\\\\nnvidia-container-cli: requirement error: unsatisfied condition: brand = tesla\\\\n\""": unknown.
I've been using this tutorial: https://firstname.lastname@example.org/docker-...4-cb80f17cac65
I am unsure of how to fix this error. I've (unsuccessfully) tried restarting the Docker service (systemctl restart docker). Thanks!
In this case a driver requirement check is occurring: the --require=cuda>=10.1 string in the error means the image expects a driver new enough for CUDA 10.1, with exceptions only for Tesla-brand cards, and the unsatisfied condition reported is exactly "brand = tesla". So a newer driver than 410.57 may well be needed.
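As a quick sanity check (assuming a driver is installed on the host at all), you can ask the driver itself which version is present:

$ nvidia-smi --query-gpu=driver_version --format=csv,noheader
410.57

If nvidia-smi is missing or fails here, the problem is on the host side, not in Docker.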
There are also situations where the new way to install GPU support for Docker (nvidia-container-toolkit, which replaces the process that required the nvidia-docker2 package) is misinterpreted, causing similar errors. From the instructions, one can erroneously conclude that the NVIDIA drivers and CUDA packages no longer need to be installed when using Docker. This is not true for the drivers, which still have to be installed on the host (the CUDA package can possibly be skipped if you are not developing on CUDA), but without the drivers you get a lot of errors of this type. The behaviour may look erratic because some machines and distributions already have the drivers installed, while on other, fresh machines they are missing.
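As a rough sketch for Ubuntu (the package name nvidia-driver-418 is an assumption here; pick whatever driver your distribution and GPU actually support), the host-side fix would look like:

$ sudo apt-get install --no-install-recommends nvidia-driver-418
$ sudo reboot
$ nvidia-smi

Once nvidia-smi works on the host, the container test from the question should pass as well; with Docker 19.03+ and nvidia-container-toolkit the equivalent test is:

$ sudo docker run --gpus all --rm nvidia/cuda nvidia-smi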