TensorFlow/NVIDIA/CUDA Docker mismatched versions

  cuda, docker, nvidia, tensorflow

I am trying to use TensorFlow with the NVIDIA runtime in Docker, but I am hitting the following error:

docker run --runtime=nvidia -it --rm tensorflow/tensorflow:latest-gpu python -c "import tensorflow as tf; tf.enable_eager_execution(); print(tf.reduce_sum(tf.random_normal([1000, 1000])))"

docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:424: container init caused \"process_linux.go:407: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --utility --require=cuda>=10.0 brand=tesla,driver>=384,driver<385 --pid=5393 /var/lib/docker/overlay2/......./merged]\nnvidia-container-cli: requirement error: unsatisfied condition: brand = tesla\n\\\"\"": unknown.
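If I read the --require=cuda>=10.0 part of the error correctly, the image expects a host driver new enough for CUDA 10, so presumably the first thing to check is the host driver version. I assume something like the following (run on the host, not inside a container) would show it:

nvidia-smi --query-gpu=name,driver_version --format=csv,noheader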

I get a similar error when trying to run nvidia-smi:

docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi

But when I run nvidia-smi with the cuda:9.0-base image, it works like a charm:

docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi

Do I need to get CUDA 10 working, or can I run TensorFlow with CUDA 9? And how can I run the TensorFlow Docker image against cuda:9.0-base? (I'm still a Docker newbie.)
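For reference, my guess (I'm not sure this is the right approach) is to pin an older TensorFlow image tag that was still built against CUDA 9 instead of latest-gpu. The exact tag below (tensorflow/tensorflow:1.12.0-gpu) is just an assumption on my part:

docker run --runtime=nvidia -it --rm tensorflow/tensorflow:1.12.0-gpu python -c "import tensorflow as tf; tf.enable_eager_execution(); print(tf.reduce_sum(tf.random_normal([1000, 1000])))"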

Thanks a lot!

Source: StackOverflow
