docker: torch.cuda.is_available() returns False even though nvidia-smi works

  cuda, docker, nvidia-docker, pytorch

I have two servers with the following environments:

  • one with NVIDIA-SMI 440.33.01, Driver Version 440.33.01, CUDA Version 10.2, and
  • another with NVIDIA-SMI 455.45.01, Driver Version 455.45.01, CUDA Version 11.1

I installed Docker, nvidia-docker, and deepo (GPU version). Now let’s start a shell in a container:
docker run -it --gpus all deepo bash. In the shell,

  • nvidia-smi returns the correct info,
  • but PyTorch fails to detect the GPU:
>>> import torch
>>> torch.cuda.is_available()
False
>>> torch.backends.cudnn.enabled
True

The same problem occurs on both of my servers. Could anyone help me solve this?
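For context, one common culprit in setups like this (an assumption here, not confirmed by the output above) is a mismatch between the CUDA version the driver supports (shown by nvidia-smi) and the CUDA version the installed PyTorch wheel was built against (torch.version.cuda): the wheel's CUDA version must not be newer than what the driver supports. The comparison can be sketched as follows, where cuda_build_compatible is a hypothetical helper, not part of PyTorch:

```python
def cuda_build_compatible(driver_cuda: str, torch_cuda: str) -> bool:
    """Rough check: the driver's supported CUDA version must be >=
    the CUDA version the PyTorch wheel was built against."""
    def parse(version: str):
        # Compare only major.minor, e.g. "10.2" -> (10, 2)
        major, minor = version.split(".")[:2]
        return int(major), int(minor)
    return parse(driver_cuda) >= parse(torch_cuda)

# driver_cuda: "CUDA Version" from nvidia-smi on the host
# torch_cuda:  torch.version.cuda inside the container
print(cuda_build_compatible("10.2", "10.2"))  # True: versions match
print(cuda_build_compatible("10.2", "11.1"))  # False: wheel built for newer CUDA than the driver supports
```

Inside the container, torch.version.cuda shows what the wheel was built with; if it is newer than the driver's CUDA version, torch.cuda.is_available() can return False even while nvidia-smi works.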

Source: Docker Questions
