Docker Compose and CUDA: CUDA Version N/A in docker container


The problem, in short: I use docker-compose to build and start five containers. The main container is called web_interface; the others run supporting services such as nginx, redis, mongodb, etc.

I would like to use PyTorch in part of my application, so I check whether torch.cuda.is_available() returns True. On the host CUDA server, nvidia-smi reports the following (omitting the two GPUs and the currently running processes):

| NVIDIA-SMI 460.91.03    Driver Version: 460.91.03    CUDA Version: 11.2     |

When I run docker-compose up after following the guidelines for GPU support with Docker Compose stated here, and then open a bash shell in the container, nvidia-smi reports:

| NVIDIA-SMI 460.91.03    Driver Version: 460.91.03    CUDA Version: N/A      |

Additionally, inside the Docker container torch.cuda.is_available() returns False, whereas on the host machine it returns True.
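For comparison, a small diagnostic can be run both on the host and inside the container to see exactly what PyTorch reports in each environment. This is a sketch; cuda_report is a hypothetical helper name, not something from the project:

```python
def cuda_report():
    """Summarize what PyTorch can see of CUDA in the current environment."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    return (
        f"available={torch.cuda.is_available()} "      # False inside the broken container
        f"built_for_cuda={torch.version.cuda} "        # CUDA version the wheel was built against
        f"devices={torch.cuda.device_count()}"         # visible GPU count
    )

if __name__ == "__main__":
    print(cuda_report())
```

Running it on the host should show available=True with two devices; inside the container it mirrors the False result above.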

My docker-compose file for the container that uses nvidia is as follows:

    build: .
    restart: always
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 2
              capabilities: [nvidia-compute]
    command: bash -c "python makemigrations && python migrate && gunicorn djangoProject.wsgi:application --preload --bind --timeout 600"
    container_name: web_interface
    depends_on:
      - postgres
    expose:
      - "8042"
    volumes:
      - ./:/code
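For reference, the official Compose GPU guide nests the device reservation under deploy.resources.reservations.devices and uses the capability keyword gpu (the NVIDIA driver also accepts capabilities such as compute and utility). A minimal sketch of a service in that documented shape, with placeholder service name:

```yaml
services:
  web_interface:
    build: .
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 2
              # "gpu" is the keyword shown in the Compose documentation;
              # compare with the "nvidia-compute" value used above.
              capabilities: [gpu]
```

This is only a config sketch for comparison against the file above, not a confirmed fix.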

How can I solve this CUDA Version: N/A issue?

I have checked the following relevant links:

Using GPU from a docker container

Using GPU inside docker container – CUDA Version: N/A and torch.cuda.is_available returns False

…alongside some more, but to no avail.

Source: Docker Questions

Categorised as cuda, docker, docker-compose, nvidia-docker, pytorch

