Docker: make NVIDIA GPUs visible during the docker build process


I want to build a Docker image in which I compile custom PyTorch kernels. For this I need access to the available GPUs during the docker build process. On the host machine everything is set up, including nvidia-container-runtime, nvidia-docker, the NVIDIA drivers, CUDA, etc. The following command shows the Docker runtime information on the host system:

$ docker info|grep -i runtime
 Runtimes: nvidia runc
 Default Runtime: runc

As you can see, the default Docker runtime in my case is runc. I think changing the default runtime from runc to nvidia would solve this problem, as noted here.
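For reference, the change I am not allowed to make would look roughly like this in /etc/docker/daemon.json (followed by a restart of the Docker daemon); the exact runtime path may differ per installation:

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
```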

The proposed solution doesn’t work in my case because:

  • I have no permission to change the default runtime on the system I use
  • I have no permission to make changes to the daemon.json file

Is there a way to get access to the GPUs during the build process in the Dockerfile, in order to compile custom PyTorch kernels for CPU and GPU (in my case DCNv2)?
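One workaround that is often suggested for building PyTorch extensions without a GPU attached, which I have not verified for DCNv2, is to tell PyTorch's extension builder explicitly which CUDA compute capabilities to target via the TORCH_CUDA_ARCH_LIST environment variable, so it does not need to probe a physical GPU at build time. A sketch of what this might look like in the Dockerfile (the architecture list and the FORCE_CUDA flag are assumptions that depend on the project's setup.py):

```dockerfile
# Assumption: the extension's setup.py uses torch.utils.cpp_extension,
# which honors TORCH_CUDA_ARCH_LIST instead of querying an attached GPU.
ENV TORCH_CUDA_ARCH_LIST="6.1;7.0;7.5"
# Assumption: some projects only enable the CUDA build path when FORCE_CUDA is set.
ENV FORCE_CUDA="1"
```

With these set before the compile step, the CUDA kernels would be cross-compiled for the listed architectures even though no GPU is visible during docker build.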

Here is a minimal example of my Dockerfile to reproduce the problem. In this image, DCNv2 is compiled for CPU only, not for GPU.

FROM nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04

RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y tzdata && \
    apt-get install -y --no-install-recommends software-properties-common && \
    add-apt-repository ppa:deadsnakes/ppa && \
    apt update && \
    apt install -y --no-install-recommends python3.6 && \
    apt-get install -y --no-install-recommends

RUN ln -s /usr/bin/python3 /usr/bin/python && \
    ln -s /usr/bin/pip3 /usr/bin/pip

RUN python -m pip install --no-cache-dir --upgrade pip setuptools && \
    python -m pip install --no-cache-dir torch==1.4.0 torchvision==0.5.0

RUN git clone

#Compile DCNv2
RUN bash ./

# clean up
RUN apt-get clean && \
    rm -rf /var/lib/apt/lists/*

#Build: docker build -t my_image .
#Run: docker run -it my_image

A non-optimal solution that worked is the following:

  • Comment out the line RUN bash ./ in the Dockerfile
  • Build the image: docker build -t my_image .
  • Run the image in interactive mode: docker run --gpus all -it my_image
  • Compile DCNv2 manually: root@<container_id>:/DCNv2# ./

Here DCNv2 is compiled for both CPU and GPU, but this does not seem like an ideal solution, because I have to compile DCNv2 every time I start the container.
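If the image itself cannot be changed, one way to at least avoid recompiling on every start is to snapshot the container after a single manual compile with docker commit. A sketch (container and tag names are placeholders):

```shell
# Run the image once with GPU access and compile DCNv2 inside it
docker run --gpus all --name dcnv2_build -it my_image
# ...compile DCNv2 in the container, then exit.

# Persist the container's filesystem (including the compiled kernels) as a new image
docker commit dcnv2_build my_image:compiled

# Subsequent runs reuse the compiled kernels
docker run --gpus all -it my_image:compiled
```

This still requires one manual, GPU-attached compile step, so it does not answer the original question of compiling during docker build.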

Source: Docker Questions
