Is it possible to select a specific docker image based on CUDA drivers installed in the OS?

  docker, dockerfile, gpu, pytorch

Suppose the following scenario:

I’ve trained an image classifier for a specific task. I’ve then built a simple web app with this model embedded, and I plan to ship it to a client as a Docker application that should make use of GPUs.

To do this I’ve written a Dockerfile that starts from a base image, like:
FROM pytorch/pytorch:?.?.??-cuda??.?-cudnn?-devel
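
For context, a filled-in version might look like the sketch below. The exact tag (1.13.1-cuda11.6-cudnn8-devel) and the application files (requirements.txt, app.py, model.pt) are just illustrative assumptions, not the combination I actually intend to ship:

# Illustrative sketch only: the tag is one arbitrary PyTorch/CUDA/cuDNN combination,
# and requirements.txt / app.py / model.pt are hypothetical application files.
FROM pytorch/pytorch:1.13.1-cuda11.6-cudnn8-devel

WORKDIR /app

# Install the web app’s dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and the trained model weights
COPY app.py model.pt ./

# Start the web app
CMD ["python", "app.py"]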

Questions

The main question is: how can I choose the correct base image?

I mean, to use the GPUs there has to be a correct correspondence between the PyTorch version, the CUDA version (and driver), and the cuDNN library. But how can I pick the right combination without having any clue about the client environment where the application will be deployed?

Or is there a way to select the right image at docker build time?

Or, finally, should any image work, since the CUDA libraries inside the Docker container are independent of the host OS drivers? (I’m not sure about this.)
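
For reference, this is the kind of check I would run inside the container to see whether the PyTorch build in the image can actually reach the host’s GPU. torch.cuda.is_available(), torch.version.cuda and torch.backends.cudnn.version() are standard PyTorch calls; the script itself is just an illustrative sketch:

# check_gpu.py -- illustrative sketch, not a definitive test.
# Run inside the container (e.g. started with `docker run --gpus all ...`)
# to see whether the PyTorch build in the image can talk to the host GPU driver.
import torch

print("PyTorch version:        ", torch.__version__)
print("Built against CUDA:     ", torch.version.cuda)            # CUDA toolkit the wheel was built with
print("cuDNN version:          ", torch.backends.cudnn.version())
print("GPU visible to PyTorch: ", torch.cuda.is_available())

if torch.cuda.is_available():
    print("Device name:            ", torch.cuda.get_device_name(0))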

