Building a custom Docker container image on top of the TensorFlow GPU Docker image

  dockerfile, nvidia-docker, python-3.x, tensorflow

I am trying to build a custom Docker image to serve our image classification model.

I am using Ubuntu 18.04 on Google Cloud with an NVIDIA T4 GPU. On the same machine, TensorFlow-GPU 1.9.0 works as expected outside Docker. When I build the Docker image with the command:

sudo nvidia-docker build -t name .
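(For reference, a minimal sketch of how one might confirm that the GPU is visible inside a container started from the same base image; the second form assumes Docker 19.03+ with the NVIDIA Container Toolkit installed:)

```shell
# If nvidia-smi succeeds inside the container, the NVIDIA devices and driver
# are mounted; if it fails, TensorFlow will fall back to the CPU as in the
# log below.
nvidia-docker run --rm tensorflow/tensorflow:1.9.0-gpu-py3 nvidia-smi

# Equivalent check with Docker's native GPU support (19.03+):
docker run --rm --gpus all tensorflow/tensorflow:1.9.0-gpu-py3 nvidia-smi
```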

I see the following error messages: the model is loaded on the CPU instead of the GPU, and inference runs on the CPU.

2021-01-05 20:46:59.617414: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2021-01-05 20:46:59.618426: E tensorflow/stream_executor/cuda/cuda_driver.cc:397] failed call to cuInit: CUresult(-1)
2021-01-05 20:46:59.618499: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:152] no NVIDIA GPU device is present: /dev/nvidia0 does not exist

Dockerfile:

FROM tensorflow/tensorflow:1.9.0-gpu-py3 as base
ENV LD_LIBRARY_PATH /usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/lib64:$LD_LIBRARY_PATH
ADD . /app
WORKDIR /app
RUN apt-get -yqq update
RUN apt-get install -yqq libsm6 libxext6 libxrender-dev
RUN pip install -r requirements.txt
RUN python3 model_inference.py
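
One variant worth noting: `RUN` executes at build time, when `docker build` typically does not expose the GPU devices, while `CMD` executes when the container starts. A minimal sketch of the same Dockerfile with inference deferred to container start (assuming `model_inference.py` is the intended entrypoint) would be:

```dockerfile
FROM tensorflow/tensorflow:1.9.0-gpu-py3 as base
ENV LD_LIBRARY_PATH /usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/lib64:$LD_LIBRARY_PATH
ADD . /app
WORKDIR /app
RUN apt-get -yqq update && \
    apt-get install -yqq libsm6 libxext6 libxrender-dev
RUN pip install -r requirements.txt
# Run inference when the container starts (GPU devices mounted by the
# NVIDIA runtime), not at image build time
CMD ["python3", "model_inference.py"]
```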

Do I need to add anything more to my Dockerfile?

Source: Dockerfile Questions
