I’ve been using TensorFlow Docker images to run TensorFlow with GPU support, which works fine. For example, I just run this command with the --gpus all flag: docker run -it --rm --gpus all -v $PWD:/tf/notebooks -p 8888:8888 tensorflow/tensorflow:2.2.2-gpu-py3-jupyter. I would like to use docker-compose instead and was trying to follow the steps from Docker’s enabling GPU ..
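A sketch of the docker-compose equivalent of the docker run command above, assuming docker-compose 1.28+ (earlier releases do not understand the device-reservation keys). The service name is hypothetical.

```yaml
# docker-compose.yml — sketch equivalent of:
#   docker run -it --rm --gpus all -v $PWD:/tf/notebooks -p 8888:8888 \
#     tensorflow/tensorflow:2.2.2-gpu-py3-jupyter
# Assumes docker-compose 1.28+ and the NVIDIA Container Toolkit on the host.
version: "3.8"
services:
  tensorflow:                      # hypothetical service name
    image: tensorflow/tensorflow:2.2.2-gpu-py3-jupyter
    ports:
      - "8888:8888"
    volumes:
      - .:/tf/notebooks
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all           # same effect as --gpus all
              capabilities: [gpu]
```

Started with docker compose up, this should expose every host GPU to the service, mirroring the --gpus all flag.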
I am trying to use a Docker image that needs new drivers (>460) for CUDA 11, but my host machine has CUDA 10. According to the diagram below (https://github.com/NVIDIA/nvidia-docker), Docker uses the drivers present on the host, hence I am getting a driver-mismatch error: required 450+, found driver version 418. I have installed the drivers on my ..
I’m working with my team on a machine with 4 GPUs, and we want to use Docker containers. Is there a way to change a container’s GPU assignment? For example: container #1 has 2 GPUs and container #2 has 2 GPUs; now we want to remove 1 GPU from container #1 and add it to container #2 ..
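Devices cannot be hot-swapped on a running container; the usual approach is to pin specific GPU IDs to each container and recreate the containers when the assignment changes. A sketch with hypothetical service names and device IDs, assuming docker-compose 1.28+:

```yaml
# Sketch: pinning specific GPUs per service (names and IDs are hypothetical).
# To move a GPU between containers, edit device_ids and recreate the services;
# a running container's device set cannot be changed in place.
version: "3.8"
services:
  worker1:
    image: nvidia/cuda:11.0-base
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0"]            # was ["0", "1"] before the move
              capabilities: [gpu]
  worker2:
    image: nvidia/cuda:11.0-base
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["1", "2", "3"]  # picked up GPU 1
              capabilities: [gpu]
```

The equivalent on the plain CLI would be a docker run with --gpus '"device=1,2,3"' for the second container.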
I’m using Docker on Windows 10. The objective is to run ML code on Linux (Ubuntu) in a Docker container, but I need to use a GPU and I don’t know if I can do it, or how. I read this post: Using GPU inside docker container – CUDA Version: N/A and torch.cuda.is_available returns ..
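On Windows 10 this is possible, but it assumes Docker Desktop running on the WSL2 backend and a WSL2-capable NVIDIA driver installed on the Windows host (not inside the container). A quick sanity check, as a command sketch:

```
# Assumes: Docker Desktop with the WSL2 backend enabled, plus an NVIDIA
# driver on the Windows host that supports GPU paravirtualization in WSL2.
# If the GPU is visible inside the Linux container, nvidia-smi prints the
# usual driver/CUDA table instead of an error:
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```

If nvidia-smi fails here, torch.cuda.is_available() inside the container will return False as well, so this isolates driver/backend problems from PyTorch problems.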
I want to ask a semi-theoretical question. I’m using a Docker image that uses nvidia-container-runtime to communicate with the GPU on my machine. The purpose of the Docker image is to run an application that presents images (photos) at a high rate (1 Hz – 10 Hz) (a GUI application). However, as we noticed, there are some ..
Exception message: Launch container failed. Shell error output:
image: nvidia/cuda:10.2-devel-ubuntu18.04 is not trusted. Disable mount volume for untrusted image
image: nvidia/cuda:10.2-devel-ubuntu18.04 is not trusted. Disable cap-add for untrusted image. Docker capability disabled for untrusted image
image: nvidia/cuda:10.2-devel-ubuntu18.04 is not trusted. Disable runtime for untrusted image
Could not find requested runtime in allowed runtimes
Error constructing ..
I am using a Docker container to run my experiment. I have multiple GPUs available and I want to use all of them for the experiment, i.e. utilize all GPUs for one program. To do so, I used tf.distribute.MirroredStrategy as suggested on the TensorFlow site, but it is not working. Here is the ..
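A minimal MirroredStrategy sketch, assuming TensorFlow 2.x; the model and data below are placeholders, not the asker's experiment. Note that inside a container the GPUs must also be exposed to Docker (e.g. --gpus all), otherwise TensorFlow sees none and the strategy silently falls back to a single device.

```python
# Minimal tf.distribute.MirroredStrategy sketch (TensorFlow 2.x assumed).
# The model and data are placeholders for illustration only.
import numpy as np
import tensorflow as tf

# First check what the container actually sees; an empty list here means
# the GPUs were not passed through to the container at all.
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))

strategy = tf.distribute.MirroredStrategy()  # mirrors across all visible GPUs
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Model creation and compilation must happen inside the strategy scope
# so variables are created as mirrored variables on every replica.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Dummy data; the global batch is split across replicas automatically.
x = np.random.rand(256, 10).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```

A common pitfall is building the model outside strategy.scope(): training then runs, but only on one device, which matches the "not working" symptom described above.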
I want to run jobs on an ECS cluster using GPU resources. I created a cluster and a task definition, following https://docs.aws.amazon.com/es_es/AmazonECS/latest/developerguide/ecs-gpu.html. When I run a job I get…
Wed Feb 24 07:25:21 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | ..
I have a docker-compose.yml file with the configuration of my application. One of the services needs access to the GPU. I built the container separately, and with the command docker run -it --gpus=all <my_image> /bin/bash everything started quite simply. But I need to start this service with GPU access using docker-compose. The version of my docker-compose.yml is 3.7 ..
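Older docker-compose releases have no equivalent of --gpus, and the 3.x file formats ignore the runtime key; a common workaround is to drop to the 2.3 file format and select the nvidia runtime explicitly. A sketch, assuming the NVIDIA runtime is registered in /etc/docker/daemon.json; the service name is hypothetical and my_image stands in for the asker's image:

```yaml
# Workaround sketch for docker-compose releases that predate GPU device
# requests. Assumes nvidia-container-runtime is registered as the "nvidia"
# runtime in /etc/docker/daemon.json.
version: "2.3"              # "runtime" is honored in the 2.x file formats
services:
  gpu_service:              # hypothetical service name
    image: my_image         # placeholder for the asker's image
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
```

On docker-compose 1.28+ the cleaner alternative is the deploy.resources.reservations.devices section of the 3.8 format, which maps directly to --gpus.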
Description of the issue
Context information (for bug reports)
Output of docker-compose version:
docker-compose version 1.17.1, build unknown
docker-py version: 2.5.1
CPython version: 2.7.17
OpenSSL version: OpenSSL 1.1.1  11 Sep 2018
Output of docker version:
Client:
 Version: 19.03.6
 API version: 1.40
 Go version: go1.12.17
 Git commit: 369ce74a3c
 Built: Fri Dec 18 12:21:44 2020
 OS/Arch: linux/amd64 ..