I’m working with my team on a machine with 4 GPUs, and we want to use Docker containers. Is it possible to update the GPUs assigned to a container? For example: container #1 has 2 GPUs and container #2 has 2 GPUs, and now we want to remove 1 GPU from container #1 and add it to container #2 ..
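Docker does not let you change the GPU assignment of a running container; the usual approach is to stop and recreate the containers with new `--gpus` device lists. A minimal sketch, assuming the NVIDIA Container Toolkit is installed (container and image names here are hypothetical):

```shell
# GPU assignment is fixed at container creation, so recreate both
# containers with the new device split (e.g. move GPU 1 from #1 to #2).
docker stop worker1 worker2
docker rm worker1 worker2

# Container #1 keeps GPU 0; container #2 now gets GPUs 1, 2 and 3.
docker run -d --name worker1 --gpus '"device=0"'     my-image
docker run -d --name worker2 --gpus '"device=1,2,3"' my-image
```

Note the nested quoting around `device=...`: the outer single quotes keep the shell from stripping the inner double quotes, which `--gpus` needs when listing specific devices.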
I’m using Docker on Windows 10. The objective is to run ML code on Linux (Ubuntu) in a Docker container, but I need to use a GPU and I don’t know whether that’s possible or how to do it. I read this post: Using GPU inside docker container – CUDA Version: N/A and torch.cuda.is_available returns ..
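On Windows 10 this generally requires Docker Desktop with the WSL 2 backend and an NVIDIA driver that supports GPU passthrough to WSL. A quick sanity check, assuming that setup is in place:

```shell
# If the driver and WSL 2 backend are configured correctly, this should
# print the host GPU table from inside the container.
docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
```

If `nvidia-smi` fails here, the container setup is fine but the host-side driver/WSL configuration is the piece to fix.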
I want to ask a semi-theoretical question. I’m using a Docker image which utilizes nvidia-container-runtime to communicate with the GPU on my machine. The image’s purpose is to run a GUI application which involves presenting images (photos) at a high rate (1 Hz – 10 Hz). However, as we noticed, there are some ..
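For context, one common way to run a GUI application from a GPU container on a Linux host is to share the host’s X11 socket with the container. A sketch (the image name is hypothetical, and `xhost +local:docker` loosens X access control, so it is for local testing only):

```shell
# Allow local Docker containers to talk to the host X server (testing only).
xhost +local:docker

# Pass the display and the X11 socket into the GPU container.
docker run --rm --gpus all \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  my-gui-image
```

Rendering then still goes through the host X server, which is one place where frame-rate overhead relative to a native run can show up.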
Exception message: Launch container failed. Shell error output:
image: nvidia/cuda:10.2-devel-ubuntu18.04 is not trusted. Disable mount volume for untrusted image
image: nvidia/cuda:10.2-devel-ubuntu18.04 is not trusted. Disable cap-add for untrusted image. Docker capability disabled for untrusted image
image: nvidia/cuda:10.2-devel-ubuntu18.04 is not trusted. Disable runtime for untrusted image
Could not find requested runtime in allowed runtimes. Error constructing ..
I am using a Docker container to run my experiment. I have multiple GPUs available and I want to use all of them for one program. To do so, I used tf.distribute.MirroredStrategy as suggested on the TensorFlow site, but it is not working. Here is the ..
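For reference, a minimal MirroredStrategy sketch (the model and shapes are placeholders): variables created under `strategy.scope()` are mirrored across all GPUs the container can see, so if only one replica is reported, the container itself is likely not seeing the GPUs (e.g. it was started without `--gpus all`).

```python
import tensorflow as tf

# MirroredStrategy picks up all visible GPUs; with none visible it
# falls back to a single CPU replica.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Any model built here has its variables mirrored on every replica.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# model.fit(...) then splits each batch across the replicas.
```

Checking `strategy.num_replicas_in_sync` first is a quick way to separate "distribution is misconfigured" from "the GPUs are not visible at all".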
I want to run jobs on an ECS cluster using GPU resources. I created a cluster and a task definition, following https://docs.aws.amazon.com/es_es/AmazonECS/latest/developerguide/ecs-gpu.html When I run a job I get… Wed Feb 24 07:25:21 2021 | NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 | GPU Name Persistence-M | Bus-Id Disp.A | ..
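For comparison, the linked AWS guide has the task definition request GPUs through `resourceRequirements` in the container definition. A minimal fragment along those lines (image and name are illustrative):

```json
{
  "containerDefinitions": [
    {
      "name": "gpu-job",
      "image": "nvidia/cuda:11.0.3-base-ubuntu20.04",
      "memory": 2048,
      "command": ["nvidia-smi"],
      "resourceRequirements": [
        { "type": "GPU", "value": "1" }
      ]
    }
  ]
}
```

The `value` is the number of physical GPUs reserved for the container, and ECS will only place the task on a container instance that has that many available.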
I have a docker-compose.yml file with the configuration of my application. One of the services needs access to the GPU. I built the container separately, and using the command docker run -it --gpus=all <my_image> /bin/bash everything quite simply started. But I need to start this service with GPU access using docker-compose. The version of my docker-compose.yml is 3.7 ..
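There is no direct `--gpus` equivalent in the 3.x Compose file format; newer Compose versions instead expose GPUs through `deploy.resources.reservations.devices`. A sketch, assuming a Compose version recent enough to support device reservations (service and image names are placeholders):

```yaml
services:
  ml-service:
    image: my_image
    deploy:
      resources:
        reservations:
          devices:
            # Equivalent in spirit to `docker run --gpus all`.
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

With an older docker-compose that predates this, the common workaround was setting `runtime: nvidia` on the service instead.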
Description of the issue. Context information (for bug reports):
Output of docker-compose version: docker-compose version 1.17.1, build unknown; docker-py version: 2.5.1; CPython version: 2.7.17; OpenSSL version: OpenSSL 1.1.1 11 Sep 2018.
Output of docker version: Client: Version: 19.03.6; API version: 1.40; Go version: go1.12.17; Git commit: 369ce74a3c; Built: Fri Dec 18 12:21:44 2020; OS/Arch: linux/amd64 ..
From what I understand, creating TensorFlow containers with GPU support using Docker for Windows is not supported. It is supported if you use the Windows Subsystem for Linux (WSL), but GPU support on WSL is not available in the retail build of Windows 10, so you would need to get the latest Windows 10 Insider Preview ..
So I have approximately 5000 images (27 GB) on my local computer that I am using to train a 5-class CNN. I have just been training it on my CPU, which is not very practical as I want to experiment with more models. Note: I am using Python, TensorFlow, and a MacBook Pro. I ..