I am trying to get TensorFlow working with my AMD GPU. I have been searching and trying for days, and finally I came across tensorflow-rocm, which would be great (if it works :/), but sadly I have followed many guides and many instructions with no result. The last tutorial I tried was this: https://github.com/RadeonOpenCompute/ROCm-docker/blob/master/quick-start.md All the ..
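For reference, the ROCm quick-start boils down to running a prebuilt image with the kernel devices ROCm needs passed through; a sketch of the usual invocation (the image tag is an assumption — check Docker Hub for one matching your ROCm version):

```shell
# Pull a prebuilt ROCm TensorFlow image (tag is an assumption)
docker pull rocm/tensorflow:latest

# ROCm containers need the KFD and DRI devices from the host,
# plus membership in the video group to access the GPU
docker run -it \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --security-opt seccomp=unconfined \
  rocm/tensorflow:latest
```

If the GPU is not visible inside the container, checking `/dev/kfd` permissions on the host is usually the first step.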
I’m currently working on my new project. My boss asked me to make an object detection model. Okay, object detection is not a problem. But the problem is, I need to make a model which can take more than two video inputs simultaneously. Yeah, this is a kind of AI CCTV. I hope CCTVs are ..
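A common pattern for feeding several camera streams into one detector is a reader thread per stream pushing frames into a shared queue; a minimal sketch with placeholder frame sources standing in for real capture devices (in practice each reader would wrap something like an RTSP capture, and `detect` would call the model):

```python
import queue
import threading

def read_stream(stream_id, frames, out_q):
    # Placeholder reader: a real system would pull frames from a
    # capture device here instead of iterating over a list.
    for frame in frames:
        out_q.put((stream_id, frame))

def detect(frame):
    # Placeholder for the actual object-detection model call.
    return f"detections for {frame}"

def run(streams):
    q = queue.Queue()
    threads = [
        threading.Thread(target=read_stream, args=(sid, frames, q))
        for sid, frames in streams.items()
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    results = []
    while not q.empty():
        sid, frame = q.get()
        results.append((sid, detect(frame)))
    return results

# Two simulated streams with a total of three frames
results = run({"cam0": ["f0", "f1"], "cam1": ["f2"]})
print(len(results))  # 3
```

The queue decouples capture rate from inference rate, which matters once real cameras produce frames faster than the model can consume them.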
I am trying to follow the steps of the tutorial An introduction to deep learning on remote sensing images (Tutorial) using the OTBTF library. My normalisation is done using two radar images (VV and VH polarisations). I run otbtf_PatchesExtraction to extract patches of size 16×16. I then expect to have as input of my model a ..
So, I’m trying to run urw7rs/spiralpp, which is a deep learning project. First, I started their Docker image with this command: docker run -it -p 8888:8888 urw7rs/spiralpp:latest /bin/bash And here is the output of ls from inside the Docker container: (spiralpp) [email protected]:/src/spiralpp# ls CODE_OF_CONDUCT.md LICENSE dataset.lock libtorchbeast.egg-info pyproject.toml setup.py third_party CelebAMask-HQ README.md demo.ipynb nest requirements.txt ..
I am trying to build a Docker image to train a deep neural network. The server I run Docker on has an Nvidia GPU. First, I pulled the image from Docker Hub to test that the GPU is enabled. This is the script I was using to find the devices: import tensorflow as tf print("Number of CPUs ..
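Before debugging the image itself, the usual sanity checks are to confirm the host runtime sees the GPU and then that TensorFlow inside a GPU container sees it too; a sketch of both (image tags are assumptions — pick ones matching your CUDA version):

```shell
# Step 1: this should print the nvidia-smi GPU table if the
# NVIDIA container toolkit is installed and working on the host
docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi

# Step 2: check that TensorFlow inside a GPU image lists the device
docker run --rm --gpus all tensorflow/tensorflow:latest-gpu \
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```

If step 1 fails, the problem is the host's driver or container toolkit rather than the image being built.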
I have a Python machine learning script for which I need special hardware (an 8-GPU machine or Tensor Processing Units). Therefore I run the code on a cloud server. Using containers, as in Docker, looks like a very convenient way to execute the deep learning PyTorch code. But there will likely be errors. How ..
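One way to keep the edit/run loop fast on a remote box is to mount the project into an interactive container, so code can be edited on the host and re-run without rebuilding the image; a sketch (the image tag and `train.py` entry point are assumptions):

```shell
# Mount the current project directory into the container and run
# the script directly; errors surface in the terminal and edits on
# the host are visible inside the container immediately
docker run -it --rm --gpus all \
  -v "$(pwd)":/workspace -w /workspace \
  pytorch/pytorch:latest \
  python train.py
```

Dropping `python train.py` in favor of `bash` gives an interactive shell in the same environment, which is convenient for stepping through failures with a debugger.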
I tried building a detectron2 image with Docker, in order to use it with AWS SageMaker. The Dockerfile looks like this: ARG REGION="eu-central-1" FROM 763104351884.dkr.ecr.$REGION.amazonaws.com/pytorch-training:1.6.0-gpu-py36-cu101-ubuntu16.04 RUN pip install --upgrade torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html ############# Detectron2 section ############## RUN pip install --no-cache-dir pycocotools~=2.0.0 --no-cache-dir https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.6/detectron2-0.4%2Bcu101-cp36-cp36m-linux_x86_64.whl ENV FORCE_CUDA="1" # Build D2 only for Volta architecture – V100 chips ..
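The truncated comment about building only for Volta usually corresponds to setting `TORCH_CUDA_ARCH_LIST` before detectron2's CUDA kernels are compiled; a sketch of that step (whether this is exactly what the original Dockerfile did is an assumption — "Volta" here means compute capability 7.0, i.e. V100):

```dockerfile
# Restrict the CUDA kernel build to Volta (V100, compute capability 7.0)
# so compilation is faster and the wheel is smaller
ENV TORCH_CUDA_ARCH_LIST="Volta"
ENV FORCE_CUDA="1"
```

Note this only matters when detectron2 is built from source; the prebuilt cu101 wheel in the Dockerfile above ships kernels for its own fixed set of architectures.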
I am using repo2docker to build a Docker image of a GitHub repo. The resulting image needs to include CUDA and cuDNN so that containers can use GPUs on a host if they are available. As I understand it, I basically need to use nvidia/cuda as a base image. However, there is an open bug in repo2docker ..
Has anyone encountered issues with Docker 3.2.2 on macOS Big Sur and found out how to fix them? Symptoms: images that build and run successfully on Catalina refuse to build on Big Sur with various errors. The issue seems to be related to Python deep learning libraries that wrap C++ implementations. For example, I had ..
I have a chatbot built using PyTorch. Whenever I make any adjustments, say adding more data to the intent.json file, I have to retrain first, then stop the Flask server and restart it in order for the saved model file to be loaded back into memory. How do I ensure I can retrain the model ..
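One way to avoid restarting Flask is to keep the model behind a small thread-safe holder that a reload endpoint can refresh after retraining; a minimal sketch with a placeholder loader standing in for `torch.load` on the saved file (the `ModelStore` name and loader are illustrative, not from the question):

```python
import threading

class ModelStore:
    """Holds the current model and lets a retraining job swap in a new
    one atomically, so request handlers never see a half-loaded model."""

    def __init__(self, loader):
        self._loader = loader          # e.g. a function wrapping torch.load(...)
        self._lock = threading.Lock()
        self._model = loader()

    def reload(self):
        new_model = self._loader()     # load outside the lock (slow part)
        with self._lock:
            self._model = new_model    # the swap itself is quick

    def predict(self, x):
        with self._lock:
            model = self._model
        return model(x)

# Placeholder loader: each call "loads" the next model version,
# standing in for re-reading the retrained weights from disk
versions = iter(["v1", "v2"])
store = ModelStore(lambda: (lambda x, v=next(versions): f"{v}:{x}"))

print(store.predict("hi"))  # v1:hi
store.reload()              # call this from a /reload route after retraining
print(store.predict("hi"))  # v2:hi
```

In a Flask app, `store.predict` goes in the chat route and `store.reload` in a small admin route triggered once retraining writes the new model file, so the server process never has to restart.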