I am trying to create a deep learning container that includes a list of packages and supports both Python 2 and Python 3. I created a Dockerfile that pulls from `nvidia/cuda:10.1-cudnn7-devel-centos7`, installs Miniconda, and then creates Python 2 and 3 environments like so:
```dockerfile
RUN conda env create -f py2_env.yaml
RUN conda env create -f py3_env.yaml
```
The environment files look like so:
```yaml
name: py3
channels:
  - conda-forge
  - defaults
dependencies:
  - _libgcc_mutex=0.1=main
  - _tflow_select=2.1.0=gpu
  - lots_of_other_packages
  - pip:
    - affine==2.3.0
    - more_packages_here
prefix: /opt/conda/envs/py3
```
The problem is that when I start the container, I land in the base environment, which doesn't have all my packages. In other words, I start in a Python 3 environment, but not the one I built. I have to run

```
conda activate py3
```

to get them, and I would like to remove this step. I would like to either install all my packages directly into the base environment, or start with the py3 environment already activated. I tried this by adding this command to my Dockerfile:

```dockerfile
RUN /bin/bash -c "conda init bash && source /root/.bashrc && conda activate py3"
```

but the container still starts in the base environment.
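One thing I am considering, but have not verified, is to skip activation entirely and put the environment's `bin` directory first on `PATH` (this sketch assumes the env prefix `/opt/conda/envs/py3` from the YAML above):

```dockerfile
# Untested sketch: make the py3 environment's executables (python, pip,
# and the installed packages' entry points) resolve first, without
# needing `conda activate` in an interactive shell.
ENV PATH=/opt/conda/envs/py3/bin:$PATH
```

Would this be a reasonable substitute for activating the environment, or does `conda activate` set up other things this would miss?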