I have a Dockerfile that does a pip install of a package from an AWS CodeArtifact repository. The install requires an auth token, so my current approach is to generate the dynamic/secret repo URL in a build script and pass it into Docker as a build arg, which leads to lines like this in my Dockerfile:
ARG CORE_REPO_URL
ARG CORE_VERSION
RUN pip install -i $CORE_REPO_URL mylib_core==$CORE_VERSION
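For context, my build script looks roughly like this sketch (domain, repo, and account values are placeholders; the `aws codeartifact` calls are commented out since they need real credentials):

```shell
#!/usr/bin/env sh
# Compose a pip index URL by embedding the auth token into the
# CodeArtifact repository endpoint (user "aws", password = token).
make_index_url() {
  token="$1"; endpoint="$2"
  echo "$endpoint" | sed "s|^https://|https://aws:${token}@|"
}

# Real usage (requires AWS credentials configured on the host):
# TOKEN=$(aws codeartifact get-authorization-token \
#   --domain mydomain --domain-owner 123456789012 \
#   --query authorizationToken --output text)
# ENDPOINT=$(aws codeartifact get-repository-endpoint \
#   --domain mydomain --repository myrepo --format pypi \
#   --query repositoryEndpoint --output text)
# CORE_REPO_URL="$(make_index_url "$TOKEN" "$ENDPOINT")simple/"
# docker build --build-arg CORE_REPO_URL="$CORE_REPO_URL" \
#   --build-arg CORE_VERSION=1.2.3 .
```

Since the token expires, `CORE_REPO_URL` is different on every build.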
Because ARG values used in a RUN command are part of that layer's cache key, and the tokenized CORE_REPO_URL changes on every build, this layer is never cached, so this step gets rebuilt every time even if the library version did not change. Is there a better way to do this, such that the layer cache would be used unless the library version actually changes?
Maybe I should install the AWS CLI in the image so the dynamic repo URL can be generated there in an earlier step (using the same command every time, so it wouldn't require an ARG and would hopefully keep the layer cached)? One downside of this is having to put AWS credentials in the image. I could maybe use Docker build secrets to avoid that, if that's the only solution.
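One shape this could take, I think, is a BuildKit secret mount: the token is exposed to a single RUN step via a file and never stored in a layer or in the instruction text, so the layer can stay cached. A sketch, assuming BuildKit is enabled and the endpoint/domain names are placeholders:

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.11-slim

ARG CORE_VERSION

# The token is mounted only for this RUN step and is not part of the
# cache key, so the layer is reused until CORE_VERSION changes
# (which is exactly when we want a rebuild).
RUN --mount=type=secret,id=codeartifact_token \
    pip install \
      -i "https://aws:$(cat /run/secrets/codeartifact_token)@mydomain-123456789012.d.codeartifact.us-east-1.amazonaws.com/pypi/myrepo/simple/" \
      mylib_core==$CORE_VERSION
```

built with something like `docker build --secret id=codeartifact_token,src=token.txt --build-arg CORE_VERSION=1.2.3 .`. Note the flip side: since the secret's value is not in the cache key, a refreshed token alone will not invalidate the cached layer.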
Source: Docker Questions