Why are dependencies installed via an ENTRYPOINT and tini?

  dask, docker, tini

I have a question regarding the implementation of a Dockerfile in dask-docker.

FROM continuumio/miniconda3:4.8.2

RUN conda install --yes \
    -c conda-forge \
    python==3.8 \
    [...] \
    && rm -rf /opt/conda/pkgs

COPY prepare.sh /usr/bin/prepare.sh

RUN mkdir /opt/app

ENTRYPOINT ["tini", "-g", "--", "/usr/bin/prepare.sh"]

prepare.sh just facilitates the installation of additional packages via conda, pip, and apt.

There are two things I don’t get about that:

  1. Why not just place those instructions in the Dockerfile? Possibly indirectly (modularized) by COPYing dedicated files (requirements.txt, environment.yaml, …)
  2. Why execute this via tini? At the end it does exec "$@", where one can start a scheduler or worker – that’s more what I associate with tini. (See the sketch after this list.)
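For context, an entrypoint script of this style typically looks roughly like the sketch below. This is an illustrative assumption, not the actual prepare.sh from dask-docker; the EXTRA_*_PACKAGES variable names are placeholders chosen for the example.

#!/usr/bin/env bash
set -e

# Optionally install extra packages requested at container start time.
# (Variable names are illustrative assumptions, not verified against dask-docker.)
if [ -n "$EXTRA_APT_PACKAGES" ]; then
    apt-get update -y && apt-get install -y $EXTRA_APT_PACKAGES
fi

if [ -n "$EXTRA_CONDA_PACKAGES" ]; then
    conda install -y $EXTRA_CONDA_PACKAGES
fi

if [ -n "$EXTRA_PIP_PACKAGES" ]; then
    pip install $EXTRA_PIP_PACKAGES
fi

# Hand control to whatever command was passed to `docker run`
# (e.g. dask-scheduler or dask-worker). tini stays as PID 1,
# forwards signals and reaps zombie processes.
exec "$@"

If none of these variables are set, the script falls straight through to exec "$@", so the startup-time installation cost is only paid when extra packages are actually requested.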

This way, every time you run a container from the built image you have to repeat the installation process!?

Maybe I’m overthinking it, but it seems rather unusual – then again, maybe it’s a Dockerfile pattern with good reasons behind it.


Optional bonus questions for Dask insiders:

  • Why copy prepare.sh to /usr/bin (instead of, e.g., /tmp)?
  • What purpose does the created directory /opt/app serve?

Source: Docker Questions

One Reply to “Why are dependencies installed via an ENTRYPOINT and tini?”

  • It is a bad pattern. It is indeed better to do as much as possible at image build time, including package installation, rather than at runtime, which is slow and can also fail. A build-time variant is sketched below.
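For illustration only, the build-time approach the reply recommends could look roughly like this; requirements.txt is the dependency file mentioned in the question, and installing tini from conda-forge is an assumption about how the base image obtains it.

FROM continuumio/miniconda3:4.8.2

# Install everything at build time so the resulting image is self-contained.
# Assumes dask itself ends up installed via requirements.txt.
COPY requirements.txt /tmp/requirements.txt
RUN conda install --yes -c conda-forge python==3.8 tini \
    && pip install -r /tmp/requirements.txt \
    && rm -rf /opt/conda/pkgs

# tini can still run as PID 1 for signal handling, but nothing is
# installed at container start anymore.
ENTRYPOINT ["tini", "-g", "--"]
CMD ["dask-scheduler"]

Running a container from this image then starts the scheduler (or whatever command you pass) immediately, without repeating any installation.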
