How to specify run arguments for a docker container running on kubernetes

  docker, kubernetes

I am stuck trying to run a docker container as part of a kubernetes job while specifying runtime arguments in the job template.

My Dockerfile specifies an entrypoint and no CMD directive:

ENTRYPOINT ["python", "script.py"]
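For context, the surrounding Dockerfile looks roughly like this (the base image and file paths here are assumptions, not my actual file):

```dockerfile
# Hypothetical minimal Dockerfile: an ENTRYPOINT and no CMD,
# so anything passed at run time is appended as arguments to the entrypoint.
FROM python:3
WORKDIR /app
COPY script.py .
ENTRYPOINT ["python", "script.py"]
```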

From what I understand, this means that when the image is run with arguments, the container uses the entrypoint specified in the Dockerfile and passes those arguments to it. I can confirm that this actually works, because running the container with plain docker does the trick:

docker run --rm image -e foo -b bar

In my case this starts script.py, which uses argparse to parse named arguments, with the intended arguments.
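For reference, script.py is shaped roughly like this (the flag destinations are assumptions; only the -e and -b flag names come from the command above):

```python
# Hypothetical sketch of script.py: named arguments parsed with argparse.
# The -e/-b flags match the docker run example; their meanings are assumed.
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="pipeline entrypoint")
    parser.add_argument("-e", dest="e")  # assumed: some environment value
    parser.add_argument("-b", dest="b")  # assumed: some bucket/branch value
    return parser.parse_args(argv)

if __name__ == "__main__":
    # Each runtime argument arrives as one element of sys.argv.
    print(parse_args())
```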

The problem arises when I use a kubernetes job to do the same:

apiVersion: batch/v1
kind: Job
metadata:
  name: pipeline
spec:
  template:
    spec:
      containers:
      - name: pipeline
        image: test
        args: ["-e", "foo", "-b", "bar"]

In the pod that gets deployed, the correct entrypoint is run, but the specified arguments vanish. I also tried specifying the arguments like this:

args: ["-e foo", "-b bar"]

But this didn’t help either. I don’t know why this is not working, because the documentation clearly states: "If you supply only args for a Container, the default Entrypoint defined in the Docker image is run with the args that you supplied." The default entrypoint is running, that is correct, but the arguments are lost between kubernetes and docker.
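For reference, my understanding of the command/args interaction the documentation describes, sketched as a container spec fragment (same image as above):

```yaml
# command overrides the image's ENTRYPOINT; args override its CMD.
# With only args set, the image's ENTRYPOINT should run with these args,
# each list element arriving as one separate argv token.
containers:
- name: pipeline
  image: test
  # command: ["python", "script.py"]   # would replace the ENTRYPOINT
  args: ["-e", "foo", "-b", "bar"]     # should be appended to the ENTRYPOINT
```

Note that in the variant with `args: ["-e foo", "-b bar"]`, each string would be passed as a single argv token including the embedded space, which is not what an argument parser expecting separate flag and value tokens wants anyway.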

Does somebody know what I am doing wrong?

Source: Docker Questions
