Processing Jobs on k8s with multiple containers

  containers, docker, jobs, kubernetes, python

I need to process multiple files in S3 with k8s, so I’ve created one Job on k8s that contains approximately 500 containers, each container with different environment variables. However, the job runs very slowly and has failed multiple times.

I’m using the Kubernetes Python API to submit the job, like this:

from kubernetes import client, config
import s3fs

config.load_kube_config()
batch_v1 = client.BatchV1Api()
fs = s3fs.S3FileSystem()      # S3 listing client (assumed; any client returning dicts with 'Key' works)
path = "bucket/prefix"        # S3 prefix to scan (placeholder value)


def read_path(path):
    # Collect object keys under the S3 prefix, skipping _SUCCESS markers
    # and the referen/ prefix.
    files = []
    suffixes = ('_SUCCESS', 'referen/')

    files_path = fs.listdir(f"s3://{path}")
    if files_path is not None:
        for file in files_path:
            if not file['Key'].endswith(suffixes):
                files.append(file['Key'])
    return files


def from_containers(path):
    # Build one container per S3 file, each with its own env vars.
    containers = []
    for num, file in enumerate(read_path(path), start=1):
        containers.append(client.V1Container(name=f'hcm1-{num}', image='image-python',
                                             command=['python3', 'model.py'],
                                             env=[client.V1EnvVar(name='model', value='hcm1'),
                                                  client.V1EnvVar(name='referen', value=f"s3://{file}")]))
    return containers


template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(name="hcm1"),
    spec=client.V1PodSpec(restart_policy="OnFailure",
                          containers=from_containers(path),
                          image_pull_secrets=[client.V1LocalObjectReference(name="secret")]))

spec = client.V1JobSpec(template=template, backoff_limit=20)

job = client.V1Job(api_version="batch/v1", kind="Job", metadata=client.V1ObjectMeta(name="hcm1"), spec=spec)

api_response = batch_v1.create_namespaced_job(body=job, namespace="default")
print("Job created. status='%s'" % str(api_response.status))

I’ve tried some settings, like completions=10 and parallelism=2, but that only multiplies the number of executions: each completed Pod runs all 500 containers, so 10 * 500 = 5000 container runs.
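For reference, here is a minimal sketch of how those fields would be set on the Job spec (note that the JobSpec field is parallelism, and completions counts Pod completions, not containers):

# Sketch only: completions/parallelism count Pods, not containers.
# With a Pod template that holds 500 containers, completions=10 means
# 10 Pods x 500 containers = 5000 container executions in total.
spec = client.V1JobSpec(template=template,
                        completions=10,   # Job finishes after 10 Pods succeed
                        parallelism=2,    # at most 2 Pods run at the same time
                        backoff_limit=20)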

What’s the best way to create a job on k8s with multiple containers?

  1. 1 Job -> 1 Pod -> 500 containers
  2. 1 Job -> 500 Pods (each Pod with 1 container) — see the sketch after this list.
  3. Is there another way?
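For illustration, here is a minimal sketch of option 2 using an Indexed Job (Kubernetes 1.21+), where each Pod runs a single container and selects its file via the JOB_COMPLETION_INDEX environment variable that Kubernetes injects into Indexed Job Pods. The image name, the way the file list is passed, and the reuse of read_path() are assumptions, not part of the original setup:

# Sketch of option 2: one Pod (and one container) per file, driven by an
# Indexed Job. Assumes model.py reads JOB_COMPLETION_INDEX and picks
# files[JOB_COMPLETION_INDEX] out of the 'files' env var.
files = read_path(path)

container = client.V1Container(
    name='hcm1',
    image='image-python',
    command=['python3', 'model.py'],
    env=[client.V1EnvVar(name='model', value='hcm1'),
         client.V1EnvVar(name='files', value=','.join(files))])

template = client.V1PodTemplateSpec(
    spec=client.V1PodSpec(restart_policy="OnFailure",
                          containers=[container],
                          image_pull_secrets=[client.V1LocalObjectReference(name="secret")]))

spec = client.V1JobSpec(template=template,
                        completion_mode="Indexed",   # Pods get indexes 0..len(files)-1
                        completions=len(files),
                        parallelism=10,              # how many Pods run at once
                        backoff_limit=20)

job = client.V1Job(api_version="batch/v1", kind="Job",
                   metadata=client.V1ObjectMeta(name="hcm1-indexed"),
                   spec=spec)
batch_v1.create_namespaced_job(body=job, namespace="default")

Packing all object keys into a single environment variable is only for illustration; with many or long keys it would be better to have each Pod list the bucket itself and pick the entry matching its completion index.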

Source: Docker Questions
