I'm using the Python docker SDK inside of an
aiohttp based application. My goal is to start a container when a request comes in, wait until it finishes all its work, and return a response. I want to handle many such requests at the same time.
I run the container in detached mode, so the problem isn't in starting the container itself. The problem is how to detect when the container has finished its work.
The
Container instance from the
docker package has a
wait() method, but it is blocking, so I can't use it "just like that". So I came up with something else: I start a container, create a new asyncio task, and (in that task) I check whether the container has changed its state. It looks more or less like this:
import asyncio
import time

import docker
import requests


async def wait_for_finish(event, container):
    # Poll the container status until it leaves the
    # 'running'/'created' states, or until the 10-minute timeout.
    try:
        container.reload()
        timeout = time.time() + 600
        while time.time() < timeout:
            if container.status == 'running' or container.status == 'created':
                await asyncio.sleep(0.5)
                try:
                    container.reload()
                except requests.exceptions.HTTPError:
                    # The container was removed in the meantime.
                    break
            else:
                break
    finally:
        event.set()


docker_client = docker.from_env()
container = docker_client.containers.run(**kwargs)  # kwargs include detach=True
event = asyncio.Event()
asyncio.create_task(wait_for_finish(event, container))
await event.wait()
event.clear()
It is simple, but it works fine. My question is: is such a "dummy" way of waiting for the container to finish (status different than
'created') a good method? To be honest, I don't like this solution for a few reasons, and I feel something is wrong with this kind of "waiting", but I don't know what exactly…
I'm aware of solutions such as aiodocker, the approach with run_in_executor/threading, or even replacing the
docker SDK with
subprocess, but for now I would like to use, if it is possible, the
docker SDK without, for example, threading.
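For comparison, here is a minimal sketch of the run_in_executor alternative mentioned above. It uses a time.sleep stand-in for the blocking container.wait() call (the real call would replace blocking_wait), just to show that offloading the blocking call to a thread pool keeps the event loop free to handle other requests concurrently:

```python
import asyncio
import time


def blocking_wait(seconds):
    # Stand-in for the blocking container.wait() call;
    # in the real app this would be `container.wait()`.
    time.sleep(seconds)
    return {"StatusCode": 0}


async def handle_request(loop):
    # Offload the blocking call to the default thread pool so the
    # event loop can keep serving other requests in the meantime.
    return await loop.run_in_executor(None, blocking_wait, 0.2)


async def main():
    loop = asyncio.get_running_loop()
    start = time.monotonic()
    # Two "requests" handled concurrently: total time is about 0.2 s,
    # not 0.4 s, because the blocking waits run in parallel threads.
    results = await asyncio.gather(handle_request(loop), handle_request(loop))
    elapsed = time.monotonic() - start
    return results, elapsed


results, elapsed = asyncio.run(main())
```

Note that this does use threads under the hood (the default executor is a ThreadPoolExecutor), which is exactly what the question hopes to avoid; it is shown only to contrast with the polling approach above.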
Source: Docker Questions