#### Category: celery

I am working on a Docker-based, Celery-backed Python application in which one of the tasks is to trigger and send a text message to a given number. The workflow is as follows: the user uploads a CSV with a set of entries to whom the text message should be sent, and a cron job polls the database every 60 ..

I have 2 different tasks defined and I would like to route each task to its corresponding queue. However, when I try to specify a queue using the -Q option, the tasks do not get consumed by Celery; everything works as expected if I run without the -Q option. I am using celery==4.2.0. Am ..

I deployed two Dockerfiles to Heroku (Celery, Django), but because no volume is shared between them as there is with docker-compose, Django tries to return the file (which was previously generated by Celery) and cannot find it. How can I solve the problem? Source: Docker..

I am trying to use celery with docker in plesk, plesk (no support for using docker-compose) can i run celery inside my container for production use? Source: Docker..

I’ve been using Celery for a while now. In production I use RabbitMQ as the broker and Redis for the backend in a K8s cluster with no problems so far. Locally, I run a docker compose with a few services (Flask API, 2 different workers, Beat, Redis, Flower, Hasura), using Redis as both the ..
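A local setup like the one described, with Redis doubling as broker and result backend, might look like the following compose fragment. Service names, image tags, and environment variable names are illustrative assumptions:

```yaml
services:
  redis:
    image: redis:7
  worker:
    build: .
    command: celery -A app worker --loglevel=info
    environment:
      CELERY_BROKER_URL: redis://redis:6379/0
      CELERY_RESULT_BACKEND: redis://redis:6379/1
    depends_on:
      - redis
  beat:
    build: .
    command: celery -A app beat --loglevel=info
    depends_on:
      - redis
```

Using separate Redis databases (`/0` and `/1`) keeps broker messages and task results from mixing in one keyspace.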

I need help solving a problem where the Celery executor fails. Below is my architecture: Airflow 1.10.7; the Airflow scheduler, webserver and workers running on Docker over AWS EC2 instances; S3Fuse 1.89. This is a screenshot taken from Flower: the cluster works well, but for no apparent reason all the tasks scheduled on a Celery ..

Below is my Dockerfile; I am creating an image from it and starting my container: ENTRYPOINT ["/usr/bin/docker-entrypoint.sh"] My docker-entrypoint.sh is as follows: set -eo pipefail if [ "${#}" -ne 0 ]; then exec "${@}" else gunicorn --bind "0.0.0.0:${SUPERSET_PORT}" --access-logfile '-' --error-logfile '-' --workers 10 --worker-class gthread -k gevent --threads 20 --timeout 6000 --limit-request-line ..
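Reflowed into an actual script, the entrypoint above might look like the sketch below. Note one likely bug in the pasted command: `-k` and `--worker-class` are the same gunicorn option, so passing both `gthread` and `gevent` means only the last one takes effect; the sketch keeps just one. The WSGI application target is omitted because the original snippet is truncated before it.

```sh
#!/bin/bash
# pipefail requires bash; plain POSIX sh does not support it.
set -eo pipefail

# If the container was given arguments, run them instead of gunicorn.
if [ "${#}" -ne 0 ]; then
    exec "${@}"
else
    exec gunicorn \
        --bind "0.0.0.0:${SUPERSET_PORT}" \
        --access-logfile '-' \
        --error-logfile '-' \
        --workers 10 \
        --worker-class gthread \
        --threads 20 \
        --timeout 6000 \
        "$APP_MODULE"   # hypothetical placeholder; the real target is cut off in the post
fi
```

Using `exec` in both branches makes gunicorn (or the passed command) PID 1, so Docker stop signals are delivered to it rather than to a wrapper shell.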