I am developing a Rust application and deploying it to an Intel NUC with Docker. I am successfully using Redis, RabbitMQ, and PostgreSQL with the Rust application on the Intel NUC, and I can use Rust with MinIO in local development. But I cannot access the MinIO server from the Rust application, even though I can access MinIO ..
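When the other backing services work but MinIO does not, a common cause is the Rust container addressing MinIO at localhost:9000 instead of the MinIO service name on the Docker network. A minimal compose sketch, assuming hypothetical service names app and minio:

```yaml
services:
  minio:
    image: minio/minio
    command: server /data
    ports:
      - "9000:9000"
  app:
    build: .
    environment:
      # inside the Docker network, reach MinIO by service name, not localhost
      - MINIO_ENDPOINT=http://minio:9000
    depends_on:
      - minio
```

The Rust code would then read the endpoint from MINIO_ENDPOINT rather than hard-coding 127.0.0.1.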
I am trying to add the s3fs, UpSetPlot, and Matplotlib Python libraries to a Lambda layer, but due to the size limit I am unable to add them. I tried uploading a ZIP file via S3, but that also exceeds the limits. The requirements.txt is like: matplotlib==3.3.4 mpld3==0.5.5 s3fs==2021.8.1 UpSetPlot==0.6.0 Source: Docker..
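Since the combined dependencies exceed the 250 MB unzipped limit that applies to layers, one alternative is packaging the function as a Lambda container image, which allows up to 10 GB. A sketch Dockerfile, assuming a hypothetical app.py exposing a handler function and the requirements.txt above:

```dockerfile
# Lambda container image instead of a layer (handler name is an assumption)
FROM public.ecr.aws/lambda/python:3.8
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py ./
CMD ["app.handler"]
```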
I am trying to build a Docker container that runs a Node.js app using @aws-sdk/client-s3. When I attempt to run the container I keep getting: yarn run v1.22.5 $ node dist/main node:internal/modules/cjs/loader:1183 return process.dlopen(module, path.toNamespacedPath(filename)); ^ Error: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /usr/src/app/node_modules/aws-crt/dist/bin/linux-x64/aws-crt-nodejs.node) at Object.Module._extensions..node (node:internal/modules/cjs/loader:1183:18) at ..
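ld-linux-x86-64.so.2 is the glibc dynamic loader; the prebuilt aws-crt-nodejs.node native module is linked against glibc, so it cannot load on a musl-based Alpine image. Switching the runtime image to a glibc-based base usually resolves it; a sketch, assuming an already-built dist/ folder and Node 16 (tags are assumptions):

```dockerfile
# glibc-based image instead of node:16-alpine, so aws-crt's binary can load
FROM node:16-slim
WORKDIR /usr/src/app
COPY package.json yarn.lock ./
RUN yarn install --production
COPY dist ./dist
CMD ["node", "dist/main"]
```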
I’ve created an app that bind mounts a folder with CSV files. These CSV files get semi-frequent updates and are stored in AWS S3. How can I update the data within the bind-mount folder when the source data in S3 is updated? Or should I just create a new Docker instance when I notice ..
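Rather than recreating the container, a sidecar that periodically runs aws s3 sync into the shared folder keeps the bind mount fresh without touching the app. A compose sketch with placeholder image, bucket, and interval:

```yaml
services:
  app:
    image: my-app                # placeholder
    volumes:
      - ./data:/app/data
  s3-sync:
    image: amazon/aws-cli
    entrypoint: /bin/sh
    # pull changed CSVs every 5 minutes (bucket/prefix are placeholders)
    command: -c 'while true; do aws s3 sync s3://my-bucket/csv /data; sleep 300; done'
    volumes:
      - ./data:/data
```

aws s3 sync only transfers changed objects, so the loop is cheap between actual updates.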
I’m trying to configure Docker Registry v2 on an EKS cluster. I’d like to use S3 as the storage backend with credentials managed by a service account, but it seems that doesn’t work. I logged into the running pod to check permissions using: aws sts get-caller-identity; aws s3 ls s3://BUCKET_NAME; aws s3 cp s3://BUCKET_NAME/FILENAME; aws s3api put-object --bucket ..
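For reference, the registry's S3 driver is configured under the storage key of its config file; with IRSA the accesskey/secretkey fields are left out so the SDK falls back to the web-identity credentials from the service account. Note that this only works if the registry build ships an AWS SDK recent enough to support web-identity tokens, which older registry releases do not. A sketch with placeholder bucket and region:

```yaml
storage:
  s3:
    region: eu-west-1        # placeholder
    bucket: BUCKET_NAME      # placeholder
    # no accesskey/secretkey: let the SDK pick up the
    # service-account web identity token (IRSA)
```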
My application is running in a Docker container. Using the command below I am able to see the logs: docker-compose logs celery_1 | . settings.celery.partial_bid_created_event celery_1 | . settings.celery.partial_bid_removed_event celery_1 | . settings.celery.process_sale_canceled_event celery_1 | . settings.celery.process_sale_created_event celery_1 | . settings.celery.process_sale_successful_event celery_1 | celery_1 | [2021-08-22 06:17:02,066: INFO/MainProcess] Connected to redis://redis:6379// celery_1 | [2021-08-22 06:17:02,080: INFO/MainProcess] mingle: searching for ..
I am trying to run my microservice app (ASP.NET Core) with Docker and get it working with Amazon AWS S3 buckets. I am a total noob with AWS, so I read the documentation and got this working, but without Docker; now I am trying to use it with Docker but ..
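One frequent snag when moving to Docker is that the SDK can no longer read the credentials profile from the host user's home directory; passing the standard AWS environment variables into the container is the usual workaround. A compose sketch with placeholder service name and region:

```yaml
services:
  api:
    build: .
    environment:
      - AWS_REGION=us-east-1                          # placeholder
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}        # taken from the host env
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
```

The AWS SDK for .NET picks these variables up automatically as part of its default credential chain.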
I have a quite complex system in Docker. Everything runs from one big docker-compose file. Previously everything ran on one (manager) node in my Docker Swarm, so I generated a cert for my domain (with certbot) and used the MinIO service below in my compose file: object_storage: image: minio/minio:RELEASE.2020-12-10T01-54-29Z ports: - 9000:9000 ..
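MinIO picks up TLS certificates from ~/.minio/certs, expecting the files to be named public.crt and private.key; mounting the certbot output there under those names is one way to serve the domain cert. A sketch, with the domain path as a placeholder:

```yaml
object_storage:
  image: minio/minio:RELEASE.2020-12-10T01-54-29Z
  ports:
    - 9000:9000
  volumes:
    # certbot output mounted under the filenames MinIO expects
    - /etc/letsencrypt/live/example.com/fullchain.pem:/root/.minio/certs/public.crt:ro
    - /etc/letsencrypt/live/example.com/privkey.pem:/root/.minio/certs/private.key:ro
```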
I am deploying a simple CronJob calling an image (running the AWS CLI, boto3, and zip) from my private repo. That image basically runs a backup of my server and then dumps the logs to S3. The job worked perfectly using my previous python3 image. Now I am using the latest python3 from my repo and ..
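Since the job broke after moving to the latest image, pinning the base image to a known-good tag (or digest) makes the cron image reproducible and lets you bisect what changed. A Dockerfile sketch; the tag, tool list, and backup.sh script are assumptions:

```dockerfile
# pin an exact tag instead of a floating "latest" (tag is an assumption)
FROM python:3.9.6-slim
RUN pip install --no-cache-dir boto3 awscli && \
    apt-get update && apt-get install -y --no-install-recommends zip && \
    rm -rf /var/lib/apt/lists/*
# hypothetical backup script that zips the data and uploads logs to S3
COPY backup.sh /backup.sh
CMD ["/backup.sh"]
```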
I have a Django application and related services running on an EC2 Ubuntu instance. The static files are stored in and fetched from S3 using the django-storages library and work fine when I start the services using docker-compose. But recently I tried to set it up on 2 different EC2 servers running under the same security ..
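For comparing the two servers, the relevant django-storages configuration is only a few settings; a sketch with placeholder bucket and region (credentials would normally come from the environment or an instance role, not from settings.py):

```python
# settings.py fragment -- bucket and region names are placeholders
STATICFILES_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_STORAGE_BUCKET_NAME = "my-static-bucket"
AWS_S3_REGION_NAME = "us-east-1"
AWS_DEFAULT_ACL = None  # avoid deprecated per-object ACL warnings
```

If one server works and the other does not under the same settings, the difference is usually in the credentials available to each instance rather than in this fragment.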