I have deployed some services in Azure instances; these containers are turned on one or two days a week and turned off for the remaining days. The containers expose a Flask API that I access through the public IP. The problem I'm having is that when I restart a container, the public IP is not ..
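For reference, Azure public IPs are dynamic by default and can change on restart; a statically allocated one keeps its address. A sketch using the Azure CLI, where the resource-group and IP names are placeholders of mine:

```
# Allocate a public IP whose address survives deallocate/restart cycles
az network public-ip create \
  --resource-group myResourceGroup \
  --name myPublicIP \
  --allocation-method Static
```

The IP then has to be associated with the instance's network interface so the Flask API stays reachable at the same address.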
When I was training my model (using a Jupyter notebook), I got a message in Jupyter like "Kernel died…". When I tried to restore everything, it didn't work. Then I decided to restart my container; however, I got the following message: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"jupyter\": executable ..
I'm running Confluent Docker and am trying to add more brokers using docker-compose. (I tried docker-compose scale kafka=3, but it didn't work for me.) This is my docker-compose.yml: version: '2' services: broker: image: confluentinc/cp-enterprise-kafka:5.3.1 hostname: broker container_name: broker depends_on: - zookeeper ports: - "29092:29092" - "9092:9092" environment: KAFKA_BROKER_ID: 1 KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181' KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092 KAFKA_METRIC_REPORTERS: ..
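For context, `docker-compose scale` cannot work with this file: the fixed `container_name`, the hard-coded `KAFKA_BROKER_ID: 1`, and the host-port mappings all collide when duplicated. One common workaround is to declare each broker as its own service. A sketch of a second broker; the service name and the 29093/9093 ports are my assumptions:

```yaml
  broker-2:
    image: confluentinc/cp-enterprise-kafka:5.3.1
    hostname: broker-2
    container_name: broker-2
    depends_on:
      - zookeeper
    ports:
      - "29093:29093"
      - "9093:9093"
    environment:
      KAFKA_BROKER_ID: 2    # must be unique per broker
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      # advertised listeners must name this broker and its own ports
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker-2:29093,PLAINTEXT_HOST://localhost:9093
```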
I want to deploy my application to Heroku when I push to master in my Bitbucket repo. I have the bitbucket-pipelines.yml file set up, and it doesn't seem to have any syntax errors, but the build fails while reading my $HEROKU_API_KEY. This key is in my .env file and logs to the console when I log it ..
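Worth noting: Pipelines builds run in a fresh container that never sees a local .env file, so $HEROKU_API_KEY has to be defined as a repository variable in the Bitbucket settings instead. A minimal sketch of the pipeline, where the app name `my-app` is a placeholder:

```yaml
pipelines:
  branches:
    master:
      - step:
          name: Deploy to Heroku
          deployment: production
          script:
            # $HEROKU_API_KEY comes from Repository settings > Repository variables
            - git push https://heroku:$HEROKU_API_KEY@git.heroku.com/my-app.git HEAD:master
```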
I took https://github.com/fraigo/docker-php4 and modified the Dockerfile thusly: # #-----[ OPEN ]------------------------------------ # Dockerfile # #-----[ FIND ]------------------------------------ # && wget --no-check-certificate https://curl.haxx.se/download/archeology/curl-7.12.0.tar.gz # #-----[ BEFORE, ADD ]----------------------------- # && wget https://www.openssl.org/source/old/0.9.x/openssl-0.9.8e.tar.gz && tar zxvf openssl-0.9.8e.tar.gz && cd openssl-0.9.8e && CFLAGS=-fPIC ./config shared --prefix=/opt --openssldir=/opt no-asm && make && make install # && cd /tmp/install ..
So I'm pretty decent at using nginx and .NET Core to set up a secure server on bare metal or a virtual machine. My new company uses Azure, and I mostly use AWS at home. I'm still a little green when it comes to Docker and containerizing applications. I tested a container locally that works; this is ..
I'm setting up the Docker infrastructure on my home server and am using macvlan networking for every container. (This avoids NAT and port mapping, lets me use IPv6, and lets me assign static IPs with names in dnsmasq's hosts file, etc.) One thing I'd like is to access services that normally run on "non-standard" ports on ..
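For context, the per-container macvlan setup described above looks roughly like this in Compose; the subnet, gateway, parent interface, and addresses are assumptions for illustration:

```yaml
networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0            # host NIC the containers attach to
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1

services:
  web:
    image: nginx
    networks:
      lan:
        ipv4_address: 192.168.1.50   # static IP, named in dnsmasq's hosts file
```

Each container then gets its own LAN address, so no host ports need to be published.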
I intend to use version 2.0 of Traefik and have therefore tested something first. If I want to use Swarm mode, I always get the following log entries: time="2019-09-22T21:30:28Z" level=error msg="Provider connection error Cannot connect to the Docker daemon at tcp://127.0.0.1:2377. Is the docker daemon running?, retrying in 507.606314ms" providerName=docker My docker.yml file: version: ..
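One detail that stands out in that log: port 2377 is Swarm's cluster-management port, not the Docker API, so the provider can never connect there. In Traefik v2 the Docker provider would normally point at the daemon socket (or the API on 2375/2376) with Swarm mode enabled; a sketch of the static configuration:

```yaml
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"   # the Docker API, not the Swarm port
    swarmMode: true
    exposedByDefault: false
```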
Consider I have this docker-compose.yml: version: '3' services: webapp: image: some_image:repo container_name: my_container ports: - "8080:8080" #entrypoint: /home/my_tool/ command: bash -c "/home/my_tool/runSomeScript.sh argument1 argument2 && /bin/bash" stdin_open: true tty: true And this is my script inside the image/container that will be running: $ cat runSomeScript.sh #!/bin/bash echo "First arg: $1" echo "Second arg: $2" ..
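As a quick sanity check outside Docker, the quoted `command` above hands `argument1` and `argument2` to the script as positional parameters `$1` and `$2`. A runnable sketch (the /tmp path is just for the demo):

```shell
#!/bin/sh
# Recreate the script, then invoke it the same way the compose `command` does.
cat > /tmp/runSomeScript.sh <<'EOF'
#!/bin/bash
echo "First arg: $1"
echo "Second arg: $2"
EOF
chmod +x /tmp/runSomeScript.sh

bash -c "/tmp/runSomeScript.sh argument1 argument2"
# prints:
# First arg: argument1
# Second arg: argument2
```

Note the arguments are hard-coded inside the quoted string, so changing them at runtime means overriding `command` (e.g. with `docker-compose run`).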
I'm having a problem because I've installed and started Docker as "bad_user". The problem is that this machine generates static files (it's the jekyll/jekyll image), and those files are owned by "bad_user", so I cannot edit them (I know I could add myself to the bad_user group or take ownership of the directory with chown -R, but it would ..
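One way to avoid the ownership problem for newly generated files is to run the container as your own UID/GID, so the output is owned by you rather than by the user who started the daemon. A sketch (the volume path is a placeholder, and whether the jekyll/jekyll image tolerates an arbitrary UID depends on its entrypoint):

```
docker run --rm \
  --user "$(id -u):$(id -g)" \
  -v /path/to/site:/srv/jekyll \
  jekyll/jekyll jekyll build
```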