I’m having trouble executing AWS commands on a GitLab runner. I’m running Docker-in-Docker, and I then do pip install awscli. This works for my dev deploy, but not when I try to push to production. So weird. Any ideas? A snippet of the log: Pulling docker image docker:latest … Using docker ..
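For context, a common way to install the AWS CLI inside a DinD job looks roughly like this — a minimal sketch, assuming a `docker:latest` job image (which is Alpine-based); the job name and image tag are illustrative, not taken from the question:

```yaml
# Hypothetical .gitlab-ci.yml sketch: AWS CLI inside a DinD job.
deploy:
  image: docker:latest
  services:
    - docker:dind
  before_script:
    # docker:latest is Alpine-based; aws-cli is available as an apk
    # package, which avoids installing pip first.
    - apk add --no-cache aws-cli
  script:
    - aws --version
    - docker build -t myapp .   # placeholder image name
```

If the dev and production jobs use different runners or base images, a difference in the available package manager (apk vs. apt) is a frequent reason the same install line works in one job and not the other.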
I have a CI/CD pipeline running in GitLab. The first few lines of the pipeline configuration file .gitlab-ci.yml look like this: # jobs can use the following stages stages: - lint - build - test - deliver - publish image: python:3.7 This statement tells me that all CI/CD tests are going to be run in a ..
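Reflowed into YAML, the excerpted configuration presumably reads:

```yaml
# jobs can use the following stages
stages:
  - lint
  - build
  - test
  - deliver
  - publish

image: python:3.7
```

A top-level `image:` like this sets the default container image for every job that does not override it with its own `image:` key.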
I’ve set up the dind image according to the instructions at https://hub.docker.com/_/docker. How can the network be configured so that Docker containers run from within the dind container have access to the internet? Currently, wget, curl, and apk update work properly directly within a container run from the dind image, but when something is ..
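A common cause of this symptom is DNS resolution failing inside the nested containers. One sketch of a fix, assuming the standard `docker:dind` image, is to pass public resolvers to the inner daemon (the arguments after the image name are forwarded to `dockerd`):

```shell
# Hypothetical sketch: start the inner Docker daemon with explicit DNS
# servers, so containers it launches can resolve external hostnames.
docker run --privileged -d --name dind docker:dind \
  --dns 8.8.8.8 --dns 8.8.4.4

# Containers started inside dind should then resolve names:
docker exec dind docker run --rm alpine nslookup example.com
```

The same setting can instead go in the inner daemon's /etc/docker/daemon.json as `{"dns": ["8.8.8.8"]}`.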
I have deployed a pool of self-hosted GitHub runners as pods to my Kubernetes cluster. Some of our pipelines contain jobs which run container actions. Is it possible to run those jobs on this type of runner? Docker-in-Docker is configured in the deployment, and I can build Docker images and push them ..
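For reference, a minimal workflow exercising a container action on such runners might look like this — a sketch with assumed labels and images, not taken from the question:

```yaml
# Hypothetical workflow sketch: a job using a Docker container action,
# which requires the runner to be able to launch containers (e.g. DinD).
name: container-action-test
on: push
jobs:
  hello:
    runs-on: self-hosted        # assumed label for the Kubernetes runner pool
    steps:
      - uses: actions/checkout@v4
      - name: Run a container action
        uses: docker://alpine:3.19
        with:
          args: echo "hello from a container action"
```

Container actions need a reachable Docker daemon on the runner, so whether this works depends on how the DinD sidecar is wired into the runner pod.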
I’m trying to mitigate the Docker Hub pull limit by logging in to a Docker Hub account in my gitlab-runner. I’m not using methods like GitLab’s Dependency Proxy because I would have to edit hundreds of files. I decided to log in to Docker in gitlab-runner. .gitlab-ci.yml: image: docker services: - docker:dind stages: - base docker-build: stage: ..
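One way to sketch the login step, assuming masked CI/CD variables named DOCKERHUB_USER and DOCKERHUB_TOKEN (illustrative names, not from the question):

```yaml
# Hypothetical sketch: authenticate to Docker Hub before building.
docker-build:
  stage: base
  image: docker
  services:
    - docker:dind
  before_script:
    # --password-stdin keeps the token out of the job's process list.
    - echo "$DOCKERHUB_TOKEN" | docker login -u "$DOCKERHUB_USER" --password-stdin
  script:
    - docker build -t myimage .   # placeholder image name
```

Note that this only authenticates pulls made by the job's own `docker` commands; images pulled by the runner itself (the job image and the dind service image) are authenticated separately, e.g. via the DOCKER_AUTH_CONFIG CI/CD variable.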
I’ve been binding the host Docker socket and CLI so that I can run docker and compose commands from within running containers, and this worked for over a year without issue. But since updating to Docker 20.10.7 and Compose 1.29.2, I can’t get my containerised environment to launch without the following error: invalid mount config for ..
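The setup described is often called "docker-out-of-docker"; a minimal compose sketch of it, with assumed host paths, looks like:

```yaml
# Hypothetical sketch: bind-mount the host daemon socket and CLI binary
# into a container so it can drive the host's Docker daemon.
services:
  tools:
    image: alpine:3.19
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/bin/docker:ro   # host path assumed; must exist
    command: docker ps
```

"invalid mount config" errors on bind mounts frequently point at a source path that no longer exists on the host after an upgrade (for instance, a docker binary that moved), so checking each mount source is a reasonable first step.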
I have mounted a shared volume to my service main. Now I am trying to mount that same volume to another container, client, which is started with docker-compose up client from within the main container (Docker-in-Docker): version: "3.8" # set COMPOSE_PROJECT_NAME=default before running `docker-compose up main` services: main: image: rbird/docker-compose:python-3.9-slim-buster privileged: true entrypoint: docker-compose up ..
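A sketch of the shared-volume pattern being attempted, reconstructed under assumptions (the `shared` volume name and the client image are illustrative):

```yaml
# Hypothetical sketch: a named volume used by two services. If the inner
# docker-compose run uses the same project name (COMPOSE_PROJECT_NAME=default),
# both resolve to the same underlying volume (default_shared).
version: "3.8"
services:
  main:
    image: rbird/docker-compose:python-3.9-slim-buster
    privileged: true
    volumes:
      - shared:/data
  client:
    image: alpine:3.19
    volumes:
      - shared:/data
volumes:
  shared:
```

The project-name prefix is why the comment in the question pins COMPOSE_PROJECT_NAME: without it, the inner compose invocation would create a differently named volume instead of reusing the existing one.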
I need to have an Ubuntu image and then run a build process using that image. All is well until the build gets to the point of doing docker build, etc. Let’s say I use the following to test this: Dockerfile FROM ubuntu:latest I then build that: docker build -t ubuntudkr . Next, I ..
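Worth noting: `FROM ubuntu:latest` alone contains no docker binary, so any `docker build` run inside it will fail with "command not found". A minimal sketch of an Ubuntu image that can at least talk to a Docker daemon (the daemon itself must still come from a dind service or a mounted host socket):

```dockerfile
# Hypothetical sketch: Ubuntu base with the Docker CLI installed.
FROM ubuntu:latest
RUN apt-get update \
    && apt-get install -y --no-install-recommends docker.io ca-certificates \
    && rm -rf /var/lib/apt/lists/*
```

With this image, `docker` commands work once DOCKER_HOST points at a reachable daemon.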
I have my Jenkins controller node running in Kubernetes via the Kubernetes plugin, and I provision a pod with two build containers to handle all of my stage builds; the actual operations are done using Ansible playbooks. We are using a combination of Jenkinsfiles and Groovy scripted pipeline templates to do the builds. Currently, I can ..
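The two-container agent pod described can be sketched with the Kubernetes plugin like this — container names and images are assumptions for illustration, not the poster's actual setup:

```groovy
// Hypothetical Jenkinsfile sketch: one agent pod, two build containers,
// stages dispatched into each with container().
pipeline {
  agent {
    kubernetes {
      yaml '''
        apiVersion: v1
        kind: Pod
        spec:
          containers:
          - name: build
            image: maven:3.9-eclipse-temurin-17   # assumed build image
            command: [sleep]
            args: [infinity]
          - name: ansible
            image: cytopia/ansible:latest         # assumed ansible image
            command: [sleep]
            args: [infinity]
      '''
    }
  }
  stages {
    stage('Build') {
      steps { container('build') { sh 'mvn -v' } }
    }
    stage('Deploy') {
      steps { container('ansible') { sh 'ansible-playbook --version' } }
    }
  }
}
```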
In my organization we use GitLab CI, and our pipeline contains a job which runs in a Docker image. The job runs a script within the Docker image, and that script itself spins up a Docker container for testing purposes, making this a Docker-in-Docker situation. The GitLab job definition is very similar to the following: ..
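The question's actual job definition is cut off above; purely for context, a generic GitLab DinD job of the kind described typically has this shape (names and the TLS-disabled wiring are illustrative, not the poster's configuration):

```yaml
# Hypothetical sketch: a job whose script starts containers of its own
# via a docker:dind service. Real setups often enable TLS instead.
test:
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker run --rm alpine:3.19 echo "inner container ran"
```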