I am using a GitLab CI/CD pipeline with the Docker executor to check out hardware for testing over SSH, blocking other users/processes from accessing it during the session. Upon completion, the hardware is released for the next user. If I cancel the job before the runner is finished, the job is killed without being allowed to shut down gracefully, leaving the hardware in a checked-out state.
From the Docker logs of the runner I can see that a 403 error is received when I select cancel from the UI. However, when I run commands against the Docker container locally, everything works fine; for instance, docker kill <container-name> --signal=SIGTERM from the command line behaves as expected. Below are the logs from the Docker container after clicking cancel:
Checking for jobs... received                       job=23857 repo_url=*** runner=w7yc61Bu
WARNING: Appending trace to coordinator... aborted  code=403 job=23857 job-log= job-status=canceled runner=w7yc61Bu sent-log=1702-1767 status=403 Forbidden
WARNING: Job failed: canceled                       duration=48.963024553s job=23857 project=21 runner=w7yc61Bu
WARNING: Appending trace to coordinator... aborted  code=403 job=23857 job-log= job-status=canceled runner=w7yc61Bu sent-log=1702-2119 status=403 Forbidden
WARNING: Submitting job to coordinator... aborted   code=403 job=23857 job-status=canceled runner=w7yc61Bu
WARNING: Failed to process runner                   builds=0 error=canceled executor=docker runner=w7yc61Bu
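To clarify what I mean by a graceful shutdown, this is roughly the behavior I want from the job script when it receives SIGTERM. It is only a minimal sketch: release_hardware and the lock file path are placeholders for my real check-out logic, not actual code from my pipeline.

```shell
#!/bin/sh
# Hypothetical sketch of the cleanup I want on cancellation; LOCK_FILE and
# release_hardware stand in for my real hardware check-out logic.

LOCK_FILE=/tmp/hw.lock

release_hardware() {
    rm -f "$LOCK_FILE"
    echo "hardware released"
}

# Run the cleanup when this shell receives SIGTERM, which is what
# `docker kill --signal=SIGTERM <container>` delivers to PID 1.
trap 'release_hardware' TERM

touch "$LOCK_FILE"          # "check out" the hardware
echo "hardware checked out"

# Demonstrate the trap by signalling ourselves, as a cancel should.
kill -TERM $$
```

Running this locally, the trap fires and the lock file is removed; the problem is that on a UI cancel no such signal ever reaches the job's shell.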
Peeking at some of the gitlab-runner source code, there seem to be Cancel functions that do literally nothing, and issue #3031 seems to touch on this topic as well. I forced my Dockerfile to build with ENTRYPOINT set to dumb-init so that signals are passed through to child processes, but I am still stumped as to how to catch the signal coming from the UI. Is there a way to expose access to the Docker container that is spun up for the job?
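For context, the ENTRYPOINT change I made looks roughly like this. The base image and script path are illustrative, not my actual ones:

```dockerfile
# Illustrative Dockerfile fragment; base image and script path are placeholders.
FROM debian:bookworm-slim

RUN apt-get update && apt-get install -y --no-install-recommends dumb-init \
    && rm -rf /var/lib/apt/lists/*

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

# dumb-init runs as PID 1 and forwards signals (e.g. SIGTERM) to its child.
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/entrypoint.sh"]
```

This forwards SIGTERM correctly when I send it by hand with docker kill, but a cancel from the GitLab UI never seems to result in that signal being sent at all.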