Docker – Disk space still full after pruning

I have a problem when pruning Docker. After building images, I run `docker system prune --volumes -a -f`, but it does not release the space used by `/var/lib/docker/overlay2`. Here is what I see:
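To cross-check, Docker's own accounting can be compared against the raw directory size (`docker system df` reports what the daemon tracks: images, containers, local volumes, and build cache):

```shell
# What the Docker daemon accounts for
docker system df

# What is actually on disk under the overlay2 storage driver (needs root)
sudo du -sh /var/lib/docker/overlay2
```

If the `du` figure is much larger than the totals from `docker system df`, the extra space is in data the daemon no longer tracks.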

Before building the image, disk space & /var/lib/docker/overlay2 size:

    user@host:~/tmp/app$ df -hv
    Filesystem      Size  Used Avail Use% Mounted on
    udev            1.9G     0  1.9G   0% /dev
    tmpfs           390M  5.4M  384M   2% /run
    /dev/nvme0n1p1   68G   20G   49G  29% /
    tmpfs           2.0G  8.0K  2.0G   1% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
    tmpfs           390M     0  390M   0% /run/user/1000
    user@host:~/tmp/app$ sudo du -hs /var/lib/docker/overlay2
    8.0K    /var/lib/docker/overlay2

Building the image

    user@host:~/tmp/app$ docker build -f ./Dockerfile .
    Sending build context to Docker daemon  1.027MB
    Step 1/12 : FROM mhart/alpine-node:9 as base
    9: Pulling from mhart/alpine-node
    ff3a5c916c92: Pull complete 
    c77918da3c72: Pull complete 
    Digest: sha256:3c3f7e30beb78b26a602f12da483d4fa0132e6d2b625c3c1b752c8a8f0fbd359
    Status: Downloaded newer image for mhart/alpine-node:9
     ---> bd69a82c390b
    .....
    ....
    Successfully built d56be87e90a4

Sizes after the image is built:

    user@host:~/tmp/app$ df -hv
    Filesystem      Size  Used Avail Use% Mounted on
    udev            1.9G     0  1.9G   0% /dev
    tmpfs           390M  5.4M  384M   2% /run
    /dev/nvme0n1p1   68G   21G   48G  30% /
    tmpfs           2.0G  8.0K  2.0G   1% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
    tmpfs           390M     0  390M   0% /run/user/1000
    user@host:~/tmp/app$ sudo du -hs /var/lib/docker/overlay2
    3.9G    /var/lib/docker/overlay2
    user@host:~/tmp/app$ docker system prune -af --volumes
    Deleted Images:
    deleted: sha256:ef4973a39ce03d2cc3de36d8394ee221b2c23ed457ffd35f90ebb28093b40881
    deleted: sha256:c3a0682422b4f388c501e29b446ed7a0448ac6d9d28a1b20e336d572ef4ec9a8
    deleted: sha256:6988f1bf347999f73b7e505df6b0d40267dc58bbdccc820cdfcecdaa1cb2c274
    deleted: sha256:50aaadb4b332c8c1fafbe30c20c8d6f44148cae7094e50a75f6113f27041a880
    untagged: alpine:3.6
    untagged: alpine@sha256:ee0c0e7b6b20b175f5ffb1bbd48b41d94891b0b1074f2721acb008aafdf25417
    deleted: sha256:d56be87e90a44c42d8f1c9deb188172056727eb79521a3702e7791dfd5bfa7b6
    deleted: sha256:067da84a69e4a9f8aa825c617c06e8132996eef1573b090baa52cff7546b266d
    deleted: sha256:72d4f65fefdf8c9f979bfb7bce56b9ba14bb9e1f7ca676e1186066686bb49291
    deleted: sha256:037b7c3cb5390cbed80dfa511ed000c7cf3e48c30fb00adadbc64f724cf5523a
    deleted: sha256:796fd2c67a7bc4e64ebaf321b2184daa97d7a24c4976b64db6a245aa5b1a3056
    deleted: sha256:7ac06e12664b627d75cd9e43ef590c54523f53b2d116135da9227225f0e2e6a8
    deleted: sha256:40993237c00a6d392ca366e5eaa27fcf6f17b652a2a65f3afe33c399fff1fb44
    deleted: sha256:bafcf3176fe572fb88f86752e174927f46616a7cf97f2e011f6527a5c1dd68a4
    deleted: sha256:bbcc764a2c14c13ddbe14aeb98815cd4f40626e19fb2b6d18d7d85cc86b65048
    deleted: sha256:c69cad93cc00af6cc39480846d9dfc3300c580253957324872014bbc6c80e263
    deleted: sha256:97a19d85898cf5cba6d2e733e2128c0c3b8ae548d89336b9eea065af19eb7159
    deleted: sha256:43773d1dba76c4d537b494a8454558a41729b92aa2ad0feb23521c3e58cd0440
    deleted: sha256:721384ec99e56bc06202a738722bcb4b8254b9bbd71c43ab7ad0d9e773ced7ac
    untagged: mhart/alpine-node:9
    untagged: mhart/alpine-node@sha256:3c3f7e30beb78b26a602f12da483d4fa0132e6d2b625c3c1b752c8a8f0fbd359
    deleted: sha256:bd69a82c390b85bfa0c4e646b1a932d4a92c75a7f9fae147fdc92a63962130ff

    Total reclaimed space: 122.2MB

It reclaims only 122.2 MB. Sizes after the prune:

    user@host:~/tmp/app$ df -hv
    Filesystem      Size  Used Avail Use% Mounted on
    udev            1.9G     0  1.9G   0% /dev
    tmpfs           390M  5.4M  384M   2% /run
    /dev/nvme0n1p1   68G   20G   48G  30% /
    tmpfs           2.0G  8.0K  2.0G   1% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
    tmpfs           390M     0  390M   0% /run/user/1000
    user@host:~/tmp/app$ sudo du -hs /var/lib/docker/overlay2
    3.7G    /var/lib/docker/overlay2

As you can see, there are no containers or images left:

    user@host:~/tmp/app$ docker ps -a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    user@host:~/tmp/app$ docker images -a
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

But the size of `/var/lib/docker/overlay2` has only decreased from 3.9G to 3.7G, and if I build more than one image, it increases every time. This is the Dockerfile I'm building:

    FROM mhart/alpine-node:9 as base
    RUN apk add --no-cache make gcc g++ python
    WORKDIR /app
    COPY package.json /app
    RUN npm install --silent

    # Only copy over the node pieces we need from the above image
    FROM alpine:3.6
    COPY --from=base /usr/bin/node /usr/bin/
    COPY --from=base /usr/lib/libgcc* /usr/lib/libstdc* /usr/lib/
    WORKDIR /app
    COPY --from=base /app .
    COPY . .
    CMD ["node", "server.js"]

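For reference, the leftover layer directories can be listed largest-first, to see whether a few directories account for the bulk of the remaining space (the path assumes the default data root, `/var/lib/docker`):

```shell
# Largest layer directories under overlay2, biggest first
sudo du -sh /var/lib/docker/overlay2/* 2>/dev/null | sort -hr | head -n 10
```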
Why isn't it cleaning the `overlay2` folder? How can I handle this? Is there a solution? Is it a known bug?

Source: StackOverflow