Using Django with db.sqlite3 on a persistent volume in a Kubernetes pod fails with – django.db.utils.OperationalError: unable to open database file

  django, docker, kubernetes, persistent-volumes, sqlite

I’m trying to deploy a small Django app that creates its own db.sqlite3 database in a Kubernetes pod. Without a persistent volume for db.sqlite3 it works fine, but when I try to store it on a persistent volume, the pod outputs django.db.utils.OperationalError: unable to open database file

The first problem I hit was when these commands:

python ./manage.py migrate
sh -c "envdir ${ENVDIR} python manage.py collectstatic"

were run in the Dockerfile. After the volume was mounted, none of my files were visible. I learned that Kubernetes volumes behave differently from Docker volumes: mounting over a path hides whatever the image put there. My solution was to move the commands into a shell script and run it from the CMD or ENTRYPOINT, so the files are created after mounting and are visible. That works, but it still doesn’t solve the current problem.
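For reference, the entrypoint script looks roughly like this (the script name and the --noinput flags are just how I set it up, not anything special):

```shell
#!/bin/sh
# entrypoint.sh -- runs at container start, AFTER volumes are mounted,
# so migrate/collectstatic write into the mounted volume instead of
# being hidden by the mount.
set -e

python ./manage.py migrate --noinput
envdir "${ENVDIR}" python manage.py collectstatic --noinput

# Hand off to the container's main command (e.g. runserver)
exec "$@"
```

with `ENTRYPOINT ["./entrypoint.sh"]` in the Dockerfile, so the pod's command is passed through to `exec "$@"`.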

I tried a plain persistent volume, a hostPath-defined volume, and even an initContainer that uses the same image as the app: it first sets the permissions on db.sqlite3 to 777 and runs the two commands above, but for some reason it can’t even start.

Here’s the deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
  namespace: app-prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      securityContext:
        runAsUser: 0
        runAsGroup: 0
        fsGroup: 0
        fsGroupChangePolicy: "OnRootMismatch"
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                - key: role
                  operator: In
                  values:
                  - on-demand-worker
      terminationGracePeriodSeconds: 30
      containers:
      - name: notation
        image: myimage
        imagePullPolicy: Always
        command: ["bash", "-c", "python ./manage.py runserver 0.0.0.0:8000"]
        ports:
        - containerPort: 8000
          name: http
        volumeMounts:
          - name: myapp-data
            mountPath: /app/db.sqlite3
            subPath: db.sqlite3
        securityContext:
          privileged: true
        # securityContext:
        #   allowPrivilegeEscalation: false
      initContainers:
      - name: sqlite-data-permission-fix
        image: myimage
        command: ["bash","-c","chmod -R 777 /app && python ./manage.py migrate && envdir ./notation/.devenv python manage.py collectstatic"]
        volumeMounts:
          - name: myapp-data
            mountPath: /app/db.sqlite3
            subPath: db.sqlite3
        resources: {}

      volumes:
        - name: myapp-data
          persistentVolumeClaim:
            claimName: notation

The permissions I see on the host look correct (777, as I wanted them), so I really don’t know what the problem is. Any help would be appreciated.

Source: Docker Questions
