I took a Udemy course on Docker and Kubernetes. While it was an excellent course, one area I felt it didn't clarify enough was databases in development.
In production the guidance was pretty clear and consistent with other articles I've read: do not use Docker containers for databases in production. They are entirely too easy to destroy, and with them you lose data.
However, the course showed using them in development, so I went back and forth with the instructor to get a better understanding.
I just felt like: why use Docker at all for databases? Why not just have a central test database on the network that developers connect to when they work on the app?
His point for not doing that was valid:
- Oftentimes engineers are working on the same feature, so they'd be competing for the same tables and possibly overwriting another engineer's changes. Each developer should have a local database to work with, and any changes to the database(s) should come from schema migrations.
Makes sense. But still, why use Docker locally for the database? Why not just install the database on the local computer?
His point, again, was pretty valid:
- You want the developer to develop, not spend time installing and configuring a database to run with the application. They should be able to just sit down, clone the repo, run `docker-compose up`, and be on their way.
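As a sketch of what that "clone and `docker-compose up`" workflow might look like, here is a minimal Compose file that defines the app's database as a service. The service name, image tag, and credentials below are placeholders I've made up, not anything from the course:

```yaml
# docker-compose.yml -- minimal sketch; names and credentials are placeholders.
services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: app        # hypothetical dev credentials
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app_dev
    ports:
      - "5432:5432"             # expose Postgres to the host for local tools
    volumes:
      - db_data:/var/lib/postgresql/data   # named volume so data survives restarts

volumes:
  db_data:
```

With something like this checked into the repo, the database needs no manual installation; `docker-compose up` starts it alongside the app.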
All great points.
I guess what I don't understand is how to set that up, particularly related to test data to work with, so I have some questions:
- Is it just an Alpine-based image that is downloaded, with a SQL dump imported into it, and that's what the developer works with?
- Or, does the database image on Docker Hub already have the data to work with?
- Or, is there a local copy of the database files (`/var/lib/pgsql/data`) with the image's volume mapped to those local files?
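On the test-data question, one common pattern (offered as a sketch, not necessarily what the course intended) uses the official `postgres` image's init hook: on first start against an empty data directory, it executes any `*.sql` or `*.sh` files found in `/docker-entrypoint-initdb.d`. The file paths below are hypothetical:

```yaml
# Sketch: seeding a dev database via the postgres image's init hook.
# A SQL dump committed to the repo (path is a made-up example) is
# mounted read-only into the init directory and runs once, on the
# first initialization of the empty data volume.
services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: app    # placeholder credential
    volumes:
      - ./seed/dump.sql:/docker-entrypoint-initdb.d/dump.sql:ro
```

This answers the "where does the data come from" question without baking data into the image: the image stays generic, and the seed data lives in the repo alongside the schema migrations.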
Lastly, should k8s be used during development or just