Differences between Docker and Kubernetes
In this blog, I will assume that you already have a working Kubernetes setup. Whether that is MiniKube, K3s, a complete local Kubernetes setup or a cloud provider like AWS, GCP or Azure doesn’t matter. Personally, I use our local Kubernetes environment. Kubernetes has a few advantages over Docker, including the self-healing bit. To make the differences between Docker and Kubernetes clear, I borrowed a nice image from the Internet.
The basics
Let’s start at the beginning. In part one I will show a setup based on the standard PostgreSQL image; in part two (in two weeks) I will use a Kubernetes database operator (CloudNativePG). But first: plain PostgreSQL.
First you create a YAML file that defines a few things for the PostgreSQL pod / container. What is needed, and what is it for?
Secret
A secret is a clean way to store data such as passwords. In the case of PostgreSQL, you specify a user and password to use to connect to the database. Secrets can be used for multiple pods and contain only a small amount of data. For all the ins and outs about secrets, visit the Kubernetes page on secrets.
apiVersion: v1
kind: Secret
metadata:
  name: postgres-env
type: Opaque
stringData:
  POSTGRES_USER: example
  POSTGRES_PASSWORD: verysecret
---
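Once the file has been applied (which we do further down), you can check what was stored. Keep in mind that a Secret is only base64-encoded, not encrypted. A quick way to decode the password again, using the names from the YAML above and the namespace we create later:
kubectl get secret postgres-env -n postgresdeploy -o jsonpath='{.data.POSTGRES_PASSWORD}' | base64 -d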
PersistentVolumeClaim
A PersistentVolumeClaim is a claim on a piece of storage to use. It is attached to a pod so that data can be stored there. The advantage is that if a pod crashes, the data is not lost but carried over to the new pod. You can find more info about PVCs and all the available options on the Kubernetes website.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
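Note that no storageClassName is set in this claim, so the cluster’s default StorageClass will be used. Which one that is depends on your setup (MiniKube, K3s and the cloud providers each bring their own); you can list the available classes with:
kubectl get storageclass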
Deployment
The Deployment takes care of creating and managing the pods within it. When you scale the Deployment from one to two pods, the Deployment causes the kube-scheduler to create the second pod. If you modify something in the Deployment, the pods are replaced one by one with the desired modifications. In the Deployment you also specify which volumes (claims) are to be used. If desired, you can also define other resources, such as CPU and RAM usage. More info can be found on the Kubernetes website.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: postgres
  name: postgresql
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - envFrom:
            - secretRef:
                name: postgres-env
          image: docker.io/postgres:14
          name: postgresql
          ports:
            - containerPort: 5432
              name: postgresql
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: postgres-data
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgres-pv-claim
In my case, it ends up looking like this:
Note the “---” separators as I used them in the screenshot above. Don’t forget to add them!
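As mentioned in the Deployment description, you can optionally also set CPU and RAM requests and limits per container. A minimal sketch of what that could look like (the values are purely illustrative, pick ones that fit your workload); it goes inside the container definition, at the same level as the image: and ports: entries:
resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: "1"
    memory: 1Gi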
Best practice
To keep everything clean and organized, create a separate namespace where you deploy this container. Since I don’t have a separate namespace for this blog yet, I’ll create it first:
kubectl create ns postgresdeploy
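Want to confirm that the namespace is really there? Then run:
kubectl get ns postgresdeploy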
The namespace has been successfully created. Now it is time to apply the YAML file so that the Secret, PersistentVolumeClaim and Deployment are created:
kubectl apply -f postgresdeploy.yaml -n postgresdeploy
No error message? Then you did it right. If you do get an error, it is usually due to a typo or incorrect indentation; YAML is very sensitive to the alignment of the various components in the file.
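If you would rather catch such mistakes before anything is actually created, a client-side dry run can help; it parses the file and reports most YAML and schema errors without touching the cluster:
kubectl apply -f postgresdeploy.yaml -n postgresdeploy --dry-run=client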
Check
Give the process a few minutes to create and start everything. After that you can check whether everything has been created and the container is ready and has the Running status.
kubectl get all -n postgresdeploy
What is noticeable is that the PVC is missing here. That is expected: kubectl get all only shows a fixed subset of resource types, and PersistentVolumeClaims are not among them. Want to check whether the PVC has been created? Then run the following command:
kubectl get pvc -n postgresdeploy
Connecting
Now you are going to connect to the container:
kubectl exec -it postgresql-d684bfd65-5cjzm -n postgresdeploy -- /bin/bash
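The pod name (postgresql-d684bfd65-5cjzm in my case) is generated by the ReplicaSet, so yours will be different. You can look it up with:
kubectl get pods -n postgresdeploy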
You will then get the bash prompt. Now enter:
su - postgres
This makes you the postgres user, which exists in the postgres image by default. Next, type the following:
psql -U example
And voilà, you are in psql. There is no need to enter a password this way; connecting externally does ask for the password. I then use \l+ to check that I am getting data back and everything is responding as it should.
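For the external connection mentioned above (the one that does ask for the password), one simple option during testing is to forward the PostgreSQL port to your own machine; a sketch using the names and port from the YAML above, with a Service being the more permanent solution:
kubectl port-forward deployment/postgresql 5432:5432 -n postgresdeploy
psql -h 127.0.0.1 -p 5432 -U example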