
How to run Postgres on Docker part 3

In a previous post I deployed a PostgreSQL database in a container from a standard image. Now I’m going to look at deploying a PostgreSQL cluster. Well, kind of. PostgreSQL uses the term cluster to refer to “a collection of databases managed by a single PostgreSQL server instance”. I’m using it in the more general sense, to refer to High Availability.

Craig Healey

DBA Consultant and Database Reliability Engineer

Basic replication using a pre-built project

PostgreSQL has had streaming replication since version 9.0, where the write-ahead log (WAL) is shipped from the Master database to a read-only Slave (see the SeveralNines blog for more details). It’s fairly easy to set up, so what can Docker add? Well, you can use the docker-compose command, together with a docker-compose.yml file, to control a number of Docker containers, so the Master and Slave databases are treated as one unit.
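To give a feel for what docker-compose is managing, here is a minimal sketch of a docker-compose.yml for a Master/Slave pair. The service names, build contexts and port mapping are illustrative assumptions, not the exact contents of the project we’ll clone below:

version: "3"

services:
  pg_master:
    build: ./master          # assumed build context for the Master image
    ports:
      - "5432:5432"          # expose the Master on the host (illustrative)
    networks:
      - bridge-docker

  pg_slave:
    build: ./slave           # assumed build context for the Slave image
    depends_on:
      - pg_master            # start the Master first
    networks:
      - bridge-docker

networks:
  bridge-docker:
    external: true           # the network we create manually below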

I’m going to base this on Hamed Momeni’s Medium post, but with a few changes.

First of all, we need to create a Docker network so our containers can communicate with each other:

docker network create bridge-docker
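You can confirm the network was created, and later see which containers have joined it, with:

docker network inspect bridge-docker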

Next, pull down the files and directories from BitBucket:

git clone https://bitbucket.org/CraigOptimaData/docker-pg-cluster.git

cd into the newly created directory named docker-pg-cluster.

cd docker-pg-cluster

Then you simply have to run the docker-compose up command:

docker-compose up
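Alternatively, if you’d rather get your terminal back, you can start everything in the background and follow the logs separately:

docker-compose up -d
docker-compose ps
docker-compose logs -f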

You’ll need to wait a while as it has a few things to do whilst building the master and slave images. You’ll see a couple of red lines warning that it has to delay package configuration, but that is normal. Eventually it will finish building the images and actually run them. At this point it should display the logs from pg_master_1 and pg_slave_1 as they are initialized. If everything has gone well, the logs will stop scrolling up the screen.
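Once both containers are up, you can check that the Slave has actually connected to the Master by querying pg_stat_replication inside the Master container. The container name below is a guess based on docker-compose’s default naming (project directory plus service name), and the postgres superuser is assumed; use docker-compose ps to find the real name on your machine:

docker exec -it docker-pg-cluster_pg_master_1 psql -U postgres -c "SELECT client_addr, state FROM pg_stat_replication;"

This should show a row with state 'streaming' if the WAL is flowing from Master to Slave.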