Configure PostgreSQL¶
Specify the version¶
Set the version in the server’s Pillar file:
postgres:
  version: 15
Enable public access¶
By default, PostgreSQL only allows local connections (see the template for the pg_hba.conf configuration file).
To enable public access, update the server’s Pillar file:
postgres:
  version: 15
  public_access: True
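For reference, enabling public access typically results in a pg_hba.conf entry along these lines. This is an illustrative sketch only; the actual rendered template in salt/postgres may restrict addresses, databases, users or the authentication method differently:
# Illustrative pg_hba.conf entries for remote, password-authenticated connections.
hostssl    all    all    0.0.0.0/0    scram-sha-256
hostssl    all    all    ::/0         scram-sha-256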
Add users, groups, databases and schemas¶
Each service should have a service account. To configure the database for an application:
Add a user for the application, in a private Pillar file, replacing PASSWORD with a strong password (uncheck Symbols) and USERNAME with a recognizable username:
postgres:
  users:
    USERNAME:
      password: "PASSWORD"
Create the database for the application, revoke all schema privileges from the public role, and grant all schema privileges to the new user. Replace DATABASE and USERNAME:
postgres:
  users:
    USERNAME:
      password: "PASSWORD"
  databases:
    DATABASE:
      user: USERNAME
Create a schema, if needed by the application. Replace SCHEMA and OWNER, and change TYPE to user or group:
postgres:
  users:
    USERNAME:
      password: "PASSWORD"
  databases:
    DATABASE:
      user: USERNAME
      schemas:
        SCHEMA:
          name: OWNER
          type: TYPE
Note
If the owner needs to be a group, create the group, replacing NAME:
postgres:
  groups:
    - NAME
If another application needs read-only access to the database, create a group and its privileges, replacing APPLICATION and SCHEMA:
postgres:
  groups:
    - APPLICATION_read
  users:
    USERNAME:
      password: "PASSWORD"
  databases:
    DATABASE:
      user: USERNAME
      privileges:
        SCHEMA:
          APPLICATION_read:
Note
In most cases, the SCHEMA is public, and the DATABASE, APPLICATION and USERNAME are all the same.
Add the private Pillar file to the top file entry for the application.
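If you want to confirm the result after deploying, you can inspect the cluster with psql. This is an optional sanity check, not part of the documented workflow; replace the placeholders as above:
# List roles, databases, and the schemas of a given database.
sudo -u postgres psql -c '\du'
sudo -u postgres psql -c '\l'
sudo -u postgres psql --dbname DATABASE -c '\dn+'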
Configure PostgreSQL¶
Note
Even if you don’t need to configure PostgreSQL, you must still set the following, in order for its SLS file to be automatically included:
postgres:
  configuration: False
Put your configuration template in the salt/postgres/files/conf directory. In most cases, you should use the shared configuration template.
Set postgres.configuration in the server’s Pillar file:
postgres:
  configuration:
    name: kingfisher-main1
    source: shared
    context:
      mykey: myvalue
The keys of the context mapping are made available as variables in the configuration template (see the sketch after the example below).
If you use the shared configuration template, under the context mapping:
- If you need more or fewer than 100 connections, set max_connections (default: 100).
- Set storage to either ssd (solid-state drive, default) or hdd (hard disk drive).
- Set type to either oltp (online transaction processing, default) or dw (data warehouse).
- Set content to add content to the configuration file.
postgres:
  configuration:
    name: registry
    source: shared
    context:
      max_connections: 300
      storage: hdd
      type: oltp
      content: |
        max_wal_size = 10GB
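As a minimal sketch of how a context key surfaces in a configuration template, a line like the following would render the max_connections value passed through the context mapping above, assuming the template is rendered with Salt’s default Jinja renderer (the actual shared template may differ):
# Illustrative line in a configuration template.
max_connections = {{ max_connections }}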
Set vm.nr_hugepages in the server’s Pillar file, following PostgreSQL’s instructions:
vm:
  nr_hugepages: 1234
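To estimate the value, PostgreSQL 15 and later can report the number of huge pages the cluster needs. A minimal sketch, assuming a Debian-style layout and that the cluster is stopped (the paths and version here are illustrative):
# Reports the number of huge pages required by the stopped cluster;
# use the result (plus any headroom you want) as vm.nr_hugepages.
sudo -u postgres /usr/lib/postgresql/15/bin/postgres -D /etc/postgresql/15/main \
  -C shared_memory_size_in_huge_pages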
The configuration file will be in the /etc/postgresql/11/main/conf.d/ directory on the server (for PostgreSQL version 11).
Use CA certificates¶
Note
Only do this if a third-party service requires CA certificates.
Set the hostname for PostgreSQL:
postgres:
  ssl:
    servername: postgres.kingfisher.open-contracting.org
Configure the mod_md Apache module to copy the SSL certificates to PostgreSQL’s directory:
apache:
  public_access: True
  modules:
    mod_md:
      MDMessageCmd: /opt/postgresql-certificates.sh
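For context, mod_md invokes the MDMessageCmd program with the event reason and the managed domain as arguments. The following is a hypothetical sketch of such a hook, not the repository’s actual /opt/postgresql-certificates.sh; the certificate store and destination paths are assumptions:
#!/bin/bash
# Hypothetical MDMessageCmd hook: copy renewed certificates to PostgreSQL.
# mod_md calls this as: <command> <reason> <domain>
set -euo pipefail
if [ "$1" = "renewed" ]; then
  domain=$2
  install -o postgres -g postgres -m 0600 \
    "/etc/apache2/md/domains/$domain/privkey.pem" /var/lib/postgresql/server.key
  install -o postgres -g postgres -m 0644 \
    "/etc/apache2/md/domains/$domain/pubcert.pem" /var/lib/postgresql/server.crt
  systemctl reload postgresql
fi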
Set up full backups¶
pgBackRest is used to create and manage offsite backups.
Create and configure an S3 backup bucket
Create a *.conf configuration file in the salt/postgres/files/pgbackrest/ directory. In most cases, you should use the shared configuration.
Install and configure pgBackRest. Add to the server’s Pillar file, for example:
postgres:
  configuration:
    ...
    context:
      content: |
        ### pgBackRest
        # https://pgbackrest.org/user-guide.html#quickstart/configure-archiving
        # https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-WAL-LEVEL
        wal_level = logical
        # https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-ARCHIVE-MODE
        archive_mode = on
        # https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-ARCHIVE-COMMAND
        # https://pgbackrest.org/user-guide.html#async-archiving/async-archive-push
        archive_command = 'pgbackrest --stanza=kingfisher-2023 archive-push %p'
        # https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-MAX-WAL-SENDERS
        max_wal_senders = 4
  backup:
    type: pgbackrest
    configuration: shared
    # The rest are specific to your configuration file.
    stanza: kingfisher
    retention_full: 4
    repo_path: /kingfisher
    process_max: 4
    cron: |
      MAILTO=root
      # Daily incremental backup
      15 05 * * 0-2,4-6 postgres pgbackrest backup --stanza=kingfisher-2023
      # Weekly full backup
      15 05 * * 3 postgres pgbackrest backup --stanza=kingfisher-2023 --type=full 2>&1 | grep -v "unable to remove file.*We encountered an internal error\. Please try again\.\|expire command encountered 1 error.s., check the log file for details"
Note
max_wal_senders is set to 4, because pgBackRest and annotated.conf recommend a value of twice the number of potential future replicas. This value counts towards max_connections.
Note
The grep -v command means root receives mail if there is more than 1 error. To check whether the error message in the grep command is up-to-date, compare it against the message in pgBackRest’s source:
expire command encountered %u error(s), check the log file for details
“We encountered an internal error. Please try again.” is from AWS.
Add, in a private Pillar file:
postgres:
  backup:
    s3_bucket: ocp-db-backup
    s3_region: eu-central-1
    s3_endpoint: s3.eu-west-1.amazonaws.com
    s3_key: ...
    s3_key_secret: ...
Create the stanza, if it doesn’t exist yet:
sudo -u postgres pgbackrest stanza-create --stanza=example
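Optionally, you can then verify that archiving and the repository are configured correctly (the stanza name mirrors the example above):
sudo -u postgres pgbackrest check --stanza=example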
Configure the replica, if any¶
SSH into the main server as the postgres user.
Generate an SSH key pair, if one doesn’t already exist:
ssh-keygen -t rsa -b 4096
Add the public SSH key to the ssh.postgres list in the replica server’s Pillar file:
ssh:
  postgres:
    - ssh-rsa AAAB3N...
Set postgres.ssh_key in the main server’s private Pillar file to the private SSH key:
postgres:
  ssh_key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
Set up database-specific backups¶
Note
Only use database-specific backups if full backups would back up many GBs of unwanted data.
Create and configure an S3 backup bucket
Configure the AWS CLI
In the server’s Pillar file, set postgres.backup.location to a bucket and prefix, postgres.backup.databases to a list of databases, and postgres.backup.type to “script”, for example:
postgres:
  backup:
    type: script
    location: ocp-registry-backup/database
    databases:
      - spoonbill_web
      - pelican_frontend
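For context only, a database-specific backup of this kind typically amounts to dumping each listed database and uploading it to the configured location. This is a hypothetical sketch, not the script installed by the Salt state:
#!/bin/bash
# Hypothetical illustration of a per-database backup to S3.
set -euo pipefail
for database in spoonbill_web pelican_frontend; do
  sudo -u postgres pg_dump --format=custom "$database" \
    | aws s3 cp - "s3://ocp-registry-backup/database/$database.dump"
done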
Set up replication¶
To configure a main server and a replica server:
Create configuration files for each server. Example: kingfisher-main1, kingfisher-replica1
Add the replica’s IP addresses to the main server’s Pillar file:
postgres:
  replica_ipv4:
    - 148.251.183.230
  replica_ipv6:
    - 2a01:4f8:211:de::2
Add the replica user to the main server’s private Pillar file:
postgres:
  users:
    replica:
      password: example_password
      replication: True
You will also need to pass this user to the replica server. This is used to populate the postgresql.conf file via pgBackRest:
postgres:
  replication:
    username: replica
    password: example_password
    primary_slot_name: replica1
Note
If the replica user’s password is changed, you must manually update the /var/lib/postgresql/11/main/postgresql.conf file on the replica server (for PostgreSQL version 11).
Deploy both servers
Connect to the main server as the root user, and create a replication slot, replacing SLOT with the value of postgres:replication:primary_slot_name:
su - postgres
psql -c "SELECT * FROM pg_create_physical_replication_slot('SLOT');"
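If you want to confirm that the slot was created (an optional check), still as the postgres user:
psql -c "SELECT slot_name, active FROM pg_replication_slots;"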
Transfer data and start replication.
Connect to the replica server as the root user.
Stop the PostgreSQL service and delete the main cluster’s data.
systemctl stop postgresql
rm -rf /var/lib/postgresql/11/main
Switch to the postgres user and transfer PostgreSQL data.
su - postgres
mkdir /var/lib/postgresql/11/main
pgbackrest --stanza=example --type=standby restore
Switch to the root user and start the PostgreSQL service.
exit
systemctl start postgresql
Double-check that the service started:
pg_lsclusters
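To confirm that replication is running, you can query both servers (optional checks, run as the postgres user):
# On the replica: returns "t" while the server is in standby mode.
sudo -u postgres psql -c "SELECT pg_is_in_recovery();"
# On the main server: lists the replica with state "streaming".
sudo -u postgres psql -c "SELECT client_addr, state FROM pg_stat_replication;"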