Configure PostgreSQL

Specify the version

Set the version in the server’s Pillar file:

postgres:
  version: 15

Enable public access

By default, PostgreSQL only allows local connections (see the template for the pg_hba.conf configuration file).

To enable public access, update the server’s Pillar file:

postgres:
  version: 15
  public_access: True

Add users, groups, databases and schemas

See also

Control access

Each service should have a service account. To configure the database for an application:

  1. Add a user for the application, in a private Pillar file, replacing PASSWORD with a strong password (without symbols; if using a password generator, uncheck Symbols) and USERNAME with a recognizable username:

    postgres:
      users:
        USERNAME:
          password: "PASSWORD"
    
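    If you need to generate such a password on the command line, one option among many (a sketch, not a requirement) is:

    ```shell
    # Generate a 24-character alphanumeric password (letters and digits only,
    # no symbols), suitable for pasting into the Pillar file.
    tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24; echo
    ```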
  2. Create the database for the application, revoke all schema privileges from the public role, and grant all schema privileges to the new user. Replace DATABASE and USERNAME:

    postgres:
      users:
        USERNAME:
          password: "PASSWORD"
      databases:
        DATABASE:
          user: USERNAME
    
  3. Create a schema, if needed by the application. Replace SCHEMA and OWNER, and change TYPE to user or group:

    postgres:
      users:
        USERNAME:
          password: "PASSWORD"
      databases:
        DATABASE:
          user: USERNAME
          schemas:
            SCHEMA:
              name: OWNER
              type: TYPE
    

    Note

    If the owner needs to be a group, create the group, replacing NAME:

    postgres:
      groups:
        - NAME
    
  4. If another application needs read-only access to the database, create a group and its privileges, replacing APPLICATION and SCHEMA:

    postgres:
      groups:
        - APPLICATION_read
      users:
        USERNAME:
          password: "PASSWORD"
      databases:
        DATABASE:
          user: USERNAME
          privileges:
            SCHEMA:
              APPLICATION_read:
    

    Note

    In most cases, the SCHEMA is public, and the DATABASE, APPLICATION and USERNAME are all the same.

  5. Add the private Pillar file to the top file entry for the application.
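Putting the steps above together, a private Pillar file for a hypothetical application named registry, with a read-only group, might look like (all names are illustrative):

```yaml
postgres:
  groups:
    - registry_read
  users:
    registry:
      password: "PASSWORD"
  databases:
    registry:
      user: registry
      privileges:
        public:
          registry_read:
```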

Configure PostgreSQL

Note

Even if you don’t need to configure PostgreSQL, you must still set the following, in order for its SLS file to be automatically included:

postgres:
  configuration: False

  1. Put your configuration template in the salt/postgres/files/conf directory. In most cases, you should use the shared configuration template.

  2. Set postgres.configuration in the server’s Pillar file:

    postgres:
      configuration:
        name: kingfisher-main1
        source: shared
        context:
          mykey: myvalue
    

    The keys of the context mapping are made available as variables in the configuration template.
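    For example, assuming the template is rendered with Jinja (the usual engine for Salt-managed files), a custom template could interpolate the mykey value from the example above as follows (an illustrative fragment, not part of the shared template):

    ```jinja
    # `mykey` is available here because it appears under the context mapping.
    some_setting = {{ mykey }}
    ```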

  3. If you use the shared configuration template, under the context mapping:

    • If you need more or fewer than 100 connections, set max_connections (100, default).

    • Set storage to either ssd (solid-state drive, default) or hdd (hard disk drive).

    • Set type to either oltp (online transaction processing, default) or dw (data warehouse).

    • Set content to add content to the configuration file.

    postgres:
      configuration:
        name: registry
        source: shared
        context:
          max_connections: 300
          storage: hdd
          type: oltp
          content: |
            max_wal_size = 10GB
    
  4. Set vm.nr_hugepages in the server’s Pillar file, following PostgreSQL’s instructions:

    vm:
      nr_hugepages: 1234
    
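    PostgreSQL's documentation derives this value from the cluster's shared memory requirement divided by the kernel's huge page size (Hugepagesize in /proc/meminfo, typically 2048 kB), rounded up. A sketch of the arithmetic, with illustrative numbers:

    ```shell
    # Illustrative arithmetic only: a cluster needing ~8.6 GB of shared memory
    # with 2 MB huge pages. Round up to the nearest whole page.
    shared_kb=$((8600 * 1024))   # shared memory requirement, in kB (example value)
    hugepage_kb=2048             # Hugepagesize from /proc/meminfo, in kB
    echo $(( (shared_kb + hugepage_kb - 1) / hugepage_kb ))  # → 4300
    ```

    On PostgreSQL 15 and later, running postgres -C shared_memory_size_in_huge_pages against the cluster reports the number directly.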
  5. Deploy the server

The configuration file will be in the /etc/postgresql/11/main/conf.d/ directory on the server (for PostgreSQL version 11).

Use CA certificates

Note

Only do this if a third-party service requires CA certificates.

  1. Set the hostname for PostgreSQL:

    postgres:
      ssl:
        servername: postgres.kingfisher.open-contracting.org
    
  2. Configure the mod_md Apache module to copy the SSL certificates to PostgreSQL’s directory:

    apache:
      public_access: True
      modules:
        mod_md:
          MDMessageCmd: /opt/postgresql-certificates.sh
    
  3. Acquire the SSL certificates.

Set up full backups

pgBackRest is used to create and manage offsite backups.

  1. Create and configure an S3 backup bucket

  2. Create an IAM backup policy and user

  3. Create a *.conf configuration file in the salt/postgres/files/pgbackrest/ directory. In most cases, you should use the shared configuration.

  4. Install and configure pgBackRest. Add to the server’s Pillar file, for example:

    postgres:
      configuration:
        ...
        context:
          content: |
            ### pgBackRest
            # https://pgbackrest.org/user-guide.html#quickstart/configure-archiving
    
            # https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-WAL-LEVEL
            wal_level = logical
    
            # https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-ARCHIVE-MODE
            archive_mode = on
    
            # https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-ARCHIVE-COMMAND
            # https://pgbackrest.org/user-guide.html#async-archiving/async-archive-push
            archive_command = 'pgbackrest --stanza=kingfisher-2023 archive-push %p'
    
            # https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-MAX-WAL-SENDERS
            max_wal_senders = 4
      backup:
        type: pgbackrest
        configuration: shared
        # The rest are specific to your configuration file.
        stanza: kingfisher-2023
        retention_full: 4
        repo_path: /kingfisher
        process_max: 4
        cron: |
            MAILTO=root
            # Daily incremental backup
            15 05 * * 0-2,4-6 postgres pgbackrest backup --stanza=kingfisher-2023
            # Weekly full backup
            15 05 * * 3 postgres pgbackrest backup --stanza=kingfisher-2023 --type=full 2>&1 | grep -v "unable to remove file.*We encountered an internal error\. Please try again\.\|expire command encountered 1 error.s., check the log file for details"
    

    Note

    max_wal_senders is set to 4, because pgBackRest and annotated.conf recommend a value of twice the number of potential future replicas. In PostgreSQL 11 and earlier, this value also counts towards max_connections.

    Note

    The grep -v command means root receives mail only if there is more than 1 error. Check from time to time that the error message matched by the grep command still corresponds to pgBackRest's actual output.
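    One way to sanity-check the pattern itself is to run it against a sample error line (the sample below is illustrative, modeled on the message the cron entry filters):

    ```shell
    # If the pattern still matches, grep -v suppresses the line, grep exits
    # non-zero, and "filtered" is printed.
    pattern='unable to remove file.*We encountered an internal error\. Please try again\.\|expire command encountered 1 error.s., check the log file for details'
    printf '%s\n' 'ERROR: [xxx]: expire command encountered 1 error(s), check the log file for details' \
      | grep -v "$pattern" || echo filtered
    ```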

  5. Add, in a private Pillar file:

    postgres:
      backup:
        s3_bucket: ocp-db-backup
        s3_region: eu-central-1
        s3_endpoint: s3.eu-west-1.amazonaws.com
        s3_key: ...
        s3_key_secret: ...
    
  6. Create the stanza, if it doesn’t exist yet:

    sudo -u postgres pgbackrest stanza-create --stanza=example
    

Configure the replica, if any

  1. SSH into the main server as the postgres user.

  2. Generate an SSH key pair, if one doesn’t already exist:

    ssh-keygen -t rsa -b 4096
    
  3. Add the public SSH key to the ssh.postgres list in the replica server’s Pillar file:

    ssh:
      postgres:
        - ssh-rsa AAAAB3N...
    
  4. Set postgres.ssh_key in the main server’s private Pillar file to the private SSH key:

    postgres:
      ssh_key: |
        -----BEGIN RSA PRIVATE KEY-----
        ...
    
  5. Deploy the main server and replica server

Set up database-specific backups

Note

Only use database-specific backups if full backups would back up many gigabytes of unwanted data.

  1. Create and configure an S3 backup bucket

  2. Configure the AWS CLI

  3. In the server’s Pillar file, set postgres.backup.location to a bucket and prefix, postgres.backup.databases to a list of databases, and postgres.backup.type to “script”, for example:

    postgres:
      backup:
        type: script
        location: ocp-registry-backup/database
        databases:
          - spoonbill_web
          - pelican_frontend
    
  4. Deploy the server

Set up replication

To configure a main server and a replica server:

  1. Create configuration files for each server. Example: kingfisher-main1, kingfisher-replica1

  2. Add the replica’s IP addresses to the main server’s Pillar file:

    postgres:
      replica_ipv4:
        - 148.251.183.230
      replica_ipv6:
        - 2a01:4f8:211:de::2
    
  3. Add the replica user to the main server’s private Pillar file:

    postgres:
      users:
        replica:
          password: example_password
          replication: True
    

    Pass the same credentials to the replica server, in its private Pillar file. These are used to populate the postgresql.conf file via pgBackRest:

    postgres:
      replication:
        username: replica
        password: example_password
        primary_slot_name: replica1
    

    Note

    If the replica user’s password is changed, you must manually update the /var/lib/postgresql/11/main/postgresql.conf file on the replica server (for PostgreSQL version 11).

  4. Deploy both servers

  5. Connect to the main server as the root user, and create a replication slot, replacing SLOT with the value of postgres:replication:primary_slot_name:

    su - postgres
    psql -c "SELECT * FROM pg_create_physical_replication_slot('SLOT');"
    
  6. Transfer data and start replication.

    1. Connect to the replica server as the root user.

    2. Stop the PostgreSQL service and delete the main cluster’s data.

      systemctl stop postgresql
      rm -rf /var/lib/postgresql/11/main
      
    3. Switch to the postgres user and transfer PostgreSQL data.

      su - postgres
      mkdir /var/lib/postgresql/11/main
      pgbackrest --stanza=example --type=standby restore
      
    4. Switch to the root user and start the PostgreSQL service.

      exit
      systemctl start postgresql
      
    5. Double-check that the service started:

      pg_lsclusters