Configure PostgreSQL#

Specify the version#

Set the version in the server’s Pillar file:

postgres:
  version: 11

Enable public access#

By default, PostgreSQL only allows local connections (see the template for the pg_hba.conf configuration file).

To enable public access, update the server’s Pillar file:

postgres:
  public_access: True
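
For context, enabling public access amounts to the rendered pg_hba.conf allowing remote hosts to authenticate with a password. A minimal sketch of the kind of rule involved (illustrative only; the actual template may use different databases, users or authentication methods):

# TYPE  DATABASE  USER  ADDRESS      METHOD
host    all       all   0.0.0.0/0    md5
host    all       all   ::/0         md5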

Add users, groups and databases#

To configure the database for an application:

  1. Add a user for the application, in a private Pillar file, replacing PASSWORD with a strong password (uncheck Symbols) and USERNAME with a recognizable username:

    postgres:
      users:
        USERNAME:
          password: "PASSWORD"
    
  2. Create the database for the application, revoke all schema privileges from the public role, and grant all schema privileges to the new user. Replace DATABASE and USERNAME:

    postgres:
      users:
        USERNAME:
          password: "PASSWORD"
      databases:
        DATABASE:
          user: USERNAME
    
  3. If another application requires read-only access to the database, create a group and grant its privileges, replacing APPLICATION and SCHEMA:

    postgres:
      groups:
        - APPLICATION_read
      users:
        USERNAME:
          password: "PASSWORD"
      databases:
        DATABASE:
          user: USERNAME
          privileges:
            SCHEMA:
              - APPLICATION_read
    

    Note

    In most cases, the SCHEMA is public, and the DATABASE, APPLICATION and USERNAME are all the same.

  4. Add the private Pillar file to the top file entry for the application.
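
After deploying, you can check the result on the server with psql's meta-commands (a quick verification, assuming shell access as root; replace DATABASE as above):

su - postgres
psql -c '\du'               # list roles and group membership
psql -c '\l'                # list databases and their owners
psql -d DATABASE -c '\dn+'  # list schemas and their access privileges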

Note

To delete a PostgreSQL user, follow these instructions.

Customize the configuration file#

Note

Even if you don’t need to configure PostgreSQL, you must still set the following, in order for its SLS file to be automatically included:

postgres:
  configuration: False

  1. Put your configuration file in the salt/postgres/files/conf directory. To use the base configuration, insert {% include 'postgres/files/conf/shared.include' %} at the top of the file (see the sketch after this list).

  2. Set postgres.configuration in the server’s Pillar file:

    postgres:
      configuration: kingfisher-process1
    
  3. If you use the base configuration:

    • Set storage to either ssd (solid-state drive, default) or hdd (hard disk drive).

    • Set type to either oltp (online transaction processing, default) or dw (data warehouse).

    • If you need more connections, set max_connections.

    postgres:
      configuration: registry
      storage: hdd
      type: oltp
      max_connections: 200
    
  4. Set vm.nr_hugepages in the server’s Pillar file, following PostgreSQL’s instructions (see the sketch after this list):

    vm:
      nr_hugepages: 1234
    
  5. Deploy the service.
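
For step 1, a minimal sketch of such a configuration file, assuming hypothetical setting values (the real files set whatever the server needs):

{% include 'postgres/files/conf/shared.include' %}

# Server-specific overrides (hypothetical values):
work_mem = 64MB
maintenance_work_mem = 512MB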

The configuration file will be in the /etc/postgresql/11/main/conf.d/ directory on the server (for PostgreSQL version 11).
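For step 4, PostgreSQL’s documentation estimates nr_hugepages by dividing the postmaster’s peak memory use by the kernel’s huge page size. A sketch of that procedure (paths assume PostgreSQL 11; the PID and sizes are examples):

head -1 /var/lib/postgresql/11/main/postmaster.pid   # postmaster PID, e.g. 4170
grep ^VmPeak /proc/4170/status                       # e.g. VmPeak: 6490428 kB
grep ^Hugepagesize /proc/meminfo                     # e.g. Hugepagesize: 2048 kB
# 6490428 / 2048 ≈ 3169.2, so set vm.nr_hugepages to at least 3170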

Set up backups#

We use pgBackRest to create and manage offsite backups. Salt will install and configure pgBackRest if postgres:backup is defined in Pillar data.

  1. Create an S3 bucket and API keys.

    Note

    pgBackRest supports any S3-compatible storage, including AWS and Backblaze.

    If you are using AWS, you will need to create an S3 bucket and set up an IAM user.

    You can find an example IAM permissions policy in the pgBackRest documentation.

  2. Create the pgBackRest Pillar configuration:

    postgres:
      backup:
        # The pgbackrest configuration file, loaded from salt/postgres/files/pgbackrest/.
        configuration: kingfisher-process1
        # Unique identifier for the backup configuration.
        stanza: kingfisher
        # Number of concurrent pgbackrest processes (trades CPU usage for backup speed).
        # Optional.
        process_max: 4
        # Backup bucket region.
        s3_region: eu-central-1
        # Backup bucket name.
        s3_bucket: ocp-db-backup
        # S3 endpoint (see https://docs.aws.amazon.com/general/latest/gr/s3.html).
        s3_endpoint: s3.eu-central-1.amazonaws.com
        # API access key.
        s3_key: redacted
        # API secret key.
        s3_key_secret: redacted
        # Number of full backups to store.
        total_full_backups: 4
        # Backup directory structure.
        repo_path: /kingfisher
    

    Note

    Incremental backups are taken daily, storing only the changes since the last full backup. Full backups are taken weekly, currently on Sunday. So if total_full_backups is set to 4, backups are stored for four weeks.

  3. Create the stanza.

    If this backup stanza has already been created, you can skip this step.

    su - postgres
    pgbackrest stanza-create --stanza=example
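
You can then verify the stanza and repository as the postgres user (optional; check and info are standard pgbackrest commands):

pgbackrest --stanza=example check   # verify the configuration and repository access
pgbackrest info                     # list existing backups, if any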
    

Note

For information on using the pgbackrest tool to restore data, see Restore from backup.

Additional steps for replica servers#

When pgbackrest runs, it backs up PostgreSQL data from a replica (standby) server if any are configured. This gives us a backup of production data while reducing load on the main server during the backup.
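
In pgBackRest terms, this corresponds to configuration along these lines (an illustrative sketch; the Salt template generates the real file, and the hostnames and paths here are placeholders):

[example]
pg1-path=/var/lib/postgresql/11/main   # main server's data directory
pg2-host=REPLICA                       # replica, reached over SSH
pg2-path=/var/lib/postgresql/11/main
backup-standby=y                       # prefer backing up from the standby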

Note

You can find the recovery steps here.

  1. Log into the main (replication source) server.

  2. Switch to the postgres user:

    su - postgres
    
  3. Generate new SSH keys (if they do not already exist):

    ssh-keygen -t rsa -b 4096
    

    This creates a public key (~/.ssh/id_rsa.pub) and a private key (~/.ssh/id_rsa).

  4. Add the new keys to the Pillar data:

    1. Add the public key to authorized_keys on the replica server:

      ssh:
        postgres:
          - ssh-rsa AAAB3N...
      
    2. Add the private key to deploy-pillar-private:

      postgres:
        ssh_key: |
          -----BEGIN RSA PRIVATE KEY-----
          ...
      
    3. Deploy the service.
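
To confirm that pgbackrest can reach the replica, you can test the SSH connection manually from the main server (REPLICA is a placeholder for the replica’s hostname or IP address):

su - postgres
ssh postgres@REPLICA hostname   # should print the replica's hostname without prompting for a password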

Set up replication#

To configure a main server and a replica server:

  1. Create configuration files for each server, as above. For reference, see the files for kingfisher-process1 and kingfisher-replica1.

  2. Add the replica’s IP addresses to the main server’s Pillar file:

    postgres:
      replica_ipv4:
        - 148.251.183.230
      replica_ipv6:
        - 2a01:4f8:211:de::2
    
  3. Add the replica user to the main server’s private Pillar file:

    postgres:
      users:
        replica:
          password: example_password
          replication: True
    

    You will also need to pass this user’s credentials to the replica server’s Pillar file; pgbackrest uses them to populate the recovery.conf file:

    postgres:
      replication:
        username: replica
        password: example_password
        primary_slot_name: replica1
    

    Note

    If the replica user’s password is changed, you must manually update the /var/lib/postgresql/11/main/recovery.conf file on the replica server (for PostgreSQL version 11).

  4. Add the postgres.main state file to the main server’s target in the salt/top.sls file.

  5. Deploy both servers.

  6. Connect to the main server as the root user, and create a replication slot, replacing SLOT with the value of postgres:replication:primary_slot_name:

    su - postgres
    psql -c "SELECT * FROM pg_create_physical_replication_slot('SLOT');"
    
  7. Transfer data and start replication.

    1. Connect to the replica server as the root user.

    2. Stop the PostgreSQL service and delete the main cluster’s data.

      systemctl stop postgresql
      rm -rf /var/lib/postgresql/11/main
      
    3. Switch to the postgres user and transfer PostgreSQL data.

      su - postgres
      mkdir /var/lib/postgresql/11/main
      pgbackrest --stanza=example --type=standby restore
      
    4. Switch to the root user and start the PostgreSQL service.

      exit
      systemctl start postgresql
      
    5. Double-check that the service started:

      pg_lsclusters
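
Once the replica has caught up, you can confirm replication from the main server (pg_stat_replication is a standard PostgreSQL view; run as the postgres user):

su - postgres
psql -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"

The state column should read streaming, and client_addr should match the replica’s IP address.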