
Postgres connection pool for Kubernetes

Motivation

If you develop apps using frameworks such as Django or Ruby on Rails, you have probably faced the following error:

FATAL: sorry, too many clients already

As you know, these frameworks use a database connection pool to reduce database connection latency.

That is great as long as your database is optimized to handle a lot of connections.

As you can infer, that is not the case with Postgres.

Each Postgres connection uses about 10 MB of memory, although most of the time connections just sit idle.

With the boom of microservices and gRPC streams, things get worse: we end up with a lot of idle connections, and Postgres spends significant resources just to keep them in an idle state.
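You can see this effect on a running instance by counting backends per state (host and user below are placeholders; adjust them for your setup):

```shell
# Count Postgres backends by state; in a typical app most are "idle".
psql -h <your-db-host> -U <your-db-user> -c \
  "SELECT state, count(*) FROM pg_stat_activity GROUP BY state;"

# Show the server-side connection limit that triggers the FATAL error above.
psql -h <your-db-host> -U <your-db-user> -c "SHOW max_connections;"
```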

PgBouncer to the rescue

There are several options to solve the too-many-connections problem, but all of them use the same pattern: a proxy in the middle.

The idea is that your consumer software connects to a proxy that allows a lot of “cheap” connections, and this proxy connects to the Postgres database only when your app really needs to perform a database action.

One of these solutions is PgBouncer.

It is the oldest solution and the most widely adopted.

PgBouncer in your K8S cluster

Putting PgBouncer in your cluster is easy as pie.

We use the following Docker image/project: edoburu/pgbouncer

To use it, we just need to define ConfigMap and Secret K8S resources with the following Postgres database connection settings:

  • DB_HOST: the K8S service name of your Postgres pod/deployment.
  • DB_USER: the Postgres database user.
  • DB_PASSWORD: the Postgres database user's password (you should store it as a secret).
  • POOL_MODE: a PgBouncer-specific parameter. We use “transaction” mode because we only want to connect to Postgres when we really need it.
  • SERVER_RESET_QUERY: before a connection can be reused by another client, it is important to reset any previous session state. DISCARD ALL is OK for Postgres 8.3+.

Taking the above into account, the manifests can look like the following.

service-catalog-pgbouncer-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: service-catalog-pgbouncer-env
  namespace: test
data:
  DB_HOST: 
  DB_PASSWORD: 
  DB_USER: 
  POOL_MODE: transaction
  SERVER_RESET_QUERY: DISCARD ALL
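
Since the password should not live in a ConfigMap, here is a sketch of moving DB_PASSWORD into a Secret instead (the resource name and value are illustrative; the deployment would then reference it with an additional secretRef entry under envFrom):

```yaml
# Hypothetical Secret holding only the sensitive value;
# keep the non-sensitive keys in the ConfigMap above.
apiVersion: v1
kind: Secret
metadata:
  name: service-catalog-pgbouncer-secret
  namespace: test
type: Opaque
stringData:
  DB_PASSWORD: change-me   # placeholder value
```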

service-catalog-pgbouncer-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-catalog-pgbouncer-deployment
  namespace: test
  labels:
    app: service-catalog-pgbouncer-app
spec:
  selector:
    matchLabels:
      app: service-catalog-pgbouncer-app
  template:
    metadata:
      labels:
        app: service-catalog-pgbouncer-app
    spec:
      containers:
        - image: edoburu/pgbouncer:1.9.0
          name: service-catalog-pgbouncer-pod
          ports:
            - containerPort: 5432
              name: pgbouncer-p
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - all
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - killall -INT pgbouncer && sleep 120
          envFrom:
            - configMapRef:
                name: service-catalog-pgbouncer-env

service-catalog-pgbouncer-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: service-catalog-pgbouncer-service
  namespace: test
spec:
  type: ClusterIP
  selector:
    app: service-catalog-pgbouncer-app
  ports:
  - name: pgbouncer
    port: 5432
    targetPort: pgbouncer-p

We can apply the manifests using the standard CLI command (remember to create the ConfigMap and/or Secrets first!)

$ kubectl apply -f pgbouncer-deployment.yaml
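
After applying, you can check that the pod came up correctly (the label and namespace are the ones defined in the manifests above):

```shell
# List the PgBouncer pods and tail their logs.
kubectl get pods -n test -l app=service-catalog-pgbouncer-app
kubectl logs -n test deploy/service-catalog-pgbouncer-deployment
```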

Now, you just need to create the Service to consume it.

We apply the service manifest in the same way:

$ kubectl apply -f pgbouncer-service.yaml

And that’s all!
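
To point an app at the pool instead of at Postgres directly, use the Service's cluster DNS name as the database host. For example, a quick connectivity check from any pod in the cluster (user and database names are placeholders):

```shell
# Connect through PgBouncer via the Service DNS name;
# credentials are placeholders -- adjust for your setup.
psql -h service-catalog-pgbouncer-service.test.svc.cluster.local \
     -p 5432 -U <db-user> <db-name>
```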