Here’s a setup I’ve used.
pgbouncer runs on the same machine as gunicorn, celery, etc.
pgbouncer.ini:
[databases]
<dbname> = host=<dbhost> port=<dbport> dbname=<dbname>
[pgbouncer]
; your app will need filesystem permission to access this unix socket
unix_socket_dir = /var/run/postgresql
; you'll need to configure this file with username/password pairs you plan on
; connecting with.
auth_file = /etc/pgbouncer/userlist.txt
; "session" resulted in atrocious performance for us. I think
; "statement" prevents transactions from working.
pool_mode = transaction
; you'll probably want to change default_pool_size. take the max number of
; connections for your postgresql server, and divide that by the number of
; pgbouncer instances that will be connecting to it, then subtract a few
; connections so you can still connect to PG as an admin if something goes wrong
; (see the worked example after this file).
; you may then need to adjust min_pool_size and reserve_pool_size accordingly.
default_pool_size = 50
min_pool_size = 10
reserve_pool_size = 10
reserve_pool_timeout = 2
; I was using gunicorn + eventlet, which is why this is so high. It
; needs to be high enough to accommodate all the persistent connections we're
; going to allow from Django & other apps.
max_client_conn = 1000
...
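To make the pool-size arithmetic above concrete, here's a quick worked example (every number is made up; plug in your server's actual max_connections and instance count):

# Illustrative sizing for default_pool_size; all numbers are hypothetical.
pg_max_connections = 200   # max_connections on the PostgreSQL server
pgbouncer_instances = 3    # one pgbouncer per app machine in this layout
admin_headroom = 5         # spare connections for emergency admin access

default_pool_size = pg_max_connections // pgbouncer_instances - admin_headroom
print(default_pool_size)   # 200 // 3 - 5 = 61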
/etc/pgbouncer/userlist.txt:
"<dbuser>" "<dbpassword>"
Django settings.py:
...
DATABASES = {
'default': {
'ENGINE': 'django.contrib.gis.db.backends.postgresql_psycopg2',
'NAME': '<dbname>',
'USER': '<dbuser>',
'PASSWORD': '<dbpassword>',
'HOST': '/var/run/postgresql',
'PORT': '',
'CONN_MAX_AGE': None,  # None = persistent connections with no age limit
}
}
...
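With that in place, a quick sanity check that Django really is reaching the database through the socket (just a smoke test, not part of the original setup):

# Run inside `python manage.py shell`:
from django.db import connection

with connection.cursor() as cursor:
    cursor.execute('SELECT 1')
    print(cursor.fetchone())  # (1,)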
If I remember correctly, you can have essentially any number of “persistent” connections to pgbouncer, since pgbouncer releases server connections back to the pool when Django is done with them (as long as you’re using transaction or statement for pool_mode). When Django tries to reuse its persistent connection, pgbouncer takes care of waiting for a usable connection to Postgres.
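If you want to watch this happen, pgbouncer's admin console (the virtual "pgbouncer" database) will show far more client connections than server connections. A sketch with psycopg2; note that 6432 is pgbouncer's default listen_port and may differ in your setup, and the connecting user must be listed in admin_users or stats_users:

import psycopg2

conn = psycopg2.connect(host='/var/run/postgresql', port=6432,
                        dbname='pgbouncer', user='myuser',
                        password='mypassword')
conn.autocommit = True  # the admin console doesn't accept BEGIN/COMMIT
with conn.cursor() as cur:
    cur.execute('SHOW POOLS')
    for row in cur.fetchall():
        print(row)  # compare cl_active (clients) with sv_active (servers)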
Seán Hayes
Source: stackexchange.com