6
I came across the same problem with Ruby on Rails. I had pretty much the same database config, and I too used two linked containers for the app and the database (not directly, but through GitLab CI; under the hood it creates containers and links them). My environment variables had different names though: POSTGRES_HOST, POSTGRES_PORT, etc. Yet your solution of explicitly defining POSTGRES_PORT worked for me too! But I couldn't leave it at that; I wanted to figure out why it helped and what caused the problem in the first place. So here's what I found.
The error says: invalid port number: "tcp://172.17.0.3:5432". At first glance it may look like a valid port, 5432, but actually it is the whole string "tcp://172.17.0.3:5432" that isn't a valid port number. Something passed this URI instead of a port number to PostgreSQL; that's what the error says. You connect via psycopg, I used the pg gem, but both are wrappers around the libpq C library, which is part of PostgreSQL. Let's take a look at it to see how we get this error. There is a file, fe-connect.c, which contains the functions that parse connection options. Here's the relevant code (from PostgreSQL 10, the version I used):
/* Figure out the port number we're going to use. */
if (ch->port == NULL || ch->port[0] == '\0')
    thisport = DEF_PGPORT;
else
{
    thisport = atoi(ch->port);
    if (thisport < 1 || thisport > 65535)
    {
        appendPQExpBuffer(&conn->errorMessage,
                          libpq_gettext("invalid port number: \"%s\"\n"),
                          ch->port);
        goto keep_going;
    }
}
It says: if ch->port is NULL or an empty string, no port was provided with the connection options, so use DEF_PGPORT, the precompiled default port (usually 5432); if ch->port is present, convert it to an int with atoi and check that it's between 1 and 65535.
If ch->port is "tcp://172.17.0.3:5432", atoi(ch->port) returns 0, which is less than 1, and that's how we get this error.
By the way, in recent PostgreSQL versions you would get a more informative error: invalid integer value "tcp://172.17.0.3:5432" for keyword "port". That's because a later commit replaced the atoi above with a custom error-checking string-conversion function.
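To see why atoi behaves that way, here is a rough Python imitation (just an illustration, not the actual libpq code): atoi reads an optional sign and then consecutive leading digits, and returns 0 when the string does not start with a digit.

def atoi_like(s):
    """Rough imitation of C's atoi: skip leading whitespace, read an
    optional sign and then consecutive digits; anything else (or no
    digits at all) yields 0."""
    s = s.lstrip()
    i = 0
    sign = 1
    if i < len(s) and s[i] in "+-":
        sign = -1 if s[i] == "-" else 1
        i += 1
    digits = ""
    while i < len(s) and s[i].isdigit():
        digits += s[i]
        i += 1
    return sign * int(digits) if digits else 0

print(atoi_like("5432"))                    # 5432 -> passes the 1..65535 range check
print(atoi_like("tcp://172.17.0.3:5432"))   # 0    -> "thisport < 1", hence the error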
Okay, so this URI appears in place of a port number in the libpq connection options. But how does it get there? It turns out it's because of Docker.
Docker containers can have names, either auto-generated or provided with the --name option to the run command. When you use the --link option to link two containers, you specify the other container's name and, optionally, an alias. By default the alias is the same as the name. Your database container probably had the name/alias db; mine was named postgres (GitLab by default names a container after its image, in my case postgres).
When you link containers, Docker defines a bunch of environment variables, named after the container names/aliases. One of these variables is <alias>_PORT, and it contains the URI of the container's exposed port. Not just the port number, but a complete URI (like the one you'd get from the docker port <alias> command). That's where "tcp://172.17.0.3:5432" comes from: it is written by Docker to the DB_PORT variable, because your database container happens to be named db.
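For illustration, this is roughly what those link variables look like from inside the app container (the values and the /app/db name are assumptions; the exact IP depends on your Docker network):

import os

# Print every environment variable Docker injected for the link alias "db".
for key, value in sorted(os.environ.items()):
    if key.startswith("DB_"):
        print(f"{key}={value}")

# Typical output for a linked PostgreSQL container aliased "db":
# DB_NAME=/app/db
# DB_PORT=tcp://172.17.0.3:5432
# DB_PORT_5432_TCP=tcp://172.17.0.3:5432
# DB_PORT_5432_TCP_ADDR=172.17.0.3
# DB_PORT_5432_TCP_PORT=5432
# DB_PORT_5432_TCP_PROTO=tcp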
In the end, the possible solutions are:
- redefine the DB_PORT variable after Docker links the containers (as you did),
- rename the DB_PORT variable in the config (a sketch of this option follows below),
- set another alias for the db container.
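As a sketch of the second option (my own illustration, not from the original config): read the port from a differently named variable, or defensively strip a Docker link URI before handing the value to Django. The port_from_env helper and the DATABASE_PORT name are hypothetical.

import os
from urllib.parse import urlparse

def port_from_env(name="DATABASE_PORT", default="5432"):
    """Return a bare port number even if the variable holds a Docker
    link URI such as tcp://172.17.0.3:5432 (hypothetical helper)."""
    raw = os.getenv(name, default)
    if "://" in raw:
        return str(urlparse(raw).port or default)
    return raw

# In settings.py, instead of 'PORT': os.getenv('DB_PORT', 5432):
#     'PORT': port_from_env(),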
3
To give it a little more background: the app is written in Django, and here is the database config section:
DATABASES = {
    'default': {
        # Requests will be wrapped in a transaction automatically
        # https://docs.djangoproject.com/en/1.10/topics/db/transactions/#tying-transactions-to-http-requests
        'ATOMIC_REQUESTS': True,
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': os.getenv('DB_NAME', 'postgres'),
        'USER': os.getenv('DB_USER', 'postgres'),
        'PASSWORD': os.getenv('DB_PASSWORD', 'secret'),
        'HOST': os.getenv('DB_HOST', 'localhost'),
        'PORT': os.getenv('DB_PORT', 5432),
        'OPTIONS': {
            'client_encoding': 'UTF8'
        }
    }
}
The psql command from the entry script was connecting just fine, using the default port 5432.
Now, when Django tried to open a connection, it was supposed to use the default value of 5432 from the os.getenv('DB_PORT', 5432) call, as I had not set the DB_PORT environment variable explicitly, not seeing a reason to do so.
Being out of ideas, I set the DB_PORT env var explicitly in the AWS ECS task definition and ... surprise, it worked! For whatever reason (maybe because it was passed as a str instead of an int when set explicitly).
I confirmed it twice by adding/removing the env var definition from the task configuration.
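If the str/int distinction really was part of it, the difference is easy to demonstrate: os.getenv returns the default untouched when the variable is missing, so the setting becomes an int, whereas a value coming from the environment is always a string. A minimal illustration (my own, not from the original post):

import os

# When DB_PORT is not set, the default is returned as-is: an int.
os.environ.pop("DB_PORT", None)
print(type(os.getenv("DB_PORT", 5432)))    # <class 'int'>

# When DB_PORT is set (e.g. in the ECS task definition), it is a string.
os.environ["DB_PORT"] = "5432"
print(type(os.getenv("DB_PORT", 5432)))    # <class 'str'>

# Making the default a string as well avoids the ambiguity:
# 'PORT': os.getenv('DB_PORT', '5432')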
0
#!/bin/bash
# Entrypoint: wait for PostgreSQL to accept connections, then run the given command.
set -e

cmd="$@"

if [ -z "$POSTGRES_USER" ]; then
    export POSTGRES_USER=postgres
fi

export DATABASE_URL=postgres://$POSTGRES_USER:$POSTGRES_PASSWORD@postgres:5432/$POSTGRES_USER

# Returns 0 once psycopg2 can open a connection to the "postgres" host.
function postgres_ready(){
python << END
import sys
import psycopg2
try:
    conn = psycopg2.connect(dbname="$POSTGRES_USER", user="$POSTGRES_USER", password="$POSTGRES_PASSWORD", host="postgres")
except psycopg2.OperationalError:
    sys.exit(-1)
sys.exit(0)
END
}

until postgres_ready; do
    >&2 echo "Postgres is unavailable - sleeping"
    sleep 1
done

>&2 echo "Postgres is up - continuing..."

exec $cmd