Starting and populating a Postgres container in Docker

Published: 2019-01-13 06:01

Question:

I have a Docker container that contains my Postgres database. It's using the official Postgres image which has a CMD entry that starts the server on the main thread.

I want to populate the database by running RUN psql -U postgres postgres < /dump/dump.sql before it starts listening for queries.

I don't understand how this is possible with Docker. If I place the RUN command after the CMD, it doesn't help, because RUN instructions are still executed at build time, not when the container starts. And if I place it before the CMD, it likewise runs before psql even exists as a process.
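For illustration, the naive Dockerfile this describes would be something like the following sketch (the dump path is the one from above):

FROM postgres
COPY dump.sql /dump/dump.sql
# executes at build time, before any server process exists, so psql
# cannot connect and the build fails
RUN psql -U postgres postgres < /dump/dump.sql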

How can I prepopulate a Postgres database in Docker?

Answer 1:

After a lot of struggling, I finally found a solution ;-)

A comment from "justfalter" posted here was very useful to me: https://registry.hub.docker.com/_/postgres/

Anyway, here is how I did it:

# Dockerfile
FROM postgres:9.4

RUN mkdir -p /tmp/psql_data/

COPY db/structure.sql /tmp/psql_data/
COPY scripts/init_docker_postgres.sh /docker-entrypoint-initdb.d/

db/structure.sql is an SQL dump, used to initialize the database schema.
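For reference, a schema-only dump like this could be produced from an existing database with pg_dump (a sketch; "db_name" is just the example database name used in this answer):

# produce a schema-only dump from an existing database
pg_dump -U postgres --schema-only db_name > db/structure.sql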

Then, the init_docker_postgres.sh script:

#!/bin/bash

# this script is run by the entrypoint the first time the container starts
# it creates the database and imports the base database structure for the tests

DATABASE_NAME="db_name"
DB_DUMP_LOCATION="/tmp/psql_data/structure.sql"

echo "*** CREATING DATABASE ***"

# create default database
gosu postgres postgres --single <<EOSQL
  CREATE DATABASE "$DATABASE_NAME";
  GRANT ALL PRIVILEGES ON DATABASE "$DATABASE_NAME" TO postgres;
EOSQL

# clean the sql dump - single-user mode treats each newline as the end of
# a command, so the whole dump has to be collapsed onto one line

# remove indentation
sed "s/^[ \t]*//" -i "$DB_DUMP_LOCATION"

# remove comments
sed '/^--/ d' -i "$DB_DUMP_LOCATION"

# remove new lines
sed ':a;N;$!ba;s/\n/ /g' -i "$DB_DUMP_LOCATION"

# remove other spaces
sed 's/  */ /g' -i "$DB_DUMP_LOCATION"

# remove leading spaces
sed 's/^ *//' -i "$DB_DUMP_LOCATION"

# append new line at the end (suggested by @Nicola Ferraro)
sed -e '$a\' -i "$DB_DUMP_LOCATION"

# import sql_dump
gosu postgres postgres --single "$DATABASE_NAME" < "$DB_DUMP_LOCATION";


echo "*** DATABASE CREATED! ***"

So finally:

# no postgres is running
[myserver]# psql -h 127.0.0.1 -U postgres
psql: could not connect to server: Connection refused
    Is the server running on host "127.0.0.1" and accepting
    TCP/IP connections on port 5432?

[myserver]# docker build -t custom_psql .
[myserver]# docker run -d --name custom_psql_running -p 5432:5432 custom_psql

[myserver]# docker ps -a
CONTAINER ID        IMAGE                COMMAND                CREATED             STATUS              PORTS                    NAMES
ce4212697372        custom_psql:latest   "/docker-entrypoint.   9 minutes ago       Up 9 minutes        0.0.0.0:5432->5432/tcp   custom_psql_running

[myserver]# psql -h 127.0.0.1 -U postgres
psql (9.2.10, server 9.4.1)
WARNING: psql version 9.2, server version 9.4.
         Some psql features might not work.
Type "help" for help.

postgres=# 

# postgres is now initialized with the dump

Hope it helps!



Answer 2:

Alternatively, you can just mount a volume at /docker-entrypoint-initdb.d/ that contains all your DDL scripts. You can put *.sh, *.sql, or *.sql.gz files there, and the entrypoint will take care of executing them on the container's first start-up (i.e. when the data directory is still empty).

e.g. (assuming you have your scripts in /tmp/my_scripts)

docker run -v /tmp/my_scripts:/docker-entrypoint-initdb.d postgres
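For example, a minimal init script that could live in /tmp/my_scripts (a sketch; the file name 01_init.sh and the database name my_app are hypothetical, and POSTGRES_USER is the environment variable understood by the official image):

#!/bin/bash
# /tmp/my_scripts/01_init.sh
# executed by the official entrypoint on the container's first start
set -e

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<EOSQL
CREATE DATABASE my_app;
GRANT ALL PRIVILEGES ON DATABASE my_app TO "$POSTGRES_USER";
EOSQL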


Answer 3:

There is yet another option available that utilises Flocker:

Flocker is a container data volume manager that is designed to allow databases like PostgreSQL to easily run in containers in production. When running a database in production, you have to think about things like recovering from host failure. Flocker provides tools for managing data volumes across a cluster of machines like you have in a production environment. For example, as a Postgres container is scheduled between hosts in response to server failure, Flocker can automatically move its associated data volume between hosts at the same time. This means that when your Postgres container starts up on a new host, it has its data. This operation can be accomplished manually using the Flocker API or CLI, or automatically by a container orchestration tool that Flocker integrates with, for example Docker Swarm, Kubernetes or Mesos.



Answer 4:

I was able to load the data in by prepending /etc/init.d/postgresql start to the RUN command in my Dockerfile. My Dockerfile has the following line, which works for me:

RUN /etc/init.d/postgresql start && /usr/bin/psql -a < /tmp/dump.sql
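For context, a sketch of the kind of Dockerfile that line could sit in. This assumes a Debian/Ubuntu base image with PostgreSQL installed via apt; the official postgres image does not ship /etc/init.d/postgresql, so this approach does not apply there. The su postgres wrapper is an addition here, since psql usually has to run as a user with a matching database role:

# hypothetical sketch, not based on the official postgres image
FROM ubuntu:16.04

RUN apt-get update && apt-get install -y postgresql

COPY dump.sql /tmp/dump.sql

# start the server only for the duration of this build step,
# then run the import while it is up
RUN /etc/init.d/postgresql start && \
    su postgres -c '/usr/bin/psql -a < /tmp/dump.sql'

Note that anything imported this way is baked into the image at build time rather than loaded when a container starts.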