I am trying to distribute a set of connected applications running in several linked containers, including a mongo database that is required to:
- be distributed containing some seed data;
- allow users to add additional data.
Ideally the data will also be persisted in a linked data volume container.
I can get the data into the mongo container using a mongo base instance that doesn't mount any volumes (Docker Hub image: psychemedia/mongo_nomount; this is essentially the base mongo Dockerfile without the VOLUME /data/db statement) and a Dockerfile config along the lines of:
    # Copy the seed data (including testdata.csv) into the image
    ADD . /files
    WORKDIR /files
    # Start a temporary mongod, wait for it to come up, then import the seed data;
    # with no VOLUME instruction, the imported data is baked into the image at /data/db
    RUN mkdir -p /data/db && mongod --fork --logpath=/tmp/mongodb.log && sleep 20 && \
        mongoimport --db testdb --collection testcoll --type csv --headerline --file ./testdata.csv #&& mongod --shutdown
where ./testdata.csv is in the same directory (./mongo-with-data) as the Dockerfile.
My docker-compose config file includes the following:
    mongo:
        #image: mongo
        build: ./mongo-with-data
        ports:
            - "27017:27017"
        #Ideally we should be able to mount this against a host directory
        #volumes:
        #    - ./db/mongo/:/data/db
        #volumes_from:
        #    - devmongodata

    #devmongodata:
    #    command: echo created
    #    image: busybox
    #    volumes:
    #        - /data/db
Whenever I try to mount a VOLUME it seems as if the original seeded data, which is stored in /data/db, is deleted. I guess that when a volume is mounted to /data/db it replaces whatever is there currently. That said, the Docker user guide suggests that "Volumes are initialized when a container is created. If the container's base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization." So I expected the data to persist if I placed the VOLUME command after the seeding RUN command?
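For reference, the two mount styles in play behave differently; a rough sketch (the host path is illustrative):

    # Docker-managed (anonymous) volume, as created by a VOLUME instruction:
    # existing image data at /data/db is copied into the new volume
    docker run -v /data/db psychemedia/mongo_nomount

    # Host-directory bind mount: the host directory is mounted over /data/db,
    # hiding whatever the image put there
    docker run -v "$PWD/db/mongo":/data/db psychemedia/mongo_nomount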
So what am I doing wrong?
The long view is that I want to automate the build of several linked containers, and then distribute a Vagrantfile/docker-compose YAML file that will fire up a set of linked apps, including a pre-seeded mongo database with a (partially pre-populated) persistent data container.
Here is a writeup of how we're using disposable containers to clean and seed images: https://ardoq.com/delightful-database-seeding-with-docker/
To answer my own question: use a Vagrant shell provisioner to run mongoimport inside the running mongo container:

    config.vm.provision :shell, :inline => <<-SH
        docker exec -d vagrant_mongo_1 mongoimport --db a5 --collection roads --type csv --headerline --file /files/AADF-data-minor-roads.csv
    SH

to import the data.
Package the box.
Distribute the box.
For the user, a simple Vagrantfile to load the box and run a simple docker-compose YAML script to start the containers and mount the mongo db against the data volume container.
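For example, a minimal Vagrantfile sketch (the box name and compose file location are illustrative):

    Vagrant.configure("2") do |config|
        # The pre-packaged box from the steps above
        config.vm.box = "preseeded-mongo"
        # Bring up the linked containers defined in the distributed compose file
        config.vm.provision :shell, :inline => "cd /vagrant && docker-compose up -d"
    end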
You can use this image, which provides a Docker container for many jobs (import, export, dump).
Look at the example using docker-compose.
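A sketch of the pattern using the official mongo image, which ships the import/export tools (service and file names are illustrative):

    mongo:
        image: mongo
        ports:
            - "27017:27017"

    # One-shot job container: runs the import against the mongo service, then exits
    mongo-import:
        image: mongo
        links:
            - mongo
        volumes:
            - ./seed:/seed
        command: mongoimport --host mongo --db testdb --collection testcoll --type csv --headerline --file /seed/testdata.csv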
I do this using another docker container whose only purpose is to seed mongo, then exit. I suspect this is the same idea as ebaxt's, but when I was looking for an answer to this, I just wanted to see a quick-and-dirty, yet straightforward, example. So here is mine:
docker-compose.yml
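A sketch of what this file might look like (service, database, and collection names are illustrative):

    version: '3'
    services:
        mongodb:
            image: mongo
            ports:
                - "27017:27017"

        # Runs once against the mongodb service to load the seed data, then exits
        mongo-seed:
            build: ./mongo-seed
            depends_on:
                - mongodb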
mongo-seed/Dockerfile
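And the seeding image itself, again sketched:

    FROM mongo
    COPY init.json /init.json
    # Import the seed file into the mongodb service, then exit
    CMD mongoimport --host mongodb --db mydb --collection mycollection --type json --file /init.json --jsonArray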
mongo-seed/init.json
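With seed data along these lines:

    [
        { "name": "first seed record" },
        { "name": "second seed record" }
    ]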
You can use the Mongo Seeding Docker image.
Example usage with Docker Compose:
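A sketch based on the image's documented DB_HOST and DB_NAME environment variables (the data mount point and the names here are assumptions; check the project README for specifics):

    version: '3'
    services:
        database:
            image: mongo
            ports:
                - "27017:27017"

        mongo-seed:
            image: pkosiec/mongo-seeding
            environment:
                - DB_HOST=database
                - DB_NAME=testdb
            volumes:
                # Directory containing the import data, laid out as the library expects
                - ./data:/data
            depends_on:
                - database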
Disclaimer: I am the author of this library.
I have found it useful to use custom Docker images and volumes, instead of creating another container for seeding.
File Structure
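For example:

    .
    ├── Dockerfile
    ├── docker-compose.yml
    └── seed.js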
Dockerfile
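A sketch of the idea, relying on the official mongo image's documented behaviour of running scripts placed in /docker-entrypoint-initdb.d when the database is first initialised:

    FROM mongo
    # Any *.js file in this directory is executed once, on first initialisation,
    # so the seed script runs exactly one time
    COPY seed.js /docker-entrypoint-initdb.d/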
docker-compose.yml
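And a compose sketch that builds the image and persists data in a named volume (names are illustrative):

    version: '3'
    services:
        mongo:
            build: .
            ports:
                - "27017:27017"
            volumes:
                # Named volume: user-added data survives container restarts,
                # and the seed script only runs while the volume is still empty
                - mongodata:/data/db

    volumes:
        mongodata: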
seed.js
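With a seed script along these lines (database and collection names are illustrative):

    // Runs in the mongo shell during first initialisation
    db = db.getSiblingDB('testdb');
    db.testcoll.insertMany([
        { name: 'seed record 1' },
        { name: 'seed record 2' }
    ]);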
For more details on how to customize the MongoDB Docker service, read this.
Also, it is good to keep your usernames and passwords out of public view: do NOT push credentials to a public git repository; use Docker secrets instead. Also read this tutorial on secrets.
Secrets can also be used with the MongoDB Docker service, for example:
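A sketch using the official mongo image's *_FILE environment variables with file-based Compose secrets (paths are illustrative):

    version: '3.1'
    services:
        mongo:
            image: mongo
            environment:
                # The official image reads the root credentials from these files at startup
                MONGO_INITDB_ROOT_USERNAME_FILE: /run/secrets/mongo_root_username
                MONGO_INITDB_ROOT_PASSWORD_FILE: /run/secrets/mongo_root_password
            secrets:
                - mongo_root_username
                - mongo_root_password

    secrets:
        mongo_root_username:
            file: ./secrets/mongo_root_username.txt
        mongo_root_password:
            file: ./secrets/mongo_root_password.txt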