I would like to run integration and end-to-end tests with a database in a known state for each run, so that the tests are independent and repeatable. An easy way of doing this is to use docker-compose to create a database container which loads the schema and data from a dump file each time. However, restoring the database from the dump for every test is far too slow.
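For reference, my current (slow) setup is roughly the following sketch (assuming MySQL; the dump is replayed through the image's /docker-entrypoint-initdb.d mechanism on every startup):

# docker-compose.yml
services:
  db:
    image: mysql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
    ports:
      - "3306:3306"
    volumes:
      - ./dump.sql:/docker-entrypoint-initdb.d/dump.sql:ro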
A better way seems to be to restore the database once into a Docker container or volume, then copy (or mount?) that container/volume's data directory into the database container the test will use, and have each test re-copy or re-mount it so the data starts fresh.
However, I am not sure of the best way to do this with docker-compose. Could anyone provide a minimal example or explanation of how to do it?
You can start the database using a host directory for its underlying data store. If you do this, then you can create a tar file of the directory, and untar it anew for each test run.
mkdir mysql
# start a throwaway container with the empty directory bind-mounted as its data store
docker run -d -p 3306:3306 -e MYSQL_ALLOW_EMPTY_PASSWORD=yes \
  -v "$PWD/mysql":/var/lib/mysql --name mysql mysql
# wait for the server to accept connections, then load the dump
mysql -h 127.0.0.1 -u root < dump.sql
docker stop mysql
docker rm mysql
# pack up the seeded data directory
tar czf mysql.tar.gz mysql
rm -rf mysql
Then, for each test run, unpack a fresh copy of the data directory, run the database against it, and tear everything down afterwards:

rm -rf mysql && tar xzf mysql.tar.gz
docker run -d -p 3306:3306 -e MYSQL_ALLOW_EMPTY_PASSWORD=yes \
  -v "$PWD/mysql":/var/lib/mysql --name mysql mysql
MYSQL_HOST=127.0.0.1 ./integration_test
docker stop mysql
docker rm mysql
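If you'd rather drive this from docker-compose, a minimal per-test sketch could look like the following (the service name and paths are placeholders; the bind-mounted ./mysql directory is the one unpacked from the tarball):

# docker-compose.yml
services:
  mysql:
    image: mysql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
    ports:
      - "3306:3306"
    volumes:
      - ./mysql:/var/lib/mysql

Each test run then becomes: rm -rf mysql && tar xzf mysql.tar.gz, docker-compose up -d, run the tests, docker-compose down.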
You'd have to distribute the tarball separately (if you otherwise use AWS, an S3 bucket is a good place for it), but since it's "just" test data that you can always recreate from a database dump, it's not especially precious and you don't need to track its version history or keep it in source control.
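For example (the bucket name here is hypothetical):

aws s3 cp mysql.tar.gz s3://my-test-fixtures/mysql.tar.gz   # publish the seeded tarball once
aws s3 cp s3://my-test-fixtures/mysql.tar.gz .              # fetch it in CI before the test run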