What is the best practice for sharing a database between Docker containers?

Published 2019-04-03 19:01

Does anyone know the best practice for sharing a database between containers in Docker?

What I mean is: I want to create multiple containers in Docker, and these containers will all perform CRUD operations on the same database, using the same identity.

So far, I have two ideas. One is to create a separate container that only runs the database. The other is to install the database directly on the host machine where Docker is installed.

Which one is better? Or, is there any other best practice for this requirement?

Thanks

Tags: docker
3 Answers
#2 · 2019-04-03 19:14

It is hard to answer a 'best practice' question, because it's a matter of opinion. And opinions are off topic on Stack Overflow.

So I will give a specific example of what I have done in a serious deployment.

I'm running ELK (Elasticsearch, Logstash, Kibana). It's containerised.

For my data stores, I have storage containers. These storage containers pass a local filesystem through:

docker create -v /elasticsearch_data:/elasticsearch_data --name ${HOST}-es-data base_image /bin/true

I'm also using etcd and confd to dynamically reconfigure the services that point at the databases. etcd lets me store key-value pairs, so at a simplistic level:

CONTAINER_ID=`docker run -d --volumes-from ${HOST}-es-data elasticsearch-thing`
ES_IP=`docker inspect $CONTAINER_ID | jq -r '.[0].NetworkSettings.Networks.dockernet.IPAddress'`
etcdctl set /mynet/elasticsearch/${HOST}-es-0 ${ES_IP}
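
For illustration, the registered address can then be read back from etcd (same key as above; this assumes the etcd v2 etcdctl syntax used in the snippet):

# read back the Elasticsearch address registered above
etcdctl get /mynet/elasticsearch/${HOST}-es-0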

Because we register it in etcd, we can then use confd to watch the key-value store, monitor it for changes, and rewrite and restart our other container services.
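
As a rough sketch of the confd side (the etcd address and polling interval here are assumptions, not part of the original setup), confd is pointed at the same etcd instance and left running so it can re-render its templates and restart the dependent services when a key changes:

# poll etcd every 10 seconds and re-render templates / restart services on changes
confd -backend etcd -node http://127.0.0.1:2379 -interval 10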

I'm using haproxy for this sometimes, and nginx when I need something a bit more complicated. Both of these let you specify sets of hosts to 'send' traffic to, and have some basic availability/load-balancing mechanisms.

That means I can be pretty lazy about restarting/moving/adding Elasticsearch nodes, because the registration process updates the whole environment. A mechanism similar to this is what's used for OpenShift.

So to specifically answer your question:

  • DB is packaged in a container, for all the same reasons the other elements are.
  • Volumes for DB storage are storage containers passing through local filesystems.
  • 'finding' the database is done via etcd on the parent host, but otherwise I've minimised my install footprint. (I have a common 'install' template for docker hosts, and try to avoid adding anything extra to it wherever possible.)

It is my opinion that the advantages of docker are largely diminished if you're reliant on the local host having a (particular) database instance, because you've no longer got the ability to package-test-deploy, or 'spin up' a new system in minutes.

(In the above example, I have literally rebuilt the whole thing in 10 minutes, and most of that was the docker pull transferring the images.)

ゆ 、 Hurt°
#3 · 2019-04-03 19:15

It depends. A useful thing to do is to keep the database URL and password in environment variables and provide those to Docker when running the containers. That way you are free to connect to a database wherever it may be located, e.g. one running in a container during testing and one on a dedicated server in production.
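
For example (the image name and variable names below are only placeholders, not anything the answer prescribes), the connection details can be injected at run time:

# pass the database location and credentials to the application container
docker run -d \
  -e DATABASE_URL="postgres://db.example.com:5432/mydb" \
  -e DATABASE_PASSWORD="secret" \
  my_app_image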

倾城 Initia
#4 · 2019-04-03 19:31

The best practice is to use Docker Volumes.

Official doc: Manage data in containers. This doc details how to deal with databases and containers. The usual way of doing so is to put the DB into a 'container' (which is actually not a container but a volume); the other containers can then access this DB container (the volume) to CRUD (or more) the data.
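
As a minimal sketch of that pattern (the postgres image, the names, and the credentials below are illustrative assumptions, not something the doc mandates): a named volume holds the data, one container runs the database on top of it, and application containers reach it over a shared user-defined network:

# named volume that will hold the database files
docker volume create db_data
# user-defined network so containers can resolve each other by name
docker network create app_net
# the database container, storing its data in the volume
docker run -d --name db --network app_net \
  -v db_data:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=example postgres
# an application container that reaches the database by its container name
docker run -d --name app --network app_net \
  -e DATABASE_URL="postgres://postgres:example@db:5432/postgres" my_app_image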

Random article on "Understanding Docker Volumes"

Edit: I won't detail it much further, as the other answer is well made.
