How to link Docker services across hosts?

Posted 2020-05-10 23:44

Question:

Docker allows servers from multiple containers to connect to each other via links and service discovery. However, from what I can see this service discovery is host-local. I would like to implement a service that uses other services hosted on a different machine.

There have been several approaches to solving this problem in Docker, such as CoreOS's jumpers, host-local services that essentially proxy to the other machine, and a whole bunch of github projects for managing Docker deployments that appear to have attempted to support this use-case.

Given the pace of development it is hard to follow what current best practices are. Therefore my question is essentially:

  1. What (if any) is the current predominant method for linking across hosts in Docker, and
  2. Are there any plans for supporting this functionality directly in the Docker system?

Answer 1:

Update

Docker has recently announced a new tool called Swarm for Docker orchestration.

Swarm allows you to "join" multiple Docker daemons: you first create a swarm, start a swarm manager on one machine, and have the Docker daemons "join" the swarm manager using the swarm's identifier. The Docker client then connects to the swarm manager as if it were a regular Docker server.

When a container is started with Swarm, it is automatically assigned to a free node that meets any constraints that have been defined. The following example is taken from the blog post:

$ docker run -d -P -e constraint:storage=ssd mysql

One of the supported constraints is "node", which allows you to pin a container to a specific hostname. Swarm also resolves links across nodes.
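
As a rough sketch, pinning a container to a given host looked something like the command below; the node name "node-1" is an assumption for illustration, and the exact constraint syntax varied between early Swarm releases (some used "==" rather than "="):

# pin the container to the swarm node whose hostname is "node-1" (assumed name)
$ docker run -d -P -e constraint:node=node-1 nginx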

In my testing I got the impression that Swarm doesn't yet work with volumes at a fixed location very well (or at least the process of linking them is not very intuitive), so this is something to keep in mind.

Swarm is now in beta phase.


Until recently, the Ambassador Pattern was the only Docker-native approach to remote-host service discovery. This pattern can still be used and doesn't require any magic beyond plain Docker: it simply consists of one or more additional containers that act as proxies.
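
A minimal sketch of the pattern, assuming Redis as the service being shared, 10.0.0.5 as the address of the host running it, and the svendowideit/ambassador proxy image from the old Docker docs (substitute your own addresses and images):

# Host A (10.0.0.5): run the real service and publish its port
$ docker run -d --name redis -p 6379:6379 redis

# Host B: run an ambassador that forwards local port 6379 to Host A ...
$ docker run -d --name redis_ambassador --expose 6379 \
    -e REDIS_PORT_6379_TCP=tcp://10.0.0.5:6379 svendowideit/ambassador

# ... and link the consuming container against the local ambassador
# ("my-app" is a placeholder for your own image)
$ docker run -d --link redis_ambassador:redis my-app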

Additionally, there are several third-party extensions to make Docker cluster-capable. Third-party solutions include:

  • Connecting the Docker network bridges on two hosts; lightweight and varied solutions exist, but generally with some caveats
  • DNS-based discovery e.g. with skydock and SkyDNS
  • Docker management tools such as Shipyard, and Docker orchestration tools. See this question for an extensive list: How to scale Docker containers in production


Answer 2:

UPDATE 3

Libswarm has been renamed to Swarm and is now a separate application.

Here is the demo from the GitHub page to use as a starting point:

# create a cluster
$ swarm create
6856663cdefdec325839a4b7e1de38e8

# on each of your nodes, start the swarm agent
#  <node_ip> doesn't have to be public (eg. 192.168.0.X),
#  as long as the other nodes can reach it, it is fine.
$ swarm join --token=6856663cdefdec325839a4b7e1de38e8 --addr=<node_ip:2375>

# start the manager on any machine or your laptop
$ swarm manage --token=6856663cdefdec325839a4b7e1de38e8 --addr=<swarm_ip:swarm_port>

# use the regular docker cli
$ docker -H <swarm_ip:swarm_port> info
$ docker -H <swarm_ip:swarm_port> run ... 
$ docker -H <swarm_ip:swarm_port> ps 
$ docker -H <swarm_ip:swarm_port> logs ...
...

# list nodes in your cluster
$ swarm list --token=6856663cdefdec325839a4b7e1de38e8
http://<node_ip:2375>

UPDATE 2

The official approach is now to use libswarm; see a demo here.

UPDATE

There is a nice gist on cross-host Docker communication with openvswitch, using the same approach.

To allow service discovery, there is an interesting DNS-based approach called skydock.

There is also a screencast.


There is also a nice article that uses the same pieces of the puzzle but adds VLANs on top:

http://fbevmware.blogspot.it/2013/12/coupling-docker-and-open-vswitch.html

The patching has nothing to do with the robustness of the solution. Docker is really just a sort of DSL on top of Linux containers, and both solutions in these articles simply bypass some of Docker's automatic settings and fall back directly to Linux containers.

So you can use these solutions safely and switch to a simpler approach once Docker implements one.



Answer 3:

Weave is a new Docker virtual network technology that acts as a virtual ethernet switch over TCP/UDP - all you need is a Docker container running Weave on your host.

What's interesting here is

  • Instead of links, use static IPs/hostnames in your virtual network
  • Hosts don't need full connectivity; a mesh is formed based on what peers are available, and packets will be routed multi-hop to where they need to go

This leads to interesting scenarios like

  • Create a virtual network across the WAN, none of the Docker containers will know or care what actual network they sit in
  • Move your containers to different physical Docker hosts; Weave will detect the peer accordingly

For example, there's a guide on how to create a multi-node Cassandra cluster across your laptop and a few cloud (EC2) hosts with two commands per host. I launched a CoreOS cluster with AWS CloudFormation, installed weave on each node in /home/core, plus my laptop's vagrant Docker VM, and got a cluster up in under an hour. My laptop is firewalled, but Weave seemed to be okay with that; it just connects out to its EC2 peers.
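
Roughly, the per-host commands from the Weave CLI of that era look like the sketch below; the peer address, the 10.2.1.0/24 subnet and the ubuntu image are placeholders, and the commands have changed in later Weave releases:

# Host A: start the weave router
$ weave launch

# Host B: start the router and point it at an existing peer
$ weave launch <host_A_ip>

# on each host: start a container with a static address on the virtual network
$ weave run 10.2.1.1/24 -itd ubuntu    # e.g. 10.2.1.2/24 on the second host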



Answer 4:

Update

Docker 1.12 contains the so-called swarm mode and also adds a service abstraction. These features probably aren't mature enough for every use case yet, but I suggest you keep an eye on them. Swarm mode at least helps in a multi-host setup, though it doesn't necessarily make linking easier. The Docker-internal DNS server (since 1.11) helps you address containers by name, as long as the names are well known - meaning that the generated names in a Swarm context won't be so easy to address.
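
For reference, a minimal swarm-mode sketch (the service name "web", the nginx image and the published port are arbitrary examples):

# on the first manager node
$ docker swarm init

# on each additional node, using the join token printed by "swarm init"
$ docker swarm join --token <token> <manager_ip>:2377

# create a replicated service; swarm mode schedules its tasks across the nodes
$ docker service create --name web --replicas 3 -p 80:80 nginx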


With the Docker 1.9 release you get built-in multi-host networking. They also provide an example script to easily provision a working cluster.

You'll need a K/V store (e.g. Consul) to share state across the Docker engines on the different hosts. Every Docker engine needs to be configured with that K/V store, and you can then use Swarm to connect your hosts.
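
As a sketch of that configuration (assuming Consul is reachable at <consul_ip>:8500 and eth0 is the interface the engines use to reach each other), each engine is started with cluster-store options along these lines:

# on every host, point the Docker engine at the shared K/V store
$ docker daemon \
    --cluster-store=consul://<consul_ip>:8500 \
    --cluster-advertise=eth0:2376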

Then you create a new overlay network like this:

$ docker network create --driver overlay my-network

Containers can now be run with the network name as a run parameter:

$ docker run -itd --net=my-network busybox

They can also be connected to a network when already running:

$ docker network connect my-network my-container
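
To check that this replaces cross-host linking, a quick sketch (assuming a container named "web" was started on another host and attached to my-network) is to reach it by name from a second host:

# on a different host, reach the remote container by its name over the overlay
$ docker run -it --rm --net=my-network busybox ping -c 3 web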

More details are available in the documentation.



Answer 5:

The following article nicely describes how to connect Docker containers on multiple hosts: http://goldmann.pl/blog/2014/01/21/connecting-docker-containers-on-multiple-hosts/



Answer 6:

It is possible to bridge several Docker subnets together using Open vSwitch or Tinc. I have prepared Gists to show how to do it:

  • Open vSwitch: https://gist.github.com/noteed/8656989
  • Tinc: https://gist.github.com/noteed/11031504
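
The Open vSwitch variant boils down to roughly the following sketch, run on each host with remote_ip pointing at the other host; bridge names, subnets and IPs are assumptions here, see the Gist for the full details:

# prerequisite: give each host's docker0 a non-overlapping subnet
# (e.g. 172.17.1.1/24 on one host, 172.17.2.1/24 on the other)
$ ovs-vsctl add-br br0
$ ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=<other_host_ip>
$ brctl addif docker0 br0
$ ip link set dev br0 up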

The advantage I see in using this solution instead of the --link option and the ambassador pattern is that I find it more transparent: there is no need for additional containers and, more importantly, no need to expose ports on the host. Actually, I think of the --link option as a temporary hack until Docker gets a nicer story about multi-host (or multi-daemon) setups.

Note: I know there is another answer pointing to my first Gist but I don't have enough karma to edit or comment on that answer.



Answer 7:

As mentioned above, Weave is definitely a viable solution for linking Docker containers across hosts. Based on my own experience with it, it is fairly straightforward to set up. It now also has a DNS service, which lets you address containers by their DNS names.

On the other hand, there are CoreOS's Flannel and Juniper's OpenContrail for wiring containers across hosts.



Answer 8:

It seems like Docker Swarm 1.14 allows you to:

  • assign a hostname to a container using the --hostname flag, but I haven't been able to make it work; containers are not able to ping each other by their assigned hostnames.

  • assign services to a machine using --constraint 'node.hostname == <host>' (see the sketch below)
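
For illustration, a hedged sketch combining both flags (the node name "worker-1", service name "db" and the redis image are assumptions, and --hostname support for services depends on the Docker version):

$ docker service create --name db \
    --hostname db-host \
    --constraint 'node.hostname == worker-1' \
    redis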



Tags: docker