Question:
I am having trouble understanding the need for a separate service discovery server when we could simply register each slave node with the master node at start-up, through whatever protocol. Hosting another service seems redundant to me.
Answer 1:
Docker Swarm is there to create a cluster of hosts running Docker and schedule containers across the cluster.
It does not include service discovery, which is provided by a backend service, such as etcd, consul or zookeeper.
- The first problem: service registration and discovery is an infrastructure concern, not an application concern.
- The second problem: implementing service registration and discovery when infrastructure and application implementation are mutually agnostic is tough.
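To make the first point concrete, here is a minimal in-process sketch of what a discovery backend does: nodes register themselves with a TTL and must keep refreshing, so stale entries fall out automatically. The class and names are illustrative only, not the API of etcd, consul, or zookeeper.

```python
import time

# Toy stand-in for a discovery backend (etcd/consul/zookeeper style).
# The point: expiry, refresh, and lookup live in infrastructure code,
# not in the application itself.
class Registry:
    def __init__(self):
        self._entries = {}  # service name -> {address: expiry timestamp}

    def register(self, name, address, ttl=10):
        # A node (or an agent acting for it) re-registers periodically;
        # entries that are not refreshed expire and drop out on their own.
        self._entries.setdefault(name, {})[address] = time.time() + ttl

    def discover(self, name):
        now = time.time()
        live = {a: exp for a, exp in self._entries.get(name, {}).items()
                if exp > now}
        self._entries[name] = live
        return sorted(live)

registry = Registry()
registry.register("web", "10.0.0.1:8080", ttl=10)
registry.register("web", "10.0.0.2:8080", ttl=0.01)  # expires almost at once
time.sleep(0.05)
print(registry.discover("web"))
```

If registration lived only in the master node, every application would have to reimplement this expiry and health bookkeeping itself, which is exactly the coupling the second point warns about.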
DockerCon made that distinction clear this morning (Nov. 16th, 2015), with the "Docker Stack":
(Graphics from @laurelcomics)
Docker networking solves these problems by backing an interface (DNS) with pluggable infrastructure components that adhere to a common KV interface.
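The "common KV interface" idea can be sketched as follows: the DNS-facing resolver depends only on `get`/`put`, so the backing store is pluggable. The backend below is an in-memory stand-in, not a real etcd or consul client, and the `/services/<name>` key layout is an assumption of this sketch.

```python
# Minimal KV backend; a real deployment would plug in an etcd, consul,
# or zookeeper client exposing the same get/put surface.
class DictKV:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

def resolve(kv, service):
    # DNS-style lookup backed by the KV store under a conventional
    # key prefix (hypothetical layout, chosen for the example).
    value = kv.get(f"/services/{service}")
    return value.split(",") if value else []

store = DictKV()  # any object with get/put could be swapped in here
store.put("/services/web", "10.0.0.1:8080,10.0.0.2:8080")
print(resolve(store, "web"))
```

The resolver never knows which backend it is talking to, which is what lets Docker networking treat discovery as a pluggable infrastructure component.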
You can see consul.io used in:
- "Easy routing and service discovery with Docker, Consul and nginx"
- "Docker Overlay Networks: That was Easy"
- "Docker DNS
getaddrinfo ENOTFOUN
D"
That means:
- Consul is a KV (Key/Value) store which can be plugged into Swarm in order to manage the service discovery aspect.
- Swarm is the access layer: usually the layer containing a gateway or routing component that lets others actually reach your services.
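The access-layer role described above can be sketched as a gateway that asks the discovery backend for live instances and spreads requests across them. The discovery lookup here is faked with a dict, and the round-robin policy is just one illustrative choice.

```python
# Conceptual sketch of an access-layer router: it holds no service
# addresses of its own, only a handle to the discovery backend.
def make_router(discover):
    counters = {}

    def route(service):
        instances = discover(service)
        if not instances:
            raise LookupError(f"no live instances for {service}")
        # Simple round-robin across whatever discovery returned.
        i = counters.get(service, 0)
        counters[service] = i + 1
        return instances[i % len(instances)]

    return route

# Stand-in for a real discovery query against the KV store.
route = make_router(
    lambda name: {"web": ["10.0.0.1:8080", "10.0.0.2:8080"]}.get(name, [])
)
print([route("web") for _ in range(3)])
```

Because the router re-queries discovery on every request, instances that expire or register later are picked up without any change to the routing code.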
(Image from the "Easy routing and service discovery with Docker, Consul and nginx" article written by Ladislav Gazo)
The goal is to isolate what is an infrastructure concern (the discovery service) in its own container, separate from a dev-tool concern (Swarm).