How can I remotely connect to docker swarm?

Posted 2019-02-13 11:07

Is it possible to execute commands on a docker swarm cluster hosted in the cloud from my local Mac? If yes, how?

I want to execute commands such as the following on the docker swarm from my local machine:

docker secret create my-secret <path to local file>
docker service create --name x --secret my-secret image

4 Answers
Explosion°爆炸
answered 2019-02-13 11:37

The answer to the question can be found here.

What one needs to do on an Ubuntu machine is define a daemon.json file at /etc/docker with the following content:

{
  "hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]
}

The above configuration is unsecured and shouldn't be used if the server is publicly reachable.
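One gotcha worth noting: on Ubuntu the stock systemd unit already passes -H fd:// to dockerd, which conflicts with a "hosts" entry in daemon.json and prevents the daemon from starting. A common workaround (sketched below; the drop-in file name is just a convention) is a systemd override that clears ExecStart:

```shell
# Create a systemd drop-in so dockerd takes its hosts from daemon.json
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
EOF

# Reload systemd and restart the daemon to pick up daemon.json
sudo systemctl daemon-reload
sudo systemctl restart docker
```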

For a secured connection, use the following config:

{
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "hosts": ["tcp://x.x.x.y:2376", "unix:///var/run/docker.sock"]
}

Details for generating the certificates can be found here, as mentioned by @BMitch.
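Once the daemon is listening with TLS, the client side can be checked explicitly with per-command flags. This is a sketch assuming the client certs were generated per the linked guide and placed in ~/.docker (paths and the x.x.x.y address are placeholders):

```shell
# Verify the TLS connection to the remote daemon from the local machine
docker --tlsverify \
  --tlscacert=$HOME/.docker/ca.pem \
  --tlscert=$HOME/.docker/cert.pem \
  --tlskey=$HOME/.docker/key.pem \
  -H=tcp://x.x.x.y:2376 version
```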

爱情/是我丢掉的垃圾
answered 2019-02-13 11:37

If you start from scratch, you can create the manager node using the generic docker-machine driver. Afterwards you will be able to connect to that docker engine from your local machine with the help of the docker-machine env command.
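A sketch of that flow with the generic driver (the IP address, ssh user, and key path are placeholders to replace with your own):

```shell
# Provision an existing cloud server as a docker-machine-managed host;
# docker-machine installs Docker and generates the TLS certs automatically.
docker-machine create \
  --driver generic \
  --generic-ip-address=203.0.113.10 \
  --generic-ssh-user=root \
  --generic-ssh-key=$HOME/.ssh/id_rsa \
  swarm-manager

# Point the local docker client at the remote engine, then init the swarm:
eval $(docker-machine env swarm-manager)
docker swarm init
```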

爱情/是我丢掉的垃圾
answered 2019-02-13 11:39

One option is to provide direct access to the docker daemon, as suggested in the previous answers, but that requires setting up TLS certificates and keys, which can itself be tricky and time-consuming. Docker Machine can automate that process, when Docker Machine was used to create the nodes.

I had the same problem: I wanted to create secrets on the swarm without uploading the file containing the secret to the swarm manager. I also wanted to be able to deploy the stackfile (e.g. docker-compose.yml) without the hassle of first uploading it.

I wanted to be able to create the few servers I needed on e.g. DigitalOcean, not necessarily using docker machine, and to reproducibly create the secrets and run the stackfile. In environments like DigitalOcean and AWS, a separate set of TLS certificates is not used; rather, the ssh key on the local machine is used to access the remote node over ssh.

The solution that worked for me was to run the docker commands individually over ssh, which allows me to pipe the secret and/or stackfile in via stdin.

To do this, first create the DigitalOcean droplets and install docker on them, possibly from a custom image or snapshot, or simply by running the install commands on each droplet. Then join the droplets into a swarm: ssh into the one that will be the manager node, run docker swarm init (possibly with the --advertise-addr option if there is more than one IP on that node, such as when you want to keep intra-swarm traffic on the private network), and get back the join command for the swarm. Then ssh into each of the other nodes, issue the join command, and your swarm is created.
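The swarm-creation steps above look roughly like this (the private IP and the token are placeholders; use the exact join command your manager prints):

```shell
# On the droplet chosen as manager (10.0.0.2 is its private IP here):
docker swarm init --advertise-addr 10.0.0.2
# This prints a join command containing a one-time token, similar to:
#   docker swarm join --token SWMTKN-1-<token> 10.0.0.2:2377

# On every other droplet, run the printed join command:
docker swarm join --token SWMTKN-1-<token> 10.0.0.2:2377
```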

Then, export the ssh command you will use to issue commands on the manager node, like

export SSH_CMD='ssh root@159.89.98.121'

Now, you have a couple of options. You can issue individual docker commands like:

$SSH_CMD docker service ls

You can create a secret on your swarm without copying the secret file to the swarm manager:

$SSH_CMD docker secret create my-secret - < /path/to/local/file
$SSH_CMD docker service create --name x --secret my-secret image

(The - tells docker secret create to read the secret from stdin, and ssh pipes the local file to that stdin.)

You can also create a script to be able to reproducibly run commands to create your secrets and bring up your stack with secret files and stackfiles only on your local machine. Such a script might be:

$SSH_CMD docker secret create rabbitmq.config.01 - < rabbitmq/rabbitmq.config
$SSH_CMD docker secret create enabled_plugins.01 - < rabbitmq/enabled_plugins
$SSH_CMD docker secret create rmq_cacert.pem.01 - < rabbitmq/cacert.pem
$SSH_CMD docker secret create rmq_cert.pem.01 - < rabbitmq/cert.pem
$SSH_CMD docker secret create rmq_key.pem.01 - < rabbitmq/key.pem
$SSH_CMD docker stack up -c - rabbitmq_stack < rabbitmq.yml

where secrets are used for the certs and keys, and also for the configuration files rabbitmq.config and enabled_plugins, and the stackfile is rabbitmq.yml, which could be:

version: '3.1'
services:
  rabbitmq:
    image: rabbitmq
    secrets:
      - source: rabbitmq.config.01
        target: /etc/rabbitmq/rabbitmq.config
      - source: enabled_plugins.01
        target: /etc/rabbitmq/enabled_plugins
      - source: rmq_cacert.pem.01
        target: /run/secrets/rmq_cacert.pem
      - source: rmq_cert.pem.01
        target: /run/secrets/rmq_cert.pem
      - source: rmq_key.pem.01
        target: /run/secrets/rmq_key.pem
    ports: 
      # stomp, ssl:
      - 61614:61614
      # amqp, ssl:
      - 5671:5671
      # monitoring, ssl:
      - 15671:15671
      # monitoring, non ssl:
      - 15672:15672
  # nginx here is only to show another service in the stackfile
  nginx:
    image: nginx
    ports: 
      - 80:80
secrets:
  rabbitmq.config.01:
    external: true
  rmq_cacert.pem.01:
    external: true
  rmq_cert.pem.01:
    external: true
  rmq_key.pem.01:
    external: true
  enabled_plugins.01:
    external: true

(Here, the rabbitmq.config file sets up the SSL listening ports for stomp, amqp, and the monitoring interface, and tells rabbitmq to look for the certs and key under /run/secrets. An alternative for this specific image would be to use the environment variables provided by the image to point to the secret files, but I wanted a more generic solution that does not require configuration inside the image.)

Now, if you want to bring up another swarm, your script will work against it once you have set the SSH_CMD environment variable, and you need neither set up TLS nor copy your secrets or stackfiles to the swarm filesystem.

So, this doesn't solve the problem of creating the swarm (whose existence was presupposed by your question), but once it is created, using an environment variable (exported if you want to use it in scripts) will allow you to use almost exactly the commands you listed, prefixed with that environment variable.

做个烂人
answered 2019-02-13 11:50

To connect to a remote docker node, you should set up TLS on both the docker host and the client, signed by the same CA. Take care to limit which keys you sign with this CA, since it controls access to the docker host.

Docker has documented the steps to setup a CA and create/install the keys here: https://docs.docker.com/engine/security/https/

Once configured, you can connect to newer swarm mode environments with the same docker commands you would run locally on the docker host, just by changing the value of $DOCKER_HOST in your shell.
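A minimal sketch of that, assuming the client certs produced by the linked guide live in ~/.docker (the x.x.x.y address is a placeholder for your manager's IP):

```shell
# Point the local docker client at the remote daemon over TLS
export DOCKER_HOST=tcp://x.x.x.y:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.docker   # expects ca.pem, cert.pem, key.pem

# From here on, ordinary commands target the remote swarm manager, e.g.:
# docker node ls
# docker stack deploy -c docker-compose.yml mystack
```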
