Is it possible to execute commands on a Docker Swarm cluster hosted in the cloud from my local Mac? If yes, how?
I want to execute commands such as the following on the swarm from my local machine:
```sh
docker secret create my-secret <path-to-local-file>
docker service create --name x --secret my-secret image
```
Yes, it is possible: expose the Docker daemon on the remote host over TCP and point your local client at it.
What one needs to do on an Ubuntu machine is define a `daemon.json` file at `/etc/docker` that tells the daemon to listen on a TCP socket in addition to the local Unix socket.
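A minimal sketch, using Docker's conventional unencrypted port 2375:

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
```

(On Ubuntu's systemd unit the daemon is also started with a `-H` flag; `hosts` cannot be set both as a flag and in `daemon.json`, so the flag may need to be removed from the unit file.)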
The above configuration is unsecured and shouldn't be used if the server is publicly hosted. For a secured connection, use the following config instead.
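A sketch with TLS enabled (the certificate paths are illustrative):

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "tlsverify": true,
  "tlscacert": "/etc/docker/ca.pem",
  "tlscert": "/etc/docker/server-cert.pem",
  "tlskey": "/etc/docker/server-key.pem"
}
```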
Details for generating the certificates can be found in the Docker documentation that @BMitch links in his answer below.
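With the daemon secured, the client on your Mac can target it through Docker's standard environment variables; a sketch (the address and cert directory are illustrative):

```sh
export DOCKER_HOST=tcp://<manager-ip>:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.docker/swarm   # must contain ca.pem, cert.pem, key.pem

# The commands from the question now run from the local machine:
docker secret create my-secret ./my-secret-file
docker service create --name x --secret my-secret image
```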
If you start from scratch, you can create the manager node using docker-machine's generic driver. Afterwards you will be able to connect to that Docker Engine from your local machine with the help of the `docker-machine env` command.
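A sketch, assuming an existing cloud host reachable over ssh (the IP, user, and key path are illustrative):

```sh
# Register the existing host with docker-machine's generic driver
docker-machine create --driver generic \
  --generic-ip-address=203.0.113.10 \
  --generic-ssh-user=root \
  --generic-ssh-key=$HOME/.ssh/id_rsa \
  swarm-manager

# Point the local docker CLI at the remote engine
eval $(docker-machine env swarm-manager)
docker info
```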
One option is to provide direct access to the Docker daemon, as suggested in the previous answers, but that requires setting up TLS certificates and keys, which can itself be tricky and time-consuming. Docker Machine can automate that process when it has been used to create the nodes.
I had the same problem, in that I wanted to create secrets on the swarm without uploading the file containing the secret to the swarm manager. I also wanted to be able to deploy the stackfile (e.g. docker-compose.yml) without the hassle of first uploading it.
I wanted to be able to create the few servers I needed on e.g. DigitalOcean, not necessarily using docker machine, and be able to reproducibly create the secrets and run the stackfile. In environments like DigitalOcean and AWS, a separate set of TLS certificates is not used, but rather the ssh key on the local machine is used to access the remote node over ssh.
The solution that worked for me was to run the docker commands individually over `ssh`, which allows me to pipe the secret and/or stackfile using stdin.

To do this, you first need to create the DigitalOcean droplets and get docker installed on them, possibly from a custom image or snapshot, or by simply running the commands to install docker on each droplet. Then, join the droplets into a swarm: `ssh` into the one that will be the manager node, type `docker swarm init` (possibly with the `--advertise-addr` option if there is more than one IP on that node, such as when you want to keep intra-swarm traffic on the private network) and get back the join command for the swarm. Then `ssh` into each of the other nodes and issue the join command, and your swarm is created.

Then, export the `ssh` command you will need to issue commands on the manager node, like (the user and address are illustrative):
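```sh
export SSH_CMD='ssh root@203.0.113.10'
```

Now, you have a couple of options. You can issue individual docker commands like:

```sh
$SSH_CMD docker node ls
$SSH_CMD docker service ls
```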
You can create a secret on your swarm without copying the secret file to the swarm manager, along these lines (the secret name and file path are illustrative):
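```sh
$SSH_CMD docker secret create my-secret - < ./my-secret-file
```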
(Using `-` to indicate that `docker secret create` should accept the secret on stdin, and then piping the file to stdin using ssh.)

You can also create a script to reproducibly run the commands that create your secrets and bring up your stack, with the secret files and stackfiles living only on your local machine. Such a script might be a sketch like the following (the file and secret names are illustrative):
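```sh
#!/bin/sh
# Assumes SSH_CMD has been exported, e.g. export SSH_CMD='ssh root@203.0.113.10'

# Create each secret directly from a local file, piped over ssh via stdin
$SSH_CMD docker secret create cacert.pem - < certs/cacert.pem
$SSH_CMD docker secret create cert.pem - < certs/cert.pem
$SSH_CMD docker secret create key.pem - < certs/key.pem
$SSH_CMD docker secret create rabbitmq.config - < rabbitmq.config
$SSH_CMD docker secret create enabled_plugins - < enabled_plugins

# Deploy the stack, piping the local stackfile over stdin
# (requires a Docker version whose "stack deploy" can read the compose file from "-")
$SSH_CMD docker stack deploy -c - rabbitmq < rabbitmq.yml
```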
where secrets are used for the certs and keys, and also for the configuration files rabbitmq.config and enabled_plugins, and the stackfile is rabbitmq.yml, which could look something like the sketch below (the image tag, ports, and secret targets are illustrative):
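```yaml
version: "3.1"

services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5671:5671"     # AMQP over TLS
      - "61614:61614"   # STOMP over TLS
      - "15671:15671"   # monitoring interface over TLS
    secrets:
      # certs and key land under /run/secrets by default
      - source: cacert.pem
      - source: cert.pem
      - source: key.pem
      # absolute targets assume a Docker version that can mount
      # secrets outside /run/secrets
      - source: rabbitmq.config
        target: /etc/rabbitmq/rabbitmq.config
      - source: enabled_plugins
        target: /etc/rabbitmq/enabled_plugins

secrets:
  cacert.pem:
    external: true
  cert.pem:
    external: true
  key.pem:
    external: true
  rabbitmq.config:
    external: true
  enabled_plugins:
    external: true
```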
(Here, the rabbitmq.config file sets up the SSL listening ports for stomp, amqp, and the monitoring interface, and tells rabbitmq to look for the certs and key within /run/secrets. Another alternative for this specific image would be to use the environment variables provided by the image to point to the secrets files, but I wanted a more generic solution that did not require configuration within the image.)
Now, if you want to bring up another swarm, your script will work with that swarm once you have set the `SSH_CMD` environment variable, and you need neither set up TLS nor copy your secret files or stackfiles to the swarm filesystem.

So, this doesn't solve the problem of creating the swarm (whose existence was presupposed by your question), but once it is created, using an environment variable (exported if you want to use it in scripts) will allow you to use almost exactly the commands you listed, prefixed with that variable:
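```sh
$SSH_CMD docker secret create my-secret - < ./my-secret-file
$SSH_CMD docker service create --name x --secret my-secret image
```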
To connect to a remote docker node, you should set up TLS on both the docker host and the client, with certificates signed by the same CA. Take care to limit what keys you sign with this CA, since it is used to control access to the docker host.
Docker has documented the steps to setup a CA and create/install the keys here: https://docs.docker.com/engine/security/https/
Once configured, you can connect to the newer swarm mode environments using the same docker commands you run locally on the docker host, just by changing the value of $DOCKER_HOST in your shell.
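For example (the hostname is illustrative, and the client certs are assumed to be in the default `~/.docker` location):

```sh
# Same commands, remote engine: only the target changes
export DOCKER_HOST=tcp://swarm-manager.example.com:2376 DOCKER_TLS_VERIFY=1
docker service ls
```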