I have two worker nodes, worker1 and worker2, and one swarm manager. All the services run on the worker nodes only. I need to run `docker exec` from the manager to access some of the containers created on the worker nodes, but I keep getting an error that the service is not recognized. I know I can run `docker exec` on either of the worker nodes and it works fine, but I don't want to have to find out which node the service is running on and then SSH to that node just to run the `docker exec` command. Is there a way to do this in swarm or not?
If this helps, nowadays you can create the overlay network with the `--attachable` flag to enable any standalone container to join the network. This is a great feature, as it allows a lot of flexibility.
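A minimal sketch of what that might look like (the names `mynet` and `myservice` are placeholders, and `nginx`/`alpine` stand in for whatever images you actually use):

```sh
# Create an overlay network that standalone containers may attach to
docker network create --driver overlay --attachable mynet

# Put a service on that network
docker service create --name myservice --network mynet nginx

# From any swarm node, run a plain container on the same network;
# it can reach the service by name via swarm's built-in DNS
docker run -it --rm --network mynet alpine ping -c 3 myservice
```

With this setup you can attach a debug container next to the service instead of exec'ing into its tasks directly.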
Swarm mode does not currently have a way to run an exec on a running task. You need to find the container and run the exec on the host where it lives. You can configure the workers to listen on a TLS-protected port, which gives you remote access to their Docker daemons (see Docker's guide on protecting the daemon socket). And you can look up the node for each task in a service by checking the output of `docker service ps $service_name`.
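As a sketch of how those two steps might fit together (assuming `myservice` as the service name, `worker1` as the node that turns out to be running the task, and that its daemon already exposes the conventional TLS port 2376):

```sh
# Step 1: see which node each running task is scheduled on
docker service ps --filter 'desired-state=running' \
  --format '{{.Name}} -> {{.Node}}' myservice

# Step 2: point the local client at that node's TLS-protected daemon...
export DOCKER_HOST=tcp://worker1:2376 DOCKER_TLS_VERIFY=1

# ...then locate the task's container and exec into it from the manager
container=$(docker ps --filter 'name=myservice' -q | head -n 1)
docker exec -it "$container" sh
```

Task containers are named `<service>.<slot>.<task-id>`, which is why filtering `docker ps` by the service name finds them.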