I'm trying to create a Docker container that acts like a full-on virtual machine. I know I can use the EXPOSE instruction inside a Dockerfile to expose a port, and I can use the -p
flag with docker run
to assign ports, but once a container is actually running, is there a command to open/map additional ports live?
For example, let's say I have a Docker container that is running sshd. Someone else using the container SSHes in and installs httpd. Is there a way to expose port 80 on the container and map it to port 8080 on the host, so that people can visit the web server running in the container, without restarting it?
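For reference, this is the kind of static mapping I already know how to do at container start time (my-image is just a placeholder):

```shell
# Map host port 8080 to container port 80 when the container starts
docker run -d -p 8080:80 my-image
```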
To add to the accepted answer's iptables solution, I had to run two more commands on the host to open it to the outside world.

Note: I was opening the HTTPS port (443), and my Docker-internal IP was 172.17.0.2.

Note 2: These rules are temporary and will only last until the container is restarted.
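The extra rules looked roughly like this (a sketch, not verbatim: the chains assume Docker's default bridge networking, and the IP/port match the notes above):

```shell
# Allow forwarded traffic from outside to reach the container on 443
iptables -A FORWARD -p tcp -d 172.17.0.2 --dport 443 -j ACCEPT

# NAT incoming traffic on the host's port 443 to the container's port 443
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 172.17.0.2:443
```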
You cannot do this via Docker, but you can access the container's un-exposed port from the host machine.
If you have a container with something running on its port 8000, you can run:
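For example, to hit that port from the host (the IP 172.17.0.2 is just an example; finding the real one is shown next):

```shell
wget -qO- http://172.17.0.2:8000/
```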
To get the container's IP address, run these two commands:
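For example (&lt;container_name_or_id&gt; is a placeholder):

```shell
docker ps
docker inspect <container_name_or_id> | grep IPAddress
```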
Internally, Docker shells out to call iptables when you run an image, so maybe some variation on this will work.
To expose the container's port 8000 on your localhost's port 8001:
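Something along these lines might do it (a sketch: 172.17.0.2 stands in for your container's IP, and the DOCKER chain assumes Docker's default iptables setup):

```shell
iptables -t nat -A DOCKER -p tcp --dport 8001 -j DNAT --to-destination 172.17.0.2:8000
```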
One way you can work this out is to set up another container with the port mapping you want, and compare the output of the iptables-save command (though I had to remove some of the other options that force traffic to go via the Docker proxy).

NOTE: this is subverting Docker, so it should be done with the awareness that it may well create blue smoke.
OR
Another alternative is to look at the (new? post-0.6.6?) -P option, which will use random host ports, and then wire those up.
OR
With 0.6.5, you could use the LINKs feature to bring up a new container that talks to the existing one, with some additional relaying to that container's -p flags? (I have not used LINKs yet.)
OR
With Docker 0.11? you can use
docker run --net host ..
to attach your container directly to the host's network interfaces (i.e., net is not namespaced), and thus all ports you open in the container are exposed.

I had to deal with this same issue and was able to solve it without stopping any of my running containers. This solution is up to date as of February 2016, using Docker 1.9.1. Anyway, this answer is a detailed version of @ricardo-branco's answer, but in more depth for new users.
In my scenario, I wanted to temporarily connect to MySQL running in a container, and since other application containers are linked to it, stopping, reconfiguring, and re-running the database container was a non-starter.
Since I'd like to access the MySQL database externally (from Sequel Pro via SSH tunneling), I'm going to use port 33306 on the host machine. (Not 3306, just in case there is an outer MySQL instance running.)

About an hour of tweaking iptables proved fruitless.
Step by step, here's what I did:
Edit Dockerfile, placing this inside:

Then build the image:
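A sketch of these steps and the run step that follows (the image name, container name, and the socat-based relay are placeholders/assumptions, not verbatim from my setup):

```shell
# 1. A throwaway image that relays its port 33306 to port 3306
#    on the linked "mysql" alias (socat does the forwarding)
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y socat
EXPOSE 33306
CMD socat TCP-LISTEN:33306,fork,reuseaddr TCP:mysql:3306
EOF

# 2. Build the image
docker build -t yourname/mysql-proxy .

# 3. Run it, linked to the already-running MySQL container,
#    publishing host port 33306
docker run --rm --name mysql-proxy \
  --link your-mysql-container:mysql \
  -p 33306:33306 \
  yourname/mysql-proxy
```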
Then run it, linking it to your running container. (Use -d instead of --rm to keep it in the background until explicitly stopped and removed; I only want it running temporarily in this case.)

You can use an overlay network like Weave Net, which will assign a unique IP address to each container and implicitly expose all the ports to every container that is part of the network.
Weave also provides host network integration. It is disabled by default, but if you want to also access the container IP addresses (and all their ports) from the host, you can simply run weave expose.

Full disclosure: I work at Weaveworks.
Here's what I would do:
There is a handy HAProxy wrapper.
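The invocation looks something like this (the image name and the uppercase placeholders are from memory, so treat the details as assumptions and check the project's README):

```shell
docker run -it --rm -p LOCALPORT:PROXYPORT \
  --link TARGET_CONTAINER:backend \
  -e "BACKEND_HOST=backend" -e "BACKEND_PORT=TARGETPORT" \
  demandbase/docker-tcp-proxy
```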
This creates an HAProxy to the target container. Easy peasy.