I have a web app running completely locally on my MacBook.
The app has a front end (Angular/JavaScript) and a back end (Python/Django) that implements a RESTful API.
I have Dockerized the back end so that it is completely self-contained in a Docker container and exposes port 8000, which I map to port 4026 on the host.
Now I need to Dockerize the front end. But if these two Docker containers are running on my localhost, how can I get the FE to send HTTP requests to the BE? The FE container won't know about anything that exists outside of it, right?
This is how I run the FE:
$ http-server
Starting up http-server, serving ./
Available on:
http://127.0.0.1:8080
http://192.168.1.16:8080
Hit CTRL-C to stop the server
Please provide references explaining how I can achieve this.
The way to do this today is Docker networking, which another answer briefly mentioned.
The short version is that you can run docker network ls to get a listing of your networks. By default, you should have one called bridge. You can either create a new one or use this one by passing --net=bridge when creating your container. From there, containers launched on the same network can communicate with each other over their exposed ports.
If you use Docker Compose, as has been mentioned, it will create a bridge network for you when you run docker-compose up, with a name matching your project's folder plus _default appended. Each service defined in the Compose file is launched on this network automatically.
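For illustration, a minimal docker-compose.yml along those lines might look like this (the service and image names are assumptions):

```yaml
# Hypothetical Compose file; both services join the project's default network.
services:
  backend:
    image: my-backend          # assumed image name
    ports:
      - "4026:8000"            # host 4026 -> container 8000
  frontend:
    image: my-frontend         # assumed image name
    ports:
      - "8080:8080"
```

Inside that default network, the frontend container could reach the backend at http://backend:8000, using the service name as the hostname.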
With all that said, I'm guessing your frontend is a web server that just serves up the HTML/JS/CSS, and those pages access the backend service. If that's accurate, you don't really need container-to-container communication in this case anyway: both containers need to be exposed to the host, since the connections originate from the client system (the browser).
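In that scenario, the frontend code should target the backend through the host-mapped port (4026 in your case), not the container-internal port 8000. A minimal sketch of a hypothetical URL helper, assuming the backend is published on localhost:4026:

```javascript
// Hypothetical helper: build backend API URLs against the host-mapped port.
// The browser talks to localhost:4026 (the host mapping), not to port 8000
// inside the container.
const API_BASE = "http://localhost:4026";

function apiUrl(path) {
  // Ensure exactly one slash between the base URL and the path.
  return API_BASE + (path.startsWith("/") ? path : "/" + path);
}

// Usage in the Angular/JS frontend would then look like:
//   fetch(apiUrl("/api/items")).then(res => res.json())
```

Keeping the base URL in one place like this also makes it easy to swap in a different host or port per environment later.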
There are multiple ways to do this, and the simplest is to use Docker Compose. Docker Compose lets you define and run multiple services together on a shared network.
If you are not using Docker Compose and are running individual containers, publish each service's port to the host and then reach the services through the host, like:
docker run -p 3306:3306 mysql
docker run -p 8088:80 nginx
Now you can reach them from the host as:
hostip:3306 (MySQL, which speaks its own protocol rather than HTTP)
http://hostip:8088 (nginx)
and so on. Any container, or the host itself, can communicate with another container through the host IP and the published port.
I think the most elegant solution would be to create a software-defined network, but for this simple example it may be a bit overkill. Nevertheless, when you think about running things in production, maybe even on different servers, that is the way to go.
Until then, you may opt to link the containers (note that --link is a legacy Docker feature, but it works for a simple setup like this). E.g., if you used to start your frontend container like this:
$ docker run -p 8080:8080 --name frontend my-frontend
You could now do it like this:
$ docker run -p 8080:8080 --name frontend --link backend:backend my-frontend
The trick here is to also start the backend container and give it a name using the --name flag. Then, you can refer to this name in the --link flag and access the backend from within the frontend container using its name, e.g. http://backend:8000 (--link takes care of automatically adding the linked container to the /etc/hosts file).
This way you do not have to rely on a specific IP address, be it the host's one or whatever.
Hope this helps :-)