Background
My environment: Java, Play2, MySQL
I've written three stateless RESTful microservices in Play2: /S1, /S2, /S3.
S1 consumes data from S2 and S3. So when a user hits /S1, that service asynchronously calls /S2 and /S3, merges the data, and returns the final JSON output. Side note: the services will eventually be shipped as Docker images.
For testing in the development environment, I run /s1, /s2, /s3 on ports 9000, 9001, and 9002 respectively, picking the port numbers up from a config file. I hit the services and everything works fine. But there must be a better way to set up the test environment on my developer box, correct? For example, what if I want to run 20 services?
With that said, in production they will be called as mydomain.com/s1, mydomain.com/s2, mydomain.com/s3, etc. I want to accomplish the same thing on my development box; I imagine there's some reverse proxying involved.
Question
So the question is: how do I call /S2 and /S3 from within S1 without specifying or using port numbers in the development environment? How are people testing microservices on their local machines?
Extra Bonus
Knowing that I'll be shipping my services as Docker images, how do I accomplish the same thing with Docker containers (each container running one service)?
The easiest way (IMO) is to set up your development environment to mirror your production environment as closely as possible. If you want your production application to run 20 microservices, each in a separate container, then do the same on your development machine. That way, when you deploy to production, you don't have to switch from using ports to using hostnames.
The easiest way to set up a large set of microservices in a bunch of different containers is probably with Fig or with Docker's upcoming integrated orchestration tools. Since we don't have all the details on what's coming, I'll use Fig. This is a fig.yml file for a production server:
```yml
application:
  image: application-image
  links:
    - service1:service1
    - service2:service2
    - service3:service3
  ...

service1:
  image: service1-image
  ...

service2:
  image: service2-image
  ...

service3:
  image: service3-image
  ...
```
This abbreviated fig.yml file will set up links between the application and all the services, so that in your code you can refer to them via the hostnames service1, service2, etc.
For development purposes, there's a lot more that needs to go in here: for each of the services you'll probably want to mount a directory in which to edit the code, and you may want to expose some ports so you can test the services directly. But at its core, the development environment is the same as the production environment.
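As a sketch, a development fig.yml might extend a service's entry like this (the mount path and the choice to publish port 9001 are assumptions for illustration):

```yml
service1:
  image: service1-image
  volumes:
    - ./service1:/app      # mount local source for live editing (assumed path)
  ports:
    - "9001:9000"          # publish only if you want to hit the service directly
```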
It does sound like a lot, but a tool like Fig makes it really easy to configure and run your application. If you don't want to use Fig, you can do the same with plain docker commands; the key is the links between containers. I'd probably create a script to set things up for both the production and development environments.
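Such a script could just loop over the service names and accumulate the --link flags, which scales painlessly to 20 services. This is a dry-run sketch (the image and container names are assumptions, and the echo only prints the commands; drop it to actually run them):

```shell
#!/bin/sh
# Dry-run sketch: print the docker commands that would start three linked
# services plus the application container. Names are illustrative only.
SERVICES="service1 service2 service3"

LINKS=""
for s in $SERVICES; do
  # each service runs detached on its own port 9000; no -p mapping needed
  echo "docker run -d --name $s ${s}-image"
  LINKS="$LINKS --link $s:$s"
done

# the application gets a link (and thus a hostname) for every service
echo "docker run -d --name application$LINKS application-image"
```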
"Example - What if I want to run 20 services etc."
Docker will create an entry in each container's /etc/hosts file for every linked container, using its alias. So if you link a lot of containers, you can address them simply by their alias names. Do not map their ports to public ports with -p 9000:9000; that way every service can listen on port 9000 inside its own container and be looked up via /etc/hosts.
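To illustrate the mechanism with simulated data (the addresses below are invented; Docker assigns real ones at run time), the link entries look like plain hosts-file lines, and resolving an alias amounts to matching the name column:

```shell
# Simulated /etc/hosts entries as Docker writes them for linked containers.
# The addresses are made up for illustration.
HOSTS="172.17.0.5 service1
172.17.0.6 service2
172.17.0.7 service3"

# Inside the application container, http://service2:9000 resolves through
# this file; the lookup is just a match on the alias column:
echo "$HOSTS" | awk '$2 == "service2" { print $1 }'
```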
"how do I accomplish the same thing..."
This is an open question; here is some good reading on the topic. SkyDock and SkyDNS get you most of the way toward service discovery, and Weave gives you easy-to-use networking between remote Docker containers. I haven't seen a better end-to-end solution yet, although there may be some out there.