I'm developing a distributed system and I'm considering the following situation: my application runs in Docker on host A, and I want to call an API of another service that runs directly on physical host B (without Docker). Can I do that by calling an IP or DNS address?

Another situation related to the problem above:

Locally I develop the system with docker-compose and define the services there: ServiceA, ServiceB, and so on. If ServiceA has to call ServiceB on port 8080, I call `http://ServiceB:8080/` and it works fine. In production, each service is supposed to run on a different host with a different IP. So is it a good approach to run each service on a different host and call ServiceB from ServiceA via `http://<IP_of_ServiceB>:8080` instead of using the service name?
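(For reference, a minimal sketch of the local setup described above; the image names are placeholders. Inside the compose network, ServiceA reaches ServiceB by service name, and no ports need to be published for that internal call.)

```yaml
version: "3.8"
services:
  ServiceA:
    image: my-org/service-a   # placeholder image
    depends_on:
      - ServiceB
  ServiceB:
    image: my-org/service-b   # placeholder image
    # ServiceB listens on 8080 inside the container; ServiceA calls
    # http://ServiceB:8080/ over the default compose network.
```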
One infrastructure piece you may find useful is a service registry that knows which hosts are running which services. These generally provide DNS services, so you can refer to a service by its "hostname" and have it routed to the actual host (or hosts) running the service. The idea here is that you refer to both Docker and non-Docker services by a (possibly artificial) hostname, and the infrastructure layer routes it to the correct hosts. Except for the Kubernetes option, these don't require special networking setup beyond pointing at the right DNS server.
Three specific examples I've used before:
On AWS (if you're using/paying for that already), you can set up a load balancer pointing at every node, with a health check; if a node happens to be serving (for instance) port 9123 then the load balancer's port 80 will route there. Then you can set up a DNS name that points at the load balancer. For each service, launch it on a specific known port on as many nodes as you care to, and create a load balancer and DNS name for each. (You can also create DNS CNAME records that are aliases for external service hosts.) This happens to be a standard setup for ECS but isn't tied to that service specifically; in fact, if you can provide your own load balancer and name server, you can use this approach anywhere.
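As a sketch of just the DNS piece (every name here is made up), a Route 53 record in CloudFormation could map a stable per-service hostname onto the load balancer's generated DNS name:

```yaml
ServiceBDnsRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: example.internal.        # assumed private hosted zone
    Name: serviceb.example.internal.         # hostname clients will use
    Type: CNAME
    TTL: "60"
    ResourceRecords:
      - serviceb-lb-123456.us-east-1.elb.amazonaws.com  # the LB's DNS name
```

Clients then call `http://serviceb.example.internal/` and the load balancer routes to whichever nodes pass the health check.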
Hashicorp's Consul is intended (among other things) as a service registry. In the mode I've used it in the past, you'd install a Consul agent on each node, and on each node install a set of health checks that looks for every known service. Configure your Docker containers to point at the Consul DNS server. Host names like `servicename.service.consul` will resolve to the IP addresses of the host(s) running the service, and then you can refer to URLs like `http://servicename.service.consul:9123/` with the service's port. (I believe you can override a service's address in its definition to point it at an external server.)
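As a rough illustration (the service name, port, and health-check path are all assumptions), an agent-side Consul service definition registering such a service might look like:

```json
{
  "service": {
    "name": "servicename",
    "port": 9123,
    "check": {
      "http": "http://localhost:9123/health",
      "interval": "10s"
    }
  }
}
```

Once the agent loads this file, `servicename.service.consul` resolves to the nodes where the check is passing.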
In Kubernetes, there is a standard Service object that provides load balancing and DNS. You can configure the Service to listen on some port and route to some other port on some set of Pods (which in turn run Docker containers). You'd use host names like `servicename.default.svc.cluster.local`, but most of this also winds up in your default DNS search path, so just `http://servicename/` is often a fine URL. (You can configure an `ExternalName` service that's just a DNS record pointing outside the cluster.)
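A rough sketch of such a Service (the name, label, and ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: servicename
spec:
  selector:
    app: servicename    # matches the Pods that run the container
  ports:
    - port: 80          # port the Service (and its DNS name) listens on
      targetPort: 9123  # port the container actually serves
```

With this in place, other Pods in the `default` namespace can simply call `http://servicename/`.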
For the first question: yes, just make sure the ports are exposed and reachable from the other host.
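A minimal sketch, assuming the API inside the container listens on port 8080: publish the port on host A, and host B can then call `http://<IP_of_host_A>:8080/` directly.

```yaml
services:
  ServiceA:
    image: my-org/service-a   # placeholder image
    ports:
      - "8080:8080"           # host port : container port
```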
For the second question: you can still do that (write down the IP of host B in an environment variable or similar). Or you may want to consider using Docker Swarm to deploy your production stack. Combined with an overlay network, which makes your two hosts act like one, you can keep calling URLs like `http://ServiceB:3000/`, as in the stack file sketched below.
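A hypothetical stack file for `docker stack deploy`, assuming a two-node swarm; the overlay network spans both hosts, so ServiceA still reaches ServiceB by name no matter which node runs which container:

```yaml
version: "3.8"
services:
  ServiceA:
    image: my-org/service-a   # placeholder image
    networks:
      - appnet
  ServiceB:
    image: my-org/service-b   # placeholder image; listens on 3000 inside
    networks:
      - appnet
networks:
  appnet:
    driver: overlay           # spans all swarm nodes
```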