Run Redis in Marathon (Mesos) under one URL

Posted 2019-01-12 11:11

I have a problem with exposing a Redis server under one IP address in Mesos/Marathon.

My steps

  • I created my own Dockerfile which includes my own redis.conf.
  • I built my own Docker image and pushed it to a Docker repository (name: arekmax/redis-instancje).
  • In Marathon I start my Docker image; Redis starts and works properly (see the screenshot of my Redis instance in Marathon). Redis failover in Mesos also works properly: when I shut down the 192.168.18.21 server, Marathon starts Redis on the second or third node. (A sketch of this app definition follows below.)
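For reference, a rough sketch of the kind of Marathon app definition described above, submitted through Marathon's REST API. The image name matches the question; the app id, resources, health check, and Marathon endpoint are assumptions to adapt to your cluster.

```python
import requests

# Assumed Marathon endpoint; replace with your own master/proxy address.
MARATHON_URL = "http://192.168.18.10:8080"

app = {
    "id": "/redis-instancje",  # hypothetical app id
    "cpus": 0.5,
    "mem": 256,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "arekmax/redis-instancje",
            "network": "BRIDGE",
            "portMappings": [
                # containerPort 6379 is Redis' default; hostPort 0 lets Mesos
                # pick a random host port, which is why the task ends up on
                # addresses like 192.168.18.21:31822.
                {"containerPort": 6379, "hostPort": 0, "protocol": "tcp"}
            ],
        },
    },
    "healthChecks": [
        {"protocol": "TCP", "portIndex": 0,
         "gracePeriodSeconds": 30, "intervalSeconds": 10}
    ],
}

resp = requests.post(f"{MARATHON_URL}/v2/apps", json=app, timeout=10)
resp.raise_for_status()
print(resp.json())
```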

Now I want to give my developers one IP address where they can reach the Redis server (I don't want to give them 192.168.18.21:31822 today and, after a failover, for example 192.168.18.22:23124). I need some kind of proxy that automatically tracks the current Redis IP and port.

I tried the bamboo project, but it only works properly for port 80. I don't know whether it is possible to use bamboo with a Redis server; I can't find information on how to redirect port 31822 (in my case the Redis port of the Docker container) to, for example, 192.168.18.10:6379 (192.168.18.10 being the address my developers would use for Redis).

Can anyone help me? What is the best solution to this problem? What kind of proxy server/instance/application should I use?

2 Answers
Ridiculous · 2019-01-12 11:21

You can use marathon-lb for example, which will abstract away the ip:port lookup. Also, you could use Mesos DNS to resolve service names to ip:port mappings.
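For illustration, a rough sketch of the marathon-lb route under a few assumptions: the app id is /redis-instancje, Marathon listens on the address below, and your marathon-lb version honours the HAPROXY_GROUP label and servicePort conventions (double-check against its documentation). The idea is to pin a fixed servicePort so developers always connect to the load balancer's address instead of the task's random host port.

```python
import requests

MARATHON_URL = "http://192.168.18.10:8080"  # assumed Marathon endpoint

# Hypothetical app update: the HAPROXY_GROUP label tells marathon-lb to pick
# the app up, and servicePort 6379 is the fixed port HAProxy will listen on.
# Non-HTTP services such as Redis are proxied as plain TCP.
update = {
    "labels": {"HAPROXY_GROUP": "external"},
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "arekmax/redis-instancje",
            "network": "BRIDGE",
            "portMappings": [
                {"containerPort": 6379, "hostPort": 0,
                 "servicePort": 6379, "protocol": "tcp"}
            ],
        },
    },
}

resp = requests.put(f"{MARATHON_URL}/v2/apps/redis-instancje", json=update, timeout=10)
resp.raise_for_status()

# Developers now connect to <marathon-lb host>:6379 and never see the
# dynamically assigned host port of the Redis task.
```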

再贱就再见 · 2019-01-12 11:35

There are dozens of solutions for service discovery in a Mesos environment.

We can divide them into 3 groups by the way clients find services:

  1. Proxy based
    • A proxy sits between the clients and the service, e.g. HAProxy (marathon-lb is based on it), fabio, traefik, or nixy, and takes care of load balancing your services based on HTTP path, header, domain, etc. This solution is the easiest to implement and gives the opportunity to tune load balancing per request. On the other hand, it adds an additional hop, and the proxy is effectively a man-in-the-middle. From a client's point of view it all collapses to one stable address (see the sketch below the diagram).

(diagram: proxy-based discovery)
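From the client's side the proxy approach collapses to a single stable address. A trivial sketch, assuming a TCP proxy (e.g. the HAProxy managed by marathon-lb) listens on 192.168.18.10:6379 as in the question:

```python
import redis  # pip install redis

# The proxy address stays fixed; the Redis task behind it can move between
# Mesos agents after a failover without clients noticing.
r = redis.Redis(host="192.168.18.10", port=6379)
r.set("greeting", "hello")
print(r.get("greeting"))
```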

  2. DNS-like (ask a special, well-known endpoint for the location of a service)
    • Software Defined Networking: with SDN we can use an IP per container, so each container is exposed with a unique IP and presents its services on the default ports (80 for HTTP, 443 for HTTPS, and so on). This is the most advanced and relatively new technique, although it uses plain old DNS to find the service IP. It can be harder to introduce than a proxy, but it works with any type of traffic.
    • Service records: every container is registered in a global DNS, and the client obtains its IP and port using DNS SRV queries. Consul and Mesos DNS provide this type of DNS server. Some other protocols are based on this idea as well (take a look at Bonjour). It tries to get the best of both SDN and proxy: it's relatively easy to set up and it's protocol agnostic (see the SRV lookup sketch below the diagram).

(diagram: DNS-based discovery)
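A minimal sketch of the service-record approach with Mesos DNS, assuming Mesos DNS is deployed in the cluster and the Marathon app is called /redis-instancje; the SRV name follows the usual _<app>._tcp.marathon.mesos pattern, but verify the exact name your installation generates:

```python
import dns.resolver  # pip install dnspython
import redis         # pip install redis

# SRV records carry both the host and the dynamically assigned port of the
# task, so the client can find Redis without a proxy in between.
SRV_NAME = "_redis-instancje._tcp.marathon.mesos"

answers = dns.resolver.resolve(SRV_NAME, "SRV")
record = next(iter(answers))
host = str(record.target).rstrip(".")
port = record.port

print(f"Redis is currently at {host}:{port}")
r = redis.Redis(host=host, port=port)
r.ping()
```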

  3. Other
    • Anything that doesn't fit into the other types, e.g. an in-house developed solution, etcd, or Eureka. It can be deeply tied to the infrastructure and the application, providing some optimizations. It's worth mentioning that there have been attempts to build DHT-based discovery services - Meta Service Discovery.

You can find more details about the tools that can be used for building a discovery service here.

We can divide Discovery Services by the way they are populated with service entries:

  1. Polling
    • Mesos/Marathon is periodically queried about its state. This is how Mesos DNS works. It is the easiest method, but it introduces a significant delay between a service starting/stopping and the change reaching service discovery. This can be minimized with health checking.
  2. Event based
    • Marathon has the ability to push events with information about state changes (there is an initiative to include an event bus in Mesos too; see the design doc). This is how marathon-lb works. A similar job is done by marathon-consul, but the data is passed to Consul.
  3. In app/container
    • The above solutions are asynchronous, so there can be a time span when your service discovery state is stale, e.g. a service has started but is not yet ready to serve requests, or a service has just died. Even with health checks we cannot assume everything happens with zero downtime. The solution to minimize downtime is to let the application register itself when it is ready to serve requests and deregister before it stops (aka graceful shutdown); a sketch follows below.
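A minimal sketch of such in-app registration against Consul's HTTP agent API (/v1/agent/service/register and /v1/agent/service/deregister); the service name, id, address, port, and local agent URL are assumptions, and a real service would also hook deregistration into its signal handling for a clean shutdown:

```python
import atexit
import requests

CONSUL_AGENT = "http://127.0.0.1:8500"  # assumed local Consul agent
SERVICE_ID = "redis-instancje-1"        # hypothetical unique task id

def register(host: str, port: int) -> None:
    # Register only once the service is actually ready to serve requests.
    payload = {
        "ID": SERVICE_ID,
        "Name": "redis-instancje",
        "Address": host,
        "Port": port,
        "Check": {"TCP": f"{host}:{port}", "Interval": "10s"},
    }
    requests.put(f"{CONSUL_AGENT}/v1/agent/service/register",
                 json=payload, timeout=5).raise_for_status()

def deregister() -> None:
    # Graceful shutdown: remove the entry before the service stops.
    requests.put(f"{CONSUL_AGENT}/v1/agent/service/deregister/{SERVICE_ID}",
                 timeout=5).raise_for_status()

if __name__ == "__main__":
    register("192.168.18.21", 31822)
    atexit.register(deregister)
    # ... serve requests here ...
```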