Is it best practice to daemonize a process within a Docker container?

Published 2019-04-13 23:12

Many best-practice guides emphasize making your process a daemon and having something watch it to restart it in case of failure. This made sense for a while. A specific example is Sidekiq:

bundle exec sidekiq -d

However, as I build with Docker I've found myself simply executing the command in the foreground: if the process stops or exits abruptly, the entire Docker container dies and a new one is automatically spun up, which accomplishes the entire point of daemonizing a process and having something watch it. (All STDOUT is sent to CloudWatch / Elasticsearch for monitoring.)
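
As a minimal sketch of that approach (the base image and app layout here are assumptions, not from the original post), the image just runs Sidekiq in the foreground and lets the orchestrator handle restarts:

    # Hypothetical Dockerfile: run Sidekiq in the foreground (no -d flag).
    # If the process exits, the container exits, and the orchestrator
    # (ECS, Kubernetes, etc.) starts a replacement.
    FROM ruby:2.6
    WORKDIR /app
    COPY Gemfile Gemfile.lock ./
    RUN bundle install
    COPY . .
    CMD ["bundle", "exec", "sidekiq"]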

I feel like this also tends to reinforce the idea of a single process per Docker container; daemonizing a process would, in my opinion, encourage a violation of that general standard.

Is there any best-practice documentation on this, even if you're running only a single process within the container?

Tags: docker
4 answers
啃猪蹄的小仙女
#2 · 2019-04-13 23:31

There are multiple process supervisors that can take a foreground process (or several of them), run them monitored, and restart them on failure (or exit the container).

One is runit (http://smarden.org/runit/), which I have not used myself.

My choice is s6 (http://skarnet.org/software/s6/). Someone has already built a container envelope for it, named s6-overlay (https://github.com/just-containers/s6-overlay), which is what I usually use if/when I need to have a user-space process run as a daemon. It also has facilities to do prep work on container start, change permissions, and more, at runtime.
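
As a rough sketch of how s6-overlay is typically wired into an image (the release version, base image, and service name below are assumptions; check the s6-overlay README for current install instructions):

    FROM alpine:3.12
    # Unpack the s6-overlay init system into the image root
    # (v2-style gzip tarball; newer releases may package differently)
    ADD https://github.com/just-containers/s6-overlay/releases/download/v2.2.0.3/s6-overlay-amd64.tar.gz /tmp/
    RUN tar xzf /tmp/s6-overlay-amd64.tar.gz -C / && rm /tmp/s6-overlay-amd64.tar.gz
    # Each directory under /etc/services.d defines a supervised service;
    # s6 runs its "run" script in the foreground and restarts it on failure
    COPY myapp/run /etc/services.d/myapp/run
    # /init is the s6-overlay entrypoint that boots the supervision tree
    ENTRYPOINT ["/init"]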

Ridiculous、
#3 · 2019-04-13 23:39

You don't daemonize a process inside a container.

The -d is usually seen in the docker run -d command, which uses detached (not daemonized) mode, where the Docker container runs in the background, completely detached from your current shell.

For running multiple processes in a container, the foreground process would be a supervisor that manages the others.
See "Use of Supervisor in docker" (or the more recent docker run --init).

爷的心禁止访问
#4 · 2019-04-13 23:40

Some relevant 12 Factor App recommendations:

Website:

https://12factor.net/

Docker was open-sourced by a PaaS operator (dotCloud), so it's entirely possible the authors were influenced by this architectural recommendation, which would explain why Docker is designed to normally run a single process.

The thing to remember here is that a Docker container is not a virtual machine, although it's entirely possible to make it quack like one. In practice a Docker container is a jailed process running on the host server. Container orchestration engines like Kubernetes (or Mesos, or Docker Swarm mode) have features that will ensure containers stay running, replacing them should the need arise.

Remember my mention of duck vocalization? :-) If you want your container to run multiple processes, it's possible to run a supervisor process that keeps everything healthy and running inside (a container exits when its main process stops):

https://docs.docker.com/engine/admin/using_supervisord/
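
A minimal sketch of that supervisord setup (the program names and commands below are illustrative, not taken from the linked page verbatim):

    # supervisord.conf: supervisord itself must stay in the foreground,
    # since it is the container's main process
    [supervisord]
    nodaemon=true

    # Each [program:...] section is a process supervisord starts,
    # monitors, and restarts on failure
    [program:worker]
    command=bundle exec sidekiq

    [program:web]
    command=bundle exec puma

The container's CMD would then launch supervisord itself, e.g. CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"].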

The ultimate expression of this VM envy would be LXD from Ubuntu, where an entire set of VM-like services gets bootstrapped within LXC containers:

https://www.ubuntu.com/cloud/lxd

In conclusion, is it a best practice? I think there is no clear answer. Personally I'd say no, for two reasons:

  1. I'm fixated on deploying 12 Factor compliant applications, so I'm married to the single-process model.
  2. If I need to run two processes on the same set of data, then in Kubernetes I can run containers within the same pod, which means Kubernetes manages the processes (running as separate containers with a common data volume); see the sketch below.
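
As a rough sketch of that second pattern (the pod name and images are hypothetical):

    # Hypothetical pod: two containers sharing an emptyDir volume,
    # each supervised and restarted by Kubernetes independently
    apiVersion: v1
    kind: Pod
    metadata:
      name: two-process-pod
    spec:
      volumes:
        - name: shared-data
          emptyDir: {}
      containers:
        - name: app
          image: example/app:latest
          volumeMounts:
            - name: shared-data
              mountPath: /data
        - name: worker
          image: example/worker:latest
          volumeMounts:
            - name: shared-data
              mountPath: /data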

Clearly my reasons are implementation specific.

疯言疯语
#5 · 2019-04-13 23:49

tl;dr: I can't find a best-practices document that relates directly to this for Docker, but I agree with you.

The only "Best Practices" guidance for Docker I could find was on Docker's own site, which states that containers should run one process. In my mind, that means foregrounded processes as well. So basically, I've drawn the same conclusion as you. (You've probably read that too, but this is for anyone else reading this.)

Honestly, I think we are still in (relatively) new territory with best practices for Docker. Anecdotally, it has been a best practice in the organizations I've worked with. The number of times I've felt more satisfied with a foregrounded process has been significantly greater than the times I've said to myself, "Boy, I sure wish I'd backgrounded that one." In fact, I don't think I've ever said that.

The only exception I can think of is when you are trying to evaluate software and need a quick and dirty way to ship infrastructure off to someone, e.g.: "Hey, there's this new thing called a LAMP stack I just heard of; here's a Docker container that has all the components for you to play around with." Again, though, that's an outlier, and I would shudder if something like that ever made it to production or even any sort of serious development environment.

Additionally, it certainly forces a microservices-style architecture, which I think is ultimately a good thing.
