What is the point of using pm2 and Docker together?

Published 2020-05-15 13:39

We have been using pm2 quite successfully to run apps on our servers. We are currently moving to Docker, and we saw http://pm2.keymetrics.io/docs/usage/docker-pm2-nodejs/

But what is the point of actually using both together? Doesn't Docker provide everything pm2 does?

Tags: docker pm2
2 Answers
干净又极端
#2 · 2020-05-15 14:08

Update:

You may not be in favour of using pm2 inside Docker, but application requirements differ: sometimes you need to run two Node.js applications in one Docker container, for example a frontend and a backend in the same container, and in that case pm2 works better than other workarounds.

We now have pm2-runtime, which runs as the container's foreground process. Your application runs in the foreground under pm2, and you can expect the same behaviour as running without pm2.

So with pm2-runtime:

  • You can run multiple Node.js applications in one Docker container
  • The application runs in the foreground
  • You can integrate with Keymetrics
  • You can produce custom metrics
  • The container behaves the same as it would without pm2, with the advantages listed here
  • You can control restart behaviour (if a process crashes, pm2 restarts it automatically; if auto-restart is disabled, the container terminates)
  • In development (e.g. with mounted volumes) you do not need to restart the container, only the pm2 processes (pm2 restart all), which saves development time

For example:
FROM node:alpine
RUN npm install pm2 -g
CMD ["pm2-runtime", "app.js"]

Or, if you want to run multiple Node.js applications in the container, you can use a process.yml file:

FROM node:alpine
RUN npm install pm2 -g
CMD ["pm2-runtime", "process.yml"]

process.yml is an ecosystem file in YAML format; it allows the container to run multiple Node.js processes. Example:

apps:
  - script   : ./api.js
    name     : 'api-app'
    instances: 4
    exec_mode: cluster
  - script : ./worker.js
    name   : 'worker'
    watch  : true
    env    :
      NODE_ENV: development
    env_production:
      NODE_ENV: production
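The original answer does not show the worker.js referenced above, so here is a hypothetical sketch of it — the job shape and logic are invented for illustration; only the NODE_ENV handling reflects the env / env_production blocks in process.yml:

```javascript
// worker.js — hypothetical background worker matching the process.yml entry.
// pm2 sets NODE_ENV from the env / env_production blocks.

function processJob(job) {
  // Pretend work: double the payload.
  return { id: job.id, result: job.payload * 2 };
}

function run(jobs) {
  const env = process.env.NODE_ENV || 'development';
  const results = jobs.map(processJob);
  console.log(`[${env}] processed ${results.length} jobs`);
  return results;
}

run([{ id: 1, payload: 21 }, { id: 2, payload: 5 }]);

module.exports = { processJob, run };
```

With watch: true in process.yml, pm2 restarts this worker whenever its files change, which is what makes the no-container-restart development workflow possible.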

If you want to run with Keymetrics:

Keymetrics.io is a monitoring service built on top of PM2 that allows you to monitor and manage applications easily (logs, restarts, exception monitoring, and more). Once you have created a Bucket on Keymetrics, you will get a public and a secret key.


FROM node:alpine
RUN npm install pm2 -g
CMD ["pm2-runtime", "--public", "XXX", "--secret", "YYY", "process.yml"]

Disable auto-restart:

With this flag, the container is killed if the Node.js process stops due to an error or exception. Sometimes we do not want to auto-restart the process, but instead want to restart the whole container.

FROM node:alpine
RUN npm install pm2 -g
CMD ["pm2-runtime","app.js","--no-autorestart"]

Without pm2-runtime

As a rule of thumb, run only one process per container. Keeping this in mind, you start your process inside the container with node server.js, just as you would without Docker. But what happens if the Node.js server crashes? The container is killed, because the server is the container's primary (foreground) process, and when it goes down the container goes down with it. This is something you should avoid.
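One related detail worth noting: when Node.js runs as the container's primary process (PID 1), it receives Docker's stop signal (SIGTERM) directly and should handle it itself, whereas pm2 sends a stop signal (SIGINT) to the apps it manages. A minimal, illustrative sketch of such a handler — the file name and function are invented:

```javascript
// shutdown.js — sketch of handling stop signals when Node.js is PID 1.
// Docker sends SIGTERM on `docker stop`; pm2 sends SIGINT when stopping an app.

let shuttingDown = false;

function shutdown(signal) {
  if (shuttingDown) return false; // ignore repeated signals
  shuttingDown = true;
  console.log(`received ${signal}, closing gracefully`);
  // In a real server: stop accepting connections, flush work, then exit(0).
  return true;
}

process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));

module.exports = { shutdown };
```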

pm2 can help here. Here is how you can use pm2 and supervisord together to achieve that.

If you are also looking for an example, here are the Dockerfile and the required config files, using the lightweight Alpine base image (only a few megabytes).

FROM alpine:3.7
COPY supervisord.conf /etc/supervisord.conf
# install nodejs and supervisord (quote 'nodejs>=8' so the shell
# does not treat >= as an output redirection)
RUN apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/v3.7/main/ \
    --repository http://dl-cdn.alpinelinux.org/alpine/v3.7/community/ \
    sudo supervisor 'nodejs>=8'
RUN npm i pm2 -g
COPY pm2.conf /etc/supervisord.d/pm2.conf
# run supervisord in the foreground as the container's primary process
CMD ["supervisord", "-c", "/etc/supervisord.conf"]

supervisord.conf

[unix_http_server]
file = /tmp/supervisor.sock
chmod = 0777
chown = nobody:nogroup

[supervisord]
logfile = /tmp/supervisord.log
logfile_maxbytes = 50MB
logfile_backups = 10
loglevel = info
pidfile = /tmp/supervisord.pid
nodaemon = true
umask = 022
identifier = supervisor

[supervisorctl]
serverurl = unix:///tmp/supervisor.sock

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[include]
files = /etc/supervisord.d/*.conf

pm2.conf

[supervisord]
nodaemon=true

[program:pm2]
command=pm2 start pm2_processes.yml --no-daemon
startretries=5
Evening l夕情丶
#3 · 2020-05-15 14:12

Usually there is no point in using pm2 inside a Docker container.

Both pm2 and Docker are process managers: both can forward logs, restart crashed workers, and do many other things. If you run pm2 inside a Docker container, you will hide potential issues with your service, including at least the following:

1) If you run a single process per container with pm2, you will not gain much except increased memory consumption. Restarts can be handled by plain Docker with a restart policy, and other Docker-based environments (like ECS or Kubernetes) can do this as well.

2) If you run multiple processes, you make monitoring harder: CPU and memory metrics are no longer directly available to your enclosing environment.

3) Health-check requests for a single pm2 process will be distributed across workers, which is likely to hide unhealthy targets.

4) Worker crashes are hidden by pm2; you will hardly ever learn about them from your monitoring system (such as CloudWatch).

5) Load balancing becomes more complicated since you're virtually going to have multiple levels of load balancing.

Also, running multiple processes inside a Docker container contradicts Docker's philosophy of keeping a single process per container.
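To illustrate point 1, the restart behaviour pm2 provides can be expressed with a plain Docker restart policy instead; the service name and command in this compose sketch are hypothetical:

```yaml
# docker-compose.yml — Docker restarts the container itself, no pm2 needed
services:
  api:
    build: .
    command: ["node", "server.js"]   # one process per container
    restart: on-failure              # restart only when the process exits non-zero
```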

One scenario I can think of is when you have very limited control over your Docker environment; in that case, running pm2 may be the only way to control worker scheduling.
