Logging solution for multiple containers running on the same host

Published 2020-06-03 03:05

Question:

Currently we redirect all application logs to stdout from multiple containers, collect /var/log/messages on the host via rsyslog, and ship it to an ELK stack.

All Docker container logs show up as docker/xxxxxxxx, so we can't tell which application a given log line belongs to. Is there a way to easily differentiate applications in the stdout logs coming from multiple containers?

Answer 1:

(Instructions are for OS X but should also work on Linux.)

There doesn't appear to be a way to do this with a single docker command; however, in bash you can run multiple commands at the same time, and with sed you can prefix each line with the container name.

docker logs -f --tail=30 container1 | sed -e 's/^/[-- containerA1 --]/' &
docker logs -f --tail=30 container2 | sed -e 's/^/[-- containerM2 --]/' &

And you will see output from both containers at the same time.

[-- containerA1 --] :: logging line
[-- containerA1 --] :: logging line
[-- containerM2 --] :: logging line
[-- containerM2 --] :: logging line
[-- containerA1 --] :: logging line
[-- containerA1 --] :: logging line
[-- containerM2 --] :: logging line
[-- containerM2 --] :: logging line

To tail all your containers at once:

#!/bin/bash

names=$(docker ps --format "{{.Names}}")
echo "tailing $names"

while read -r name
do
  # eval to show container name in jobs list
  eval "docker logs -f --tail=5 \"$name\" | sed -e \"s/^/[-- $name --] /\" &"
  # For Ubuntu 16.04
  #eval "docker logs -f --tail=5 \"$name\" |& sed -e \"s/^/[-- $name --] /\" &"
done <<< "$names"

function _exit {
  echo
  echo "Stopping tails $(jobs -p | tr '\n' ' ')"
  echo "..."

  # Using `sh -c` so that if some have exited, that error will
  # not prevent further tails from being killed.
  jobs -p | tr '\n' ' ' | xargs -I % sh -c "kill % || true"

  echo "Done"
}

# On ctrl+c, kill all tails started by this script.
trap _exit EXIT

# For Ubuntu 16.04
#trap _exit INT

# Don't exit this script until ctrl+c or all tails exit.
wait

To stop the manually backgrounded tails from the first example, run fg and then press ctrl+c for each container.

Update: Thanks to @Flo-Woo for Ubuntu 16.04 support



Answer 2:

Here is a script that tails all Docker containers.

It is based on the answer by @nate, but a bit shorter. Tested on CentOS.

#!/bin/bash

# Kill all backgrounded "docker logs" processes when the script exits.
function _exit {
  kill $(jobs -p)
}

trap _exit EXIT

# Tail every running container, prefixing each line with the container name.
for name in $(docker ps --format "{{.Names}}"); do
  eval "docker logs -f --tail=5 \"$name\" | sed -e \"s/^/[-- $name --] /\" &";
done

# Block until ctrl+c (or until all tails exit).
wait


Answer 3:

Have you looked into fluentd? It may be what you need.
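
For instance, Docker's built-in fluentd logging driver can tag each container's log stream so the application stays identifiable downstream. A minimal sketch, assuming a Fluentd daemon listening on localhost:24224 (the address, tag, and image are placeholders):

# Ship this container's stdout/stderr to a local Fluentd daemon,
# tagging every record with the container name so it can be told
# apart from other applications once it reaches ELK.
docker run -d \
  --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt tag="docker.{{.Name}}" \
  nginx

The tag template ({{.Name}}, {{.ID}}, and so on) is what lets you tell containers apart after the records land in Elasticsearch.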



Answer 4:

Why are you relying on /var/log/messages for your application logs? In my opinion, your application logs should be kept separate.

Say you have a Java, Ruby, Python, Node, or Go app (whatever it may be); you can write the logs inside the container to something like /var/log/myapp/myapp.log. Then run your log forwarder in the container to ship everything under /var/log/myapp/myapp.log to ELK.
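
A minimal sketch of that layout, assuming a hypothetical myapp-image and host path (adjust both to your own setup):

# Write application logs to /var/log/myapp inside the container and
# bind-mount the directory to a per-application path on the host,
# so whatever ships the logs can distinguish applications by path.
docker run -d \
  --name myapp \
  -v /var/log/containers/myapp:/var/log/myapp \
  myapp-image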

Generally the shipper will report the container ID as the hostname, based on the HOSTNAME environment variable. For example:

[user@1dfab5ea15cd ~]# env | grep HOSTNAME
HOSTNAME=1dfab5ea15cd
[user@1dfab5ea15cd ~]#

You can also use something like Beaver or log-courier to ship your logs.

You can rotate your logs and get rid of old logs if concerned about disk space.
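
For example, here is a hedged sketch of a logrotate rule for such a per-application log directory (the path and retention are assumptions, not part of the original answer):

# Hypothetical logrotate rule to keep disk usage bounded for the
# per-application log directory used above.
cat > /etc/logrotate.d/myapp <<'EOF'
/var/log/containers/myapp/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}
EOF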

So if you want to use the docker logs command and redirect to STDOUT and STDERR, you will need your application to write something to the log that identifies the container/application (the container could again be the hostname). You can then redirect to /var/log/app/application.log on the host machine. Something like:

containerid/<hostname>-application: INFO: <message>
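
As a rough illustration, a container entrypoint could prepend the hostname to each line before it reaches stdout (run-app and the prefix format are hypothetical):

#!/bin/sh
# Prefix every log line with the container hostname so the
# application/container can be identified once the lines reach ELK.
./run-app 2>&1 | sed -e "s/^/$(hostname)-application: /"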

I don't think there's any other way...

As another option, you can also switch to Fluentd instead of Logstash.