Currently we are redirecting all application logs from multiple containers to stdout and collecting /var/log/messages via rsyslog on the host into an ELK stack. All Docker container logs show up as docker/xxxxxxxx, so we can't tell which application a given log line belongs to. Is there a way to easily differentiate applications in the stdout logs from multiple containers?
Here is a script that tails all Docker containers.
Based on the answer by @nate, but a bit shorter. Tested on CentOS.
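A minimal sketch of such a script, prefixing each log line with its container name (container names are taken from docker ps, nothing else is assumed):

```bash
#!/usr/bin/env bash
# Sketch: follow the logs of every running container, prefixing each line
# with the container name so the applications can be told apart.
for name in $(docker ps --format '{{.Names}}'); do
    docker logs -f --tail=10 "$name" 2>&1 | sed -e "s/^/[$name] /" &
done
wait
```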
Have you looked into fluentd? It may be what you need.
(Instructions are for OS X but should work on Linux.)
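A minimal sketch of one way to wire fluentd up, assuming Ruby is available and using your-image as a placeholder for your application image; the forward input on port 24224 is the default target of Docker's fluentd logging driver:

```bash
# Install and start a local fluentd instance with a minimal config.
gem install fluentd
cat > fluent.conf <<'EOF'
<source>
  @type forward
  port 24224
</source>
<match docker.**>
  @type stdout        # swap in an Elasticsearch output to feed ELK
</match>
EOF
fluentd -c fluent.conf &

# Run containers with the fluentd logging driver, tagged by container name.
docker run -d \
  --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt tag="docker.{{.Name}}" \
  your-image
```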
There doesn't appear to be a way to do this with a single docker command; however, in bash you can run multiple commands at the same time, and with sed you can prefix each line with the container name. You will then see output from both containers at once, as in the sketch below.
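A minimal sketch, assuming two containers named web and db (substitute your own container names):

```bash
# Follow both containers at once; sed prefixes each line with its source.
docker logs -f web 2>&1 | sed -e 's/^/[web] /' &
docker logs -f db  2>&1 | sed -e 's/^/[db] /' &
```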
To tail all your containers at once:
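A sketch that loops over every running container (the name is looked up with docker inspect; nothing else is assumed):

```bash
# Follow the logs of all running containers in the background, each line
# prefixed with its container name.
for id in $(docker ps -q); do
    name=$(docker inspect --format '{{.Name}}' "$id" | sed 's|^/||')
    docker logs -f "$id" 2>&1 | sed -e "s/^/[$name] /" &
done
```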
And to stop them, run fg and then press Ctrl+C for each container.

Update: Thanks to @Flo-Woo for Ubuntu 16.04 support.
Why are you relying on /var/log/messages for your application logs? In my opinion, your application logs should be independent.

Say you have a Java, Ruby, Python, Node, or Go app (whatever it is); you can write the logs inside the container to something like /var/log/myapp/myapp.log. Then run your log forwarder inside the container to ship everything under /var/log/myapp/myapp.log to ELK, for example as in the sketch below.
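A minimal sketch of that layout as a container entrypoint, where myapp and run-log-shipper are hypothetical placeholders for your application and your forwarder (Beaver, log-courier, etc.):

```bash
#!/usr/bin/env bash
# Hypothetical entrypoint: the app writes to its own log file, and a
# forwarder ships that file to ELK.
mkdir -p /var/log/myapp
/usr/local/bin/myapp >> /var/log/myapp/myapp.log 2>&1 &   # placeholder app
exec run-log-shipper /var/log/myapp/myapp.log             # placeholder shipper
```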
Generally the shipper will report the hostname as your container ID, based on the HOSTNAME environment variable; see the example below. You can also use something like Beaver or log-courier to ship your logs.
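As a rough illustration (busybox is just a throwaway image here), HOSTNAME inside a container defaults to the short container ID, which is what the shipper ends up reporting:

```bash
docker run --rm busybox env | grep HOSTNAME
# HOSTNAME=3f4e8a1b2c5d   <- example value: the short container ID
```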
You can rotate your logs and get rid of old logs if concerned about disk space.
So if you want to use the docker logs command to capture STDOUT and STDERR, you are going to have your application write something to the log that identifies the container/application (the container could be the hostname again). But you can also redirect to /var/log/app/application.log on the host machine, something like the sketch below. I don't think there's any other way...
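A minimal sketch of the host-mount approach, with your-image standing in for your application image:

```bash
# Mount a host directory into the container so the application's log file
# lands on the host at /var/log/app/application.log.
docker run -d \
  -v /var/log/app:/var/log/app \
  your-image
# Whatever the app writes to /var/log/app/application.log inside the
# container is now visible to rsyslog/Logstash on the host.
```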
You could also switch to Fluentd instead of Logstash as another option.