How to read files and stdout from a running Docker container

Posted 2019-01-21 01:34

Question:

How would I go about starting an application in my host machine in order to read files and stdout from a running docker container?

Essentially I want to do this:

docker start containerid   
./myapp // This app will *somehow* have access to the files and stdout generated by the container I just started.

How would I go about doing that? To be more specific about where I am trying to go with this: I want to read the logs and stdout of a Docker container and have them processed somewhere else.

I am also willing to create another docker container which can read files and stdout from another container, but I don't know if that's possible.

Answer 1:

The stdout of the process started by the Docker container is available through the docker logs command (use -f to keep following it). Another option is to stream the logs directly through the Docker Remote API.
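For example (containerid is a placeholder for your container's name or ID), following stdout from the host, or streaming it over the Docker socket, looks roughly like this; the API call is a sketch and the exact URL may depend on your Engine API version:

docker logs -f containerid

curl --unix-socket /var/run/docker.sock "http://localhost/containers/containerid/logs?stdout=1&stderr=1&follow=1"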

For accessing log files (only if you must; consider logging to stdout or another standard solution such as syslogd), your only real-time option is to configure a volume (as Marcus Hughes suggests) so the logs are stored outside the container and are available for processing from the host or from another container.
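For instance, assuming (hypothetically) the application writes to /var/log/myapp inside the container, a bind-mounted volume makes those files readable on the host in real time; myimage, /host/myapp-logs and mylogfile.log are placeholders for your own image and paths:

docker run -d -v /host/myapp-logs:/var/log/myapp myimage

tail -f /host/myapp-logs/mylogfile.log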

If you do not need real-time access to the logs, you can export the container's filesystem (as a tar archive) with docker export.
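For example, dumping the whole container filesystem to a tar archive and then pulling a log file out of it (the path inside the archive is hypothetical) could look like:

docker export containerid -o containerid.tar

tar -xf containerid.tar var/log/myapp/mylogfile.log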



Answer 2:

To view the stdout, you can start the docker container with -i. This of course does not enable you to leave the started process and explore the container.

docker start -i containerid

Alternatively you can view the filesystem of the container at

/var/lib/docker/containers/containerid/root/

However, neither of these is ideal. If you want to view logs or any persistent storage, the correct way is to attach a volume with the -v switch when you use docker run. This means you can inspect the log files either on the host, or by mounting them into another container and inspecting them there.
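A rough sketch of both options, assuming the application writes its logs to /logs inside the container (the image name, host path and log file name are placeholders; the last command inspects the same logs from a second container instead of the host):

docker run -d --name myapp -v /apps/myapp/logs:/logs myimage

tail -f /apps/myapp/logs/mylogfile.log

docker run --rm -v /apps/myapp/logs:/logs:ro busybox tail -f /logs/mylogfile.log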



Answer 3:

You can view the filesystem of the container at

/var/lib/docker/devicemapper/mnt/$CONTAINER_ID/rootfs/

and you can just

tail -f mylogfile.log


Answer 4:

Sharing files between a Docker container and the host system, or between separate containers, is best accomplished using volumes.
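For example, with a named volume (or --volumes-from on older Docker versions) the application container and a separate processing container can see the same files; the image and path names below are placeholders:

docker volume create myapp-logs

docker run -d --name myapp -v myapp-logs:/logs myimage

docker run --rm -v myapp-logs:/logs:ro busybox ls /logs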

Having your app running in another container is probably your best solution, since it ensures that your whole application is well isolated and easily deployed. What you're trying to do sounds very close to the setup described in this excellent blog post; take a look!



Answer 5:

A bit late, but this is what I'm doing with journald. It's pretty powerful.

You need to be running your docker containers on an OS with systemd-journald.

docker run -d --log-driver=journald myapp

This pipes everything into the host's journald, which takes care of things like log pruning and storage format, and gives you some cool options for viewing the logs:

journalctl CONTAINER_NAME=myapp -f

which will feed it to your console as it is logged,

journalctl CONTAINER_NAME=myapp > output.log

which gives you the whole lot in a file to take away, or

journalctl CONTAINER_NAME=myapp --since=17:45

which shows only the entries logged since 17:45.

Plus you can still see the logs via docker logs .... if that's your preference.

No more > my.log or -v "/apps/myapp/logs:/logs", etc.